article stringlengths 507 295k | abstract stringlengths 417 1.92k | category listlengths 1 6 |
|---|---|---|
# I. INTRODUCTION
Vulnerability management is a cornerstone of effective cyber defense, enabling organizations to prioritize and mitigate risks before attackers can exploit them. However, false positives (FPs) in vulnerability management predominantly stem from limitations in Common Platform Enumeration (CPE) [25] data utilized during the correlation process. This initial phase in vulnerability management matches deployed software against vulnerability databases maintained by NIST’s National Vulnerability Database (NVD) [26] and software vendors. Vulnerability scanners either extract granular software package data directly from vendor sources or rely on NIST’s CPE descriptions. While CPE aims to standardize software identification across vendors, empirical evidence suggests significant deficiencies in data accuracy and completeness [3, 9, 17, 19, 29].
Various open-source vulnerability management tools, such as OpenCVE, OSV, cve-search, Trivy, CVEdetails and OpenVAS, aim to improve vulnerability detection through database integration and search capabilities. Trivy [23] (container scanner) and OpenVAS [14] (network scanner) detect vulnerabilities by matching system components against NVD data. To enhance vulnerability data reliability, OSV [13], developed by Google, curates open-source vulnerability data using ecosystem-based identifiers (e.g., npm, pip) instead of CPE, improving data accuracy and validation. OpenCVE [6] synchronizes CVE data from sources like NVD, MITRE, and RedHat, while cve-search [1] imports CVE and CPE data into a local MongoDB database, supporting fast searches and ranking vulnerabilities by CPE names. CVEdetails [8] supplements some missing CPE details but restricts API access to paid users, limiting its availability for programmatic queries. Proprietary tools like Tenable and Fortinet lack transparency, making direct comparisons difficult.
Despite these efforts, existing tools struggle with CPE inconsistencies, both FPs and false negatives (FNs), and incomplete mappings, which hinder vulnerability retrieval and integration [31]. Solutions relying on keyword searches or static CPE-based matching fail to address system configuration dependencies [32]. Tools such as cve-search and OpenCVE streamline retrieval but lack capabilities to mitigate FPs or support context-aware matching. Their reliance on manual processes further limits scalability and practicality in largescale environments [24]. Meanwhile, the heterogeneous nature of software, hardware, and operating system (OS) configurations complicates the accurate mapping of vulnerabilities to affected assets [12, 30]. These structural limitations in CPE data representation frequently manifest as false positives in vulnerability detection systems, reducing the efficacy of automated security assessment protocols.
To address these challenges, we propose VulCPE, a framework that addresses these gaps by leveraging advanced techniques such as Named Entity Recognition (NER), Relation Extraction (RE), and graph-based modeling. Specifically, this work explores the following research questions:
RQ1: How do data inconsistencies in vulnerability databases affect retrieval accuracy?
RQ2: What role do complex system configurations play in determining vulnerability applicability?
RQ3: How can advanced techniques reduce false positives in vulnerability management to enhance cyber resilience?
We conducted a comprehensive analysis of the NVD/CPE and CVEdetails datasets to uncover prevalent inconsistency patterns. Our results show that $9 3 . 5 5 \%$ of NVD entries contain at least one valid $C P E$ string. However, $8 1 . 4 0 \%$ of all defined $C P E$ strings remain unused in the $N V D$ , indicating significant underutilization of available configuration identifiers. Additionally, $1 4 . 5 6 \%$ of $N V D$ entries rely on configuration-specific CPEs, which require parsing of logical AND/OR groupings. Naming inconsistencies were identified in $5 0 . 1 8 \%$ of vendor names used in CPEs within the official NVD database and
NVD CPE. 。 Vendor-Product-Version CVEDetails- d Dictionary ↓
Vulnerability Report Named Entity Recognition Post Processing
"Google Chrome before “Google Chrome" (PN-APP) Relation Extraction “Google" - “Chrome" Vendor: Google
8.0.552.237 and Chrome Os “before" (MOD) “8.0.552.237" (V) “Google Chrome"-"before 8.0.552.237” [°0.1.38.1°, "0.1.38.2\*.. Product: Chrome “Chrome OS"-"before 8.0.552.344" Version: 0.1.38.1
before 8.0.552.344 do no.t.. "Chrome OS" (PN-OS) "8.0.552.235"] ↓ System Configuration Query Generator Vul-Object Construction uCPEConstruction 香 Device: Cisco Router 品 Model: ISR 4331 CISCO:RoSXE93:1. CBn G System Operating System: IOS XE 16.9.3 VuIDB Graph-Based False Positive Filter {Applicable Vul-Objects JSON
in $4 7 . 0 7 \%$ of vendor names extracted from CVEdetails, highlighting the need for standardization to enhance data usability.
Figure 1 illustrates the VulCPE framework. VulCPE employs employs NER and RE models to extract structured entities (vendor, product, version) from vulnerability reports and resolve inconsistencies in naming and formatting. Extracted data is standardized into a unified Common Platform Enumeration (uCPE) schema, which provides a hierarchical and logical representation of configurations. Logical relationships (AND/OR) and dependency structures (e.g., application software running on or alongside an OS) are modeled as directed graphs, enabling context-aware matching of vulnerabilities to system configurations. The system constructs two distinct graphs: a hierarchical graph of vulnerable configurations derived from uCPEs and a system configuration graph representing the system under investigation (SUI). Graph traversal techniques are used to match these configurations, ensuring precise vulnerability applicability assessments. Inconsistencies between configurations are detected using subgraph similarity measures, further reducing FPs.
Experimental results demonstrate the efficacy of VulCPE in two key areas. First, NER and RE models achieve state-ofthe-art performance, with NER attaining a precision of 0.958 and recall of 0.975, and RE achieving a precision of 0.977 and recall of 0.914. Second, VulCPE significantly outperforms existing tools like cve-search and OpenCVE by achieving high retrieval coverage (0.926) and precision (0.766). Our manually labeled $5 \mathrm { k }$ ground-truth Common Platform Enumeration (CVE) reports for NER and RE model training and testing is released and available on IEEE DataPort [18].
The rest of this paper is organized as follows: Section II reviews vulnerability management systems and NER/RE applications in security. Section III analyzes NVD, CPE, and CVEdetails data inconsistencies. Section IV describes the VulCPE system architecture, NER/RE models, and uCPE formation. Section V addresses distributed deployment and resource management challenges. Section VI evaluates NER/RE performance and VulCPE’s effectiveness in reducing FPs. Section VII presents conclusions and future directions.
# II. BACKGROUND AND RELATED WORK
# A. CPE, SCAP and SWID
The NIST Interagency Report 8085 outlines guidance for using Software Identification (SWID) tags to create standardized CPE names [33]. SWID tags, compliant with ISO/IEC 19770-2, enable accurate software identification across asset management and cybersecurity applications [34].
CPE functions as a dictionary for vulnerable products within the NIST Security Content Automation Protocol (SCAP) 1.2 standard. Each CPE entry includes type, vendor, product, and version information. For example, $\begin{array} { r } { \ " c p e : 2 . 3 . { o : c i s c o \_ x e : 3 . I 3 . 2 a s : : : } \ \cdots \ { * } ^ { , } } \end{array}$ indicates an operating system (o) from vendor “cisco” with product “ios xe” version $^ { \bullet \bullet } 3 . I 3 . 2 a s ^ { \prime \prime }$ . According to NVD [28], vulnerability configurations are classified as: (1) Basic Configuration with a single node holding one or more CPE names; (2) Running On/With Configuration containing multiple nodes with both vulnerable and non-vulnerable CPE names (Fig. 2); and (3) Advanced Configuration with multiple nodes and complex sets of CPE names. In this paper, we refer to both Running On/With and Advanced Configurations as Configuration-Specific CPEs.
U cpe:2.3:o:netgear:wnr3500u_firmware:1.2.2.44_35.0.53na:\*:\*:\*:\*:\*::\* Show Matching CPE(s) \~
Running on/with
cpe:2.3:h:netgear:wnr35oou:-:\*:\*:\*:\*:\*:\*:\* Show Matching CPE(s)
# B. Vulnerability Database Data Quality Analysis
Public databases like the CVE repository are commonly used in both research and commercial products for vulnerability analysis [2]. Yet, numerous recent investigations have highlighted the difficulties encountered with existing vulnerability databases, advocating for the creation of high-quality datasets [7] [21] [4] [11]. For example, Dong et al. [9] found significant inconsistencies in software version vulnerabilities reported between CVE and NVD, with only a fraction of
CVE summaries matching NVD entries accurately. Hong et al. [35] addressed the data inconsistencies and incorrectness in software names and versions, and emphasized the importance of identifying original vulnerable software. Li et al. [21] further carried out a comprehensive systematic mapping study focusing on the architecture and application of vulnerability databases. This investigation identifies dependencies on $N V D$ and CVE databases, while also pointing out a significant shortfall in the existing vulnerability databases for their lack of detailed information and metadata, which poses a challenge to detecting vulnerabilities. Hong et al. [16] introduced a novel approach for database construction aimed at augmenting the scope of security patches. Their method involves correlating data from the NVD database with diverse sources such as repositories (e.g., GitHub), issue trackers (e.g., Bugzilla), and Q&A sites (e.g., Stack Overflow).
These findings emphasize the importance of developing methodologies [3, 15, 19] to enhance data consistency and completeness. Recent advancements in natural language processing [19, 20], machine learning [31] and graph-based [10] methods showed potential in extracting useful information from unstructured vulnerability reports. However, the quality of the trained data remains uncertain, which increases the challenges of applying these models in practical settings.
# C. NER and RE in Security Domains
Security vulnerability reports typically contain critical information such as software names, versions, and steps to reproduce the issue. Chaparro et al. [5] employed three distinct approaches, namely regular expressions, heuristics, and machine learning, to extract key elements from bug reports, including observed behavior, expected behavior, and steps to reproduce. In the context of vulnerability data, Semfuzz [36] utilized regular expressions to extract software version details from CVE entries, while VIEM [9] applied NER and RE techniques to extract software names and versions from vulnerability reports in six databases (e.g., NVD, ExploitDB and SecurityFocus). VERNIER [31], also based on NER, was designed to automatically extract software names from unstructured Chinese and English vulnerability reports and to measure inconsistencies in software names across nine mainstream databases (e.g., CVE, NVD and CNNVD). This method also used a reward-punishment matrix to detect incorrect software names, aiming to improve database accuracy.
Nevertheless, these existing solutions primarily focus on extracting software names and versions independently, without fully addressing the contextual relationships between vendor, product and version. This results in a fragmented understanding of vulnerabilities, which can lead to inaccurate retrieval and misidentification of relevant vulnerabilities in critical systems. Our work addresses this gap by utilizing $C P E$ standards in combination with advanced NER and RE techniques to construct a unified, contextual representation of vendor, product, and version information. This graph-based uCPE structure not only captures the relationships among these entities but also allows for sophisticated traversal and configuration matching, enabling more accurate and contextaware vulnerability retrieval. In addition, we design a dedicated database schema optimized for storing and retrieving vulnerabilities based on the uCPE structure. This schema is tailored to efficiently support queries that involve complex configurations, ensuring that vulnerabilities can be retrieved accurately with minimized false positives and false negatives.
# III. DATA ANALYSIS
This section examines the structure and inconsistencies in $N V D$ and $C P E$ data, highlighting configuration-based CPE patterns and naming inconsistencies in vendors and products.
# A. Preliminary Data Analysis of NVD/CPE Entries
1) The Usage of CPE in NVD CVE Entries: We obtained JSON feeds containing 259,233 vulnerability data from 2002 to 31 Aug 2024 (inclusive) from the official NVD website [27]. We then filtered these NVD entries based on their last modified date and excluded vulnerabilities marked as “Rejected” by the NVD, which leads to 244,819 vulnerabilities. The $C P E \ \mathrm { v } 2 . 3$ Dictionary was manually downloaded from $C P E$ [25] and we parsed in total 1,327,827 CPE strings for further analysis. We processed all NVD entries to extract $C P E$ -formatted strings and their associated configuration attributes. Of these 244,819 reviewed vulnerabilities, 229,023 $( 9 3 . 5 5 \% )$ contained at least one valid $N V D – C P E$ string. Subsequent analyses focused on this subset. We noticed that some $N V D – C P E$ strings are not recorded in the official $C P E$ dictionary. Meanwhile, $8 1 . 4 0 \%$ of the official $C P E$ strings were never referenced in NVD, indicating a significant portion of unused metadata.
2) Running On/With CPE Entries: Our analysis found that $1 4 . 5 6 \%$ of NVD entries specify configuration-specific $C P E s$ , exhibiting four key patterns: OS dependencies (e.g., Product A runs on OS B), Enabled Modules (e.g., Product X is vulnerable when Module $\mathrm { \Delta Y }$ is enabled), Cloud/Virtualization Environments (e.g., vulnerabilities arise when guest virtual machines impact the host system), and Network Configurations (e.g., vulnerabilities caused by specific firewall rules).
Table I summarizes these configuration-specific CPE patterns. We extracted the CPE type (a: applications, o: OS, h: hardware devices) and generated all possible Running On/With relationships using Cartesian technique to capture each directed pair.
TABLE I COUNTS OF DIFFERENT CONFIGURATION COMBINATIONS.
Several patterns emerge from these results. OS-hardware configurations are most common (1,224,357 instances), followed by application-OS dependencies (297,491 cases). Less frequent but notable configurations include OS-application (3,711), hardware-hardware (933), and OS-OS (4,071) combinations, which may indicate layered systems like virtual machines. Another common pattern is “ firmware” appearing in vulnerable $C P E$ product names (see Table II), with $2 1 . 2 0 \%$ of all configurations (343,015 cases), with $9 9 . 9 2 \%$ involving OS-hardware device relationships. The “firmware” keyword appears across all three CPE types, with $9 9 . 6 \%$ classified as OSs, potentially complicating vulnerability assessment. Additionally, $8 0 . 8 8 \%$ of configurations share the same vendor for both vulnerable and configuration CPEs, suggesting vulnerabilities often occur within vendor-controlled ecosystems.
TABLE II EXAMPLES OF CPE NAMES CONTAINING “firmware”.
These findings highlight the critical role of configurationbased CPEs in vulnerability data usability by providing essential context. Delays in updating these configuration details can significantly hinder timely vulnerability management.
# B. Heuristics for Detecting Inconsistencies
In vulnerability databases such as $N V D$ and CVEdetails, inconsistencies in vendor and product names present significant challenges for accurate vulnerability retrieval and analysis. Given the large scale of vendor and product entries in these databases, manual identification of inconsistencies is impractical. We therefore filed a set of heuristics to detect and group potential name discrepancies for further validation. These heuristics address key patterns of variation observed.
Inconsistencies in vendor and product names are quantified as a pairwise divergence metric, where $\mathrm { s i m } ( \mathrm { n a m e _ { 1 } , n a m e _ { 2 } } )$ denotes a similarity function, such as Levenshtein or Cosine similarity, calculated using:
$$
\Delta ( \mathrm { n a m e } _ { 1 } , \mathrm { n a m e } _ { 2 } ) = 1 - \mathrm { s i m } ( \mathrm { n a m e } _ { 1 } , \mathrm { n a m e } _ { 2 } ) .
$$
An inconsistency is detected if the discrepancy is larger than a predefined similarity threshold $\tau$ .
Define $P ( V )$ as the product set of vendor $V$ , with $P _ { \mathrm { n o r m } } ( V ) ~ = ~ \{ \mathrm { n o r m } ( p ) ~ | ~ p ~ \in ~ P ( V ) \}$ . norm is short for normalize. Shared Product Ratio (SPR) is:
$$
\mathrm { S i m } _ { \mathrm { p r o d } } ( V _ { 1 } , V _ { 2 } ) = \frac { | P _ { \mathrm { n o r m } } ( V _ { 1 } ) \cap P _ { \mathrm { n o r m } } ( V _ { 2 } ) | } { | P _ { \mathrm { n o r m } } ( V _ { 1 } ) \cup P _ { \mathrm { n o r m } } ( V _ { 2 } ) | } .
$$
Pairwise heuristics require $\begin{array} { r } { \mathrm { S i m } _ { \mathrm { p r o d } } ( V _ { 1 } , V _ { 2 } ) \geq \theta _ { p } } \end{array}$ (e.g., 0.5).
All the heuristics apply to inconsistency detection in vendor names. Meanwhile, the first heuristic (Format Variations) is also applied to detect inconsistencies in product names. In these cases, the product similarity condition $( \mathrm { S i m } _ { \mathrm { p r o d } } \geq \theta _ { p } ) \mathrm { , }$ ) is replaced with vendor similarity $( \mathrm { S i m } _ { \mathrm { v e n d o r } } )$ , defined as:
$$
\begin{array} { r } { \mathrm { ~ { \stackrel { . } { \operatorname { S i m } } } _ { \mathrm { v e n d o r } } } ( P _ { 1 } , P _ { 2 } ) = \left\{ { 1 \atop 0 } \right. { \mathrm { ~ i f ~ } } { \mathrm { \stackrel { . } { v e n d o r } } } ( P _ { 1 } ) = \mathrm { v e n d o r } ( P _ { 2 } ) . } \end{array}
$$
For example, product names like Windows $I O$ and windows$I O$ from the same vendor (Microsoft) would be flagged as inconsistent under the Format Variations rule.
(1) Format Variations detects character-level differences in capitalization, punctuation, or special characters.
$$
\begin{array} { r } { \dot { \Delta } _ { \mathrm { f o r m a t } } ( V _ { 1 } , V _ { 2 } ) = \Bigg \{ 1 \quad \mathrm { i f } ^ { \cdot } \mathrm { n o r m } ( V _ { 1 } ) = \mathrm { n o r m } ( V _ { 2 } ) . } \\ { \dot { 0 } \quad \mathrm { o t h e r w i s e } . } \end{array}
$$
Inconsistency: $\Delta _ { \mathrm { f o r m a t } } = 1 \wedge \mathrm { S i m } _ { \mathrm { p r o d } } \geq \theta _ { p }$ . E.g., “Microsoft Corp” and “microsoft-corp”.
(2) Spelling Errors detect inconsistencies due to potential spelling or typographical errors in vendor names using edit distances. This is only applied to vendor names that share the same first letter, based on the linguistic observation that typographical errors rarely affect the initial character of a word. Let $d _ { L } ( s _ { 1 } , s _ { 2 } )$ be the Levenshtein distance. For vendors with $| \mathrm { n o r m } ( V _ { 1 } ) | \ge m$ and $| \mathrm { n o r m } ( V _ { 2 } ) | \ge m$ , where $m$ is a minimum length threshold (e.g., $m = 5$ ), define:
$$
\bar { \mathrm { S i m } _ { \mathrm { e d i t } } ( V _ { 1 } , V _ { 2 } ) } = 1 - \frac { \bar { d _ { L } } ( \mathrm { n o r m } ( V _ { 1 } ) , \mathrm { n o r m } ( V _ { 2 } ) ) } { \mathrm { m a x } ( | \mathrm { n o r m } ( V _ { 1 } ) | , | \mathrm { n o r m } ( V _ { 2 } ) | ) }
$$
$$
\Delta _ { \mathrm { s p e l l i n g } } ( V _ { 1 } , V _ { 2 } ) = \left\{ \begin{array} { l l } { { 1 } } & { { \mathrm { i f ~ } \mathrm { S i m } _ { \mathrm { e d i t } } ( V _ { 1 } , V _ { 2 } ) \geq \tau . } } \\ { { 0 } } & { { \mathrm { o t h e r w i s e } . } } \end{array} \right.
$$
Inconsistency: $\Delta _ { \mathrm { s p e l l i n g } } = 1 \wedge \mathrm { S i m } _ { \mathrm { p r o d } } \geq \theta _ { p }$ , with $\tau = 0 . 8$ . E.g., “Microsoft” and “Microsfot” have $\mathrm { S i m _ { e d i t } }$ as 0.89.
(3) Substring Matches detect prefixes, suffixes, or substrings embedded within longer names, defined as:
$$
\Delta _ { \mathrm { s t r i n g } } ( V _ { 1 } , V _ { 2 } ) = \left\{ \begin{array} { l } { { \ 1 } ^ { \smile } \mathrm { ~ i f ~ n o r m } ( V _ { 1 } ) \subset \mathrm { n o r m } ( V _ { 2 } ) \lor \mathrm { n o r m } ( V _ { 2 } ) \subset \mathrm { n o r m } ( V _ { 1 } ) . } \\ { { 0 \quad \mathrm { o t h e r w i s e } . } } \end{array} \right.
$$
Inconsistency: $\Delta _ { \mathrm { s t r i n g } } = 1 \wedge \mathrm { S i m } _ { \mathrm { p r o d } } \geq \theta _ { p }$ . E.g., “Apache” vs. “Apache Software Foundation”. (4) Product Name as Vendor Name flags instances where products are referenced instead of vendors, defined as: $\begin{array} { r } { \Delta _ { \mathrm { p r o d } } ( V ) = \left\{ 1 \begin{array} { l l } { \mathrm { i f ~ } \exists V ^ { \prime } \neq V : \mathrm { n o r m } ( V ) = \mathrm { n o r m } ( P ) \land P \in P _ { \mathrm { n o r m } } ( V ^ { \prime } ) . } \\ { 0 } & { \mathrm { o t h e r w i s e . } } \end{array} \right. } \end{array}$
E.g., “Windows” instead of “Microsoft”.
(5) Shared Product Names identify cases where multiple vendors are linked to the same product, defined as:
$$
\Delta _ { \mathrm { s h a r e d } } ( V _ { 1 } , V _ { 2 } ) = \left\{ \begin{array} { l l } { { 1 } } & { { \mathrm { i f } \mathrm { \dot { ~ } S i m _ { \mathrm { p r o d } } ( V _ { 1 } , V _ { 2 } ) \geq \theta _ { \mathrm { h i g h } } . } } } \\ { { 0 } } & { { \mathrm { o t h e r w i s e } . } } \end{array} \right.
$$
E.g., “Sun Microsystems” and “Oracle” have postacquisition overlap $\theta$ as 0.8. For Shared Product Names, the SPR is defined as such:
$$
\begin{array} { r l } & { \mathrm { ~ S i m } _ { \mathrm { p r o d } } ( V _ { 1 } , V _ { 2 } ) = \frac { | P _ { \mathrm { n o r m } } ( V _ { 1 } ) \cap P _ { \mathrm { n o r m } } ( V _ { 2 } ) | } { m i n ( P _ { \mathrm { n o r m } } ( V _ { 1 } ) , P _ { \mathrm { n o r m } } ( V _ { 2 } ) ) } . } \end{array}
$$
This is to account for cases where a smaller company has been acquired by a larger company, where the smaller company has much fewer products.
These heuristics serve as a foundational approach to detecting potential naming discrepancies that are then manually verified. For example, names such as “heimdal” , “heimdalsecurity” and “heimdal project” can be grouped and reviewed to determine whether they represent the same entity. If confirmed, they are treated as naming inconsistencies and standardized. An additional layer of validation is integrated by analyzing shared product associations and cross-referencing external sources. Manual verification remains essential to distinguish true inconsistencies from cases where minor differences indicate distinct entities, such as separate firmware versions.
TABLE IIICOMMON INCONSISTENCY PATTERNS IN VENDOR NAMING
1 “Possible” groupings are generated by heuristics and validated as “Confirmed” inconsistencies through manual verification. 2 Numbers outside parentheses represent unique vendor groups, while those inside denote associated names. 3 Shared Product Ratio filters vendors sharing products, reducing false positives and refining heuristic groupings for manual verification.
# C. Inconsistency Analysis
1) Inconsistencies in Vendor Data: Our analysis extended the initial dataset of 229,023 CVEs and associated 32,773 $C P E s$ by incorporating 35,458 vendor-product-version pairs extracted from CVEdetails, a publicly available catalog of vendor and product information. We identified only 153 exact matched vendor names in CPE and CVEdetails when no normalization applies. This leads to a large set (67,925) of vendor names to be processed and standardized.
Our enhanced pipeline significantly extends the work of [3], which identified 1,835 inconsistent vendor names across 871 groups. In contrast, our method uncovered 65,482 inconsistent name instances grouped into 32,420 vendor clusters, as summarized in Table III. Format variations were the most common inconsistency, affecting 29,664 unique vendor groups and 59,424 name instances. These were primarily resolved through case folding, special character normalization, and token reordering. Such variations often arise from differing formatting conventions between $C P E$ and CVEdetails, particularly in the use of capitalization, which can impair retrieval accuracy in case-sensitive systems. Excluding case-related issues, 8838 groups (17606 instances) still exhibited format variations due to other formatting differences. Other inconsistency patterns, including spelling errors, acronyms, sub-string matches, and instances where product names are mistakenly labeled as vendors, are analyzed separately in the subset that excludes format variations.
We observed FP pairing from acronym and substring matches, which were flagged during manual validation. To mitigate such errors, we integrated a Shared Product Ratio (SPR) threshold (Equation 2) as a validation heuristic. Vendor name pairs with an $\mathrm { S P R } ~ \geq ~ 0 . 5$ were flagged as potential matches, and those with an $\mathrm { S P R } \ge 0 . 8$ exhibited strong semantic coherence, often reflecting genuine aliasing. This filtering mechanism significantly improved precision by reducing the manual validation workload while maintaining high recall. The resulting Shared Product Names category included 1,594 confirmed vendor groups (3,728 name instances).
An important meta-level insight is that while formatbased inconsistencies dominate quantitatively, the qualitative complexity and verification cost of semantic inconsistencies (spelling, acronyms, substrings) are substantially higher. These patterns are more likely to propagate errors in downstream tasks such as vulnerability resolution, threat attribution, or software inventory reconciliation.
2) Inconsistencies in Product Data: In the analysis of product naming inconsistencies, the first step involved addressing the vendor name discrepancies identified in the previous phase. To achieve this, we remapped vendor names to their most consistent forms, prioritizing the name associated with the highest number of CVEs. This approach was grounded in the assumption that the vendor name linked to the greatest number of CVEs is the most widely accepted representation.
Product naming inconsistency analysis focused on the format variation heuristic. This heuristic effectively addressed inconsistencies arising from minor character formatting differences, such as underscores versus hyphens, while minimizing the need for manual validation. By prioritizing format variations, our analysis reduced FPs caused by similar product names across unrelated vendors. Among 225,192 unique products, the format variation heuristic identified 138,722 instances consolidated into 68,746 product groups, and hence 700,26 discrepancies primarily due to minor formatting issues. These findings emphasize the importance of standardized naming conventions to ensure consistency. Without such conventions, errors in vendor names propagate to product names, compounding inconsistencies and undermining data integrity.
3) Impact of Data Inconsistency on Vulnerability Retrieval: Approximately $4 8 . 6 7 \%$ (33,062) of the 67,925 vendor names exhibit inconsistencies, with 65,482 entries consolidated into 32,420 standardized names. For vendor names from the $C P E$ dataset and CVEdetails, they each contains 16,444 $( 5 0 . 1 8 \% )$ and 16,697 $( 4 7 . 0 7 \% )$ inconsistencies. Moreover, even just within the consistent vendors, 70,026 product names $( 3 1 . 0 9 \%$ of 225,192) are affected by formatting variations.
Naming inconsistencies significantly hinder vulnerability retrieval by disrupting mappings between vulnerabilities and affected systems. Misaligned entries lead to incomplete assessments, where vulnerabilities are either overlooked or incorrectly associated. Such discrepancies delay patch identification and deployment, increasing the exposure window and the risk of exploitation. Moreover, the cumulative effect of these inconsistencies across large datasets can compound the risks, leading to widespread security gaps that are harder to detect and manage, as also discussed in works [3, 17, 31].
The analysis highlights that resolving inconsistencies requires scalable approaches to standardize naming conventions and enforce consistency across datasets. Automated normalization techniques, cross-database validation, and metadata enrichment can improve data integrity, enabling more effective vulnerability identification, prioritization, and mitigation.
# IV. METHODOLOGY OF VULCPE
This section provides an overview of VulCPE, detailing its architecture and key components designed for configurationaware vulnerability retrieval and management.
# A. Overview of VulCPE
The VulCPE architecture, illustrated in Fig. 1, processes vulnerability data to extract, standardize, and map system configurations for precise vulnerability retrieval.
The workflow begins with the Data Pre-Processor, which normalizes raw inputs from sources like NVD and CVEdetails, to ensure standardized data for downstream modules.
The Named Entity Recognition (NER) Module extracts cybersecurity-specific entities, including product names, versions, and types, from unstructured text. By leveraging domain-specific rules and configurations, the module ensures extracted entities reflect real-world system configurations.
The Relation Extraction (RE) Module maps relationships between recognized NER entities, such as product-version pairs, to enable precise configuration modeling.
Subsequently, the Post Processing Module comprises two key steps. First, the Vendor & Product Separator resolves vendor-product mappings using predefined heuristic rules and string similarity metrics, ensuring consistency with our canonical dictionaries. Next, with the processed vendor and product, the Version Converter translates complex version descriptors (e.g., “up to”, “before”) into normalized ranges based on datasets such as NVD. This step ensures consistency of vulnerable product versions across vulnerability sources.
uCPE Generator consolidates extracted product, version, and type data into hierarchical configurations, enabling interoperability and precise vulnerability-configuration mapping.
The Vulnerability Database Constructor structures processed data into a graph-based database $G = ( N , E )$ , where nodes $( N )$ represent entities (e.g., uCPE configurations) and edges $( E )$ capture relationships (e.g., $e _ { \mathrm { A N D } }$ , $e _ { \mathrm { O R } } \mathrm { , }$ ) among components. This database facilitates efficient querying and supports configuration-aware vulnerability assessments.
False Positive Filter employs graph-based matching to refine vulnerability-configuration mappings. The system configuration graph $( G _ { \mathrm { s y s } } )$ and vulnerability graph $( G _ { \mathrm { v u l } } )$ are traversed to evaluate matches based on logical dependencies.
# B. Named Entity Recognition
NER module extracts structured entities, namely vendor $( v _ { i } )$ , product $( p _ { i } )$ , version $( v e r _ { i } )$ , and type $t _ { i }$ from unstructured vulnerability reports. Let $T$ represent the text of a report. The extraction process is formally defined as:
$$
\mathrm { N E R } ( T ) = \{ ( v _ { i } , p _ { i } , v e r _ { i } , t _ { i } ) \mid v _ { i } , p _ { i } , v e r _ { i } , t _ { i } \in T \} .
$$
Our NER model is built on RoBERTa [22], chosen for its ability to capture complex contextual relationships. Input text is tokenized into both word-level and sub-word-level units, ensuring compatibility with out-of-vocabulary terms and multi-token entities. Each token is embedded into a dense vector representation, incorporating positional and sub-wordlevel embeddings. This approach effectively handles complex version formats with alphanumeric characters and punctuation (e.g., “v1.0.2-alpha”) and multi-token product names (e.g., “Google Chrome before $8 . 0 . 5 5 2 . 2 3 7 " )$ .
After initial embedding, tokens are further processed through self-attention layers, enabling the model to assign labels to tokens. The primary label set includes Product Name (PN), Modifier (MOD), Version $( V )$ , and Others $( O )$ . For instance, the previous is assigned “Google” (B-PN), “Chrome” (I-PN), “before” (B-MOD) and $\cdot 8 . 0 . 5 5 2 . 2 3 7 ^ { , }$ (V).
The model further integrates a domain-specific gazetteer derived from CVEdetails [8], containing vendor names, product names, and version ranges. This gazetteer is incorporated into a post-processing step to validate and adjust predictions using heuristic rules. For example, if the model labels “Internet” and “Explorer” as separate entities, the gazetteer merges them into “Internet Explorer” under a single PN label. This hybrid approach combines RoBERTa’s probabilistic predictions with deterministic rule-based corrections.
The NER module also captures product types (e.g., application, hardware, or OS). Using the same tokenizer and embeddings, extracted labels are concatenated with product type annotations. For instance, the earlier example is updated as “Google” (B-PN-APP), “Chrome” (I-PN-APP), “before” (B-MOD) and $^ { * * } 8 . 0 . 5 5 2 . 2 3 7 ^ { * } ($ (V). This categorization ensures differentiations of product roles in system configurations.
# C. Relation Extraction
The RE module identifies relationships between entities extracted by the NER module. With $R$ represents the set of valid relationships, the relationship extraction process is formally defined as:
$$
\mathrm { R E } ( v _ { i } , p _ { i } , v e r _ { i } , t _ { i } ) = \mathrm { T r u e } \iff ( v _ { i } , p _ { i } , v e r _ { i } , t _ { i } ) \in R .
$$
The RE model operates in two steps. It first groups modifiers and versions (e.g., “before $8 . 0 . 5 5 2 . 2 3 7 " )$ together as $( M O D \_ V )$ . Then entities identified by the NER model are grouped into product-modifier-version $( P N - M O D \_ V )$ pairs. For each product $( P N )$ , all associated modifiers and versions $( M O D \_ V )$ within the same sentence are paired. For example, the vulnerability report results in the following four candidate pairs: “Google Chrome” with “before 8.0.552.237”; “Google Chrome” with “before $8 . 0 . 5 5 2 . 3 4 4 ^ { \cdots }$ ; “Google Chrome OS” with “before $8 . 0 . 5 5 2 . 2 3 7 ^ { \circ }$ ; and “Google Chrome OS” with “before $8 . 0 . 5 5 2 . 3 4 4 ^ { \cdots }$ . Each candidate pair is indexed based on its entity labels and converted into tokenized numerical representations, including token IDs, attention masks, and segment IDs. During inference, the RE model predicts the presence of a valid relationship $( P N - M O D \_ V )$ using logits generated from RoBERTa’s classification head, with “Y” indicating a valid relationship and “N” indicating its absence. If a valid relationship is detected, the model returns the corresponding $( P N - M O D _ { - } V )$ pairs.
# D. Canonical Dictionary Creation
To standardize vendor, product, version and type data across heterogeneous sources, we construct a canonical dictionary of vendor-product-version-type pairs using CPE metadata utilized in NVD and a crawled CVEdetails dataset.
To resolve inconsistencies, a standardization function $S ( n ^ { * } , \mathcal { D } )$ maps inconsistent names $( n )$ to a canonical form $( n ^ { \prime } ) . D$ is a dictionary of standardized names, using:
$$
S ( n ^ { * } ) = \arg \operatorname* { m a x } _ { { \bf n } ^ { \prime } \in { \cal D } } \mathrm { d } ( { \bf n } , { \bf n } ^ { \prime } ) .
$$
The similarity between an extracted name (e.g., vendor $\boldsymbol { v } _ { i }$ or product $p _ { i }$ ) and a canonical name $\boldsymbol { n } ^ { \prime } \in \mathcal { D }$ is computed using similarity calculation, Levenshtein distance is used as an example:
$$
\sin ( n , n ^ { \prime } ) = 1 - { \frac { \operatorname { L e v } ( n , n ^ { \prime } ) } { \operatorname* { m a x } ( | n | , | n ^ { \prime } | ) } } .
$$
The canonical name is selected if the similarity exceeds a predefined threshold $\tau$ :
$$
n ^ { * } = \arg \operatorname* { m a x } _ { n ^ { \prime } \in \mathcal { D } } \sin ( n , n ^ { \prime } ) \quad \mathrm { i f ~ } \sin ( n , n ^ { \prime } ) \geq \tau .
$$
NVD CPE strings are parsed into vendor, product, version, and type. Crawled CVEdetails data is flattened into a similar data frame, extracting vendor, product, and version lists. Next, we normalize both vendor and product names through a standardization process, which lowercases text, removes special characters, and standardizes whitespace. Inconsistency detection leverages heuristics from Section III-B: Format Variations, Spelling Variations, Acronyms, Substring Matches, Product Name as Vendor Name, and Shared Product Names. These heuristics are applied to both NVD and CVEdetails data, with consistent entries (via inner joins) forming the canonical dictionary and inconsistent ones (via left-anti joins) mapped to canonical names for traceability. Versions are grouped by normalized vendor-product pairs, combining unique versions from both sources.
In doing so, we obtain a canonical dictionary $\mathcal { D }$ and separate mapping tables linking inconsistent names to canonical ones, supporting precise vulnerability retrieval.
# E. Post Processing
The post-processing module processes two input sets or one of them: a set of extracted RE entries $\mathcal { R } = \left\{ \mathrm { R E } _ { \mathrm { e n t r y } _ { i } } \ \right|$ $i \in I \}$ , where each $\mathbf { R E _ { e n t r y } } _ { i } \ = \ ( v _ { i } , p _ { i } , \mathrm { v e r } _ { i } , t _ { i } )$ ; and a set of CPE match entries $\mathcal { C } ~ = ~ \{ \mathrm { C P E } _ { \mathrm { e n t r y } _ { j } } ~ | ~ j ~ \in ~ J \}$ , where each $\mathbf { C P E _ { \mathrm { e n t r y } _ { j } } } ~ = ~ ( v _ { j } , p _ { j } , \mathrm { v e r } _ { j } , t _ { j } )$ . We employ $S ( n ^ { * } , \mathcal { D } )$ to standardize each $\mathrm { R E _ { e n t r y } } _ { i }$ and $\mathrm { C P E _ { \mathrm { e n t r y } _ { j } } }$ to their canonical forms, as defined in Eq. (13). For vendor standardization, $\boldsymbol { v } _ { i }$ is compared against $V _ { \mathrm { c a n o n i c a l } } \subset \mathcal { D }$ , our canonical dataset of vendor names. After identifying $v ^ { * }$ , the residual string is matched against products associated with $v ^ { * }$ in $D$ . Product names are similarly standardized.
version description (from $\mathrm { v e r } _ { i }$ or $\operatorname { v e r } _ { j }$ ) and $V _ { \mathrm { r e l e a s e s } }$ the set of available versions for a standardized vendor-product pair $( v ^ { * } , p ^ { * } )$ . The version converter maps $v _ { \mathrm { d e s c } }$ to a discrete list:
$$
\mathrm { L i s t } ( v _ { \mathrm { d e s c } } ) = \{ v _ { k } \in V _ { \mathrm { r e l e a s e s } } \mid \mathrm { c o n d } ( v _ { k } ) \} ,
$$
Unlike [9], which assumes sequential versions, our approach supports non-sequential vendor releases. For example, “Google Chrome before $8 . 0 . 5 5 2 . 3 4 4 ^ { \cdots }$ is converted to a list of actual releases: [0.1.38.1, 0.1.38.2, ..., 8.0.552.235].
The hybrid post-process combines entries from $\mathcal { R }$ and $\mathcal { C }$ to produce a set of normalized uCPE entries using canonical dictionary $\mathcal { D }$ . If both $\mathcal { R }$ and $\mathcal { C }$ are empty, the process is skipped. When both $\mathcal { R }$ and $\mathcal { C }$ are non-empty, entries are aligned by computing similarity between standardized vendorproduct pairs. If the similarity exceeds $\tau$ and versions align, the CPE entry is prioritized. Unaligned entries are processed independently. Results are cached to avoid redundant computations.
Version standardization converts textual version descriptions into mathematical constraints or discrete lists. Descriptions such as “version 1.4 and earlier” becomes $\Im \leq \ l . 4 ^ { \mathfrak { s } }$ , while “not affected before version ${ 5 . 0 ^ { 9 } }$ becomes $^ { 6 6 } > 5 . 0 ^ { 3 }$ . CPEspecific constraints, such as “versionStartIncluding” $( \geq )$ or “versionEndExcluding” $( < )$ , are also parsed. Let $v _ { \mathrm { d e s c } }$ be a
# F. Formation of uCPE
The uCPE schema addresses the challenges of complex relationships, such as “Running On/With” dependencies and nested configurations. A uCPE entry $\mathrm { \Gamma ( u C P E _ { \mathrm { { e n t r y } } } ) }$ represents the foundational unit of vulnerability configuration, consisting a unique identifier, vendor name, product name, version, and product type (e.g., Application, OS, Hardware).
Configurations are modeled as subgraphs $G _ { \mathrm { c o n f g } }$ , where $N _ { \mathrm { u C P E } }$ represents nodes corresponding to individual components, and $E _ { \mathrm { c o n f i g } }$ defines the logical dependencies between components, using:
$$
G _ { \mathrm { c o n f i g } } = ( N _ { \mathrm { u C P E } } , E _ { \mathrm { c o n f i g } } ) .
$$
Each edge in $E _ { \mathrm { c o n f g } }$ represents either:
AND relationships where components must coexist:
$$
( \mathsf { u C P E } _ { \mathrm { e n t r y } _ { i } } \wedge \mathsf { u C P E } _ { \mathrm { e n t r y } _ { j } } ) \to e _ { \mathrm { A N D } } .
$$
• $O R$ relationships where at least one component suffices:
$$
( \mathsf { u C P E } _ { \mathrm { e n t r y } _ { k } } \lor \mathsf { u C P E } _ { \mathrm { e n t r y } _ { l } } ) e _ { \mathrm { O R } } .
$$
Systems and vulnerabilities are modeled as graphs to represent their configurations and relationships. $N _ { \mathrm { s y s } }$ and $N _ { \mathrm { v u l } }$ are nodes representing $\mathrm { \ u C P E _ { e n t r y } }$ and their associated configurations. $E _ { \mathrm { s y s } }$ and $E _ { \mathrm { v u l } }$ are edges capturing logical relationships between uCPE entries or configurations, defined as:
$$
G _ { \mathrm { s y s } } = ( N _ { \mathrm { s y s } } , E _ { \mathrm { s y s } } ) , \quad G _ { \mathrm { v u l } } = ( N _ { v u l } , E _ { v u l } ) .
$$
Nodes in $N _ { s y s }$ and $N _ { v u l }$ represent either individual $\mathrm { \ u C P E _ { e n t r y } }$ elements or logical combinations. For example:
$$
N _ { \mathrm { s y s } } = \{ \mathrm { u C P E } _ { \mathrm { e n t r y } _ { i } } , ( \mathrm { u C P E } _ { \mathrm { e n t r y } _ { j } } \vee \mathrm { u C P E } _ { \mathrm { e n t r y } _ { k } } ) , \ldots \} .
$$
For hierarchical relationships, the vulnerability graph $G _ { \mathrm { v u l } }$ for each $C V E$ aggregates all uCPE configurations:
$$
G _ { \mathrm { v u l } } = \bigcup _ { i = 1 } ^ { n } G _ { \mathrm { c o n f i g } } ( { \mathrm { u C P E } } _ { \mathrm { e n t r y } _ { i } } ) .
$$
# G. Database Construction and Retrieval
The database organizes our extracted information into three collections: uCPE, Configurations, and Vulnerabilities.
The uCPE Collection stores standardized vendor-productversion entries for interoperable vulnerability mapping, leveraging the canonical dictionary.
The Configurations Collection represents sub-graphs $( G _ { \mathrm { c o n f i g } }$ , Eq. (17)), with each entry containing a unique identifier (config id), logical relationship type $( e _ { \mathrm { A N D } } , e _ { \mathrm { O R } } )$ , and references to uCPE nodes, modeling hierarchical dependencies in the vulnerability graph $G _ { \mathrm { v u l } }$ (Eq. (22)).
The Vulnerabilities Collection links vulnerabilities to configurations via config id, including descriptions, CVSS scores, and exploitability metadata.
Two primary query types are implemented: one retrieves vulnerabilities based on CVE identifiers, while the other fetches vulnerabilities by matching specific product and version details. These queries leverage the hierarchical structure of $G _ { s y s }$ and $G _ { v u l }$ . This structure enhances VulCPE’s precision and supports third-party scanners.
# H. Graph-Based False Positive Filtering
Our graph-based FP filtering technique leverages domainspecific cybersecurity knowledge to model relationships between vulnerabilities and assets. This approach incorporates configuration dependencies, logical relationships, and hierarchical asset structures, critical for precise vulnerability applicability assessments.
The applicability of a vulnerability node $n _ { v } \in N _ { v u l }$ to a system node $n _ { s } ~ \in ~ N _ { s y s }$ is determined by evaluating their hierarchical configurations.
For simple configurations without logical operators, the matching function evaluates whether the configuration graph of $n _ { v }$ is a subgraph of that of $n _ { s }$ :
$$
\mathrm { M a t c h } ( n _ { v } , n _ { s } ) = \left\{ \begin{array} { l l } { 1 , } & { \mathrm { i f ~ } G _ { \mathrm { c o n f i g } } ( n _ { v } ) \subseteq G _ { \mathrm { c o n f i g } } ( n _ { s } ) , } \\ { 0 , } & { \mathrm { o t h e r w i s e . } } \end{array} \right.
$$
For configurations involving logical operators, the matching function evaluates dependencies within $E _ { c o n f i g }$ . Specifically:
$$
\begin{array} { r } { \mathrm { M a t c h } ( n _ { v } , n _ { s } ) = \left\{ \begin{array} { l l } { 1 , } & { \mathrm { i f ~ } \forall e _ { \mathrm { A N D } } \in E _ { \mathrm { c o n f i g } } ( n _ { v } ) , \mathrm { ~ M a t c h } ( e _ { \mathrm { A N D } } , n _ { s } \in \overline { { \mathrm { a c h } } } ^ { \mathrm { I } } \mathbb { I } _ { \mathrm { ~ I } } ^ { \mathrm { ~ I } } ) } \\ { 1 , } & { \mathrm { i f ~ } \exists e _ { \mathrm { O R } } \in E _ { \mathrm { c o n f g } } ( n _ { v } ) , \mathrm { ~ M a t c h } ( e _ { \mathrm { O R } } , n _ { s } ) = \mathrm { s t i n g } } \\ { 0 , } & { \mathrm { o t h e r w i s e } . } \end{array} \right. } \end{array}
$$
This matching process ensures that vulnerabilities are only applied when all AND conditions or any OR condition in the vulnerable configuration are matched by system configuration.
Further, the filtering process utilizes graph traversal to refine vulnerability applicability. Vulnerabilities $( v )$ and SUI are represented as vertices in $G _ { \mathrm { v u l } }$ and $G _ { \mathrm { s y s } }$ , enriched with logical dependencies. Algorithm 1 outlines the FP filtering procedure. If a match is found, the vulnerability is added to the set of applicable vulnerabilities $\cdot { V _ { \mathrm { a p p l i c a b l e } } } )$ , as giving by:
$$
V _ { \mathrm { a p p l i c a b l e } } = V _ { \mathrm { v u l } } - \{ v \in V _ { \mathrm { v u l } } \ | \ \mathrm { M a t c h } ( v , n _ { s } ) = 1 \} .
$$
Input: System graph $G _ { \mathrm { s y s } } = ( N _ { \mathrm { s y s } } , E _ { \mathrm { s y s } } )$ , Vulnerability
graph $G _ { \mathrm { v u l } } = ( N _ { \mathrm { v u l } } , E _ { \mathrm { v u l } } )$
Output: Set of applicable vulnerabilities Vapplicable
1 Algorithm Graph-Based False Positive Filtering:
2 Initialize $V _ { \mathrm { a p p l i c a b l e } } \emptyset$
3 foreach $n _ { v } \in N _ { \nu u l }$ do
4 foreach $n _ { s } \in N _ { s y s }$ do
5 $G _ { \mathrm { c o n f i g } } \mathrm { ( v u l ) } \gets \mathtt { T r a v e r s e } ( n _ { v } , E _ { \nu u l } )$
6 $G _ { \mathrm { c o n f i g } } \mathrm { ( s y s ) \gets { T r a v e r s e } } ( n _ { s } , E _ { s y s } )$
7 if Applicability(Gconfig vul,
Gconfig sys) then
8 Vapplicable ← Vapplicable ∪ {nv}
9 return Vapplicable
10 Function Applicability(Gconfig vul,
Gconfig sys):
11 if $A N D \in E _ { c o n f i g } ( \nu u l )$ then
12 foreach $e _ { A N D } \in E _ { c o n f i g } ( \nu u l )$ do
13 if Match $( e _ { A N D } , G _ { c o n f i g } ( s y s ) ) \ = \ O$ then
14 return False
15 return True
16 if $O R \in E _ { c o n f i g } ( \nu u l )$ then
17 foreach $e _ { O R } \in E _ { c o n f i g } ( \nu u l )$ do
18 if Match $( e _ { O R } , G _ { c o n f i g } ( s y s ) ) \ = \ I$ then
19 return True
20 return False
21 Function Match(element, $G _ { c o n f i g } ( s y s ) _ { \mathrm { { \ell } } } ^ { \prime }$ ):
22 return 1 if $e l e m e n t \in G _ { \mathrm { c o n f i g } } ( \mathrm { s y s } )$ ; otherwise, 0.
# V. IMPLEMENTATION
We leverage several optimization strategies to enable VulCPE to handle large-scale vulnerability data while maintaining accuracy and minimizing computational overhead.
# A. Parallelization
Parallelization is implemented across multiple VulCPE modules to reduce processing time by distributing workloads. In the data pre-processing stage, text normalization and tokenization of vulnerability reports are executed concurrently using multi-threading, allowing independent processing of ,report. Similarly, post-processing operations, including similarity computations for standardizing vendor and ct names, are parallelized across CPU cores, while database lookups for version conversions are batched to minimize I/O overhead. In the FP-filtering stage, graph-based subgraph isomorphism checks are distributed across multiple configurations.
# B. FP Handling
Our method is built upon [32] with three key improvements: Firstly, we utilize $u C P E \ – I D$ than simply relying on the extracted textual information. Secondly, we use NetworkX to support graph implementation using Python to allow easier integration with the whole vulnerability pipeline. Thirdly, we enhance the efficiency of FP filtering by storing the graph locally after its initial creation, and subsequently appending nodes upon the identification of new CVEs or Assets within the system. This approach significantly optimizes performance in terms of execution time. Empirical evidence from our experiments later in Section VI illustrates this improvement: the initial processing of 232 Assets requires approximately 25 minutes and 33 seconds. However, subsequent iterations demonstrate a marked reduction in execution time, involving only the verification of new assets rather than the comprehensive regeneration of the graph. Specifically, the addition of nodes for new assets incurs around 6.6 seconds per node, showcasing the efficiency of our optimized model in dynamically updating with minimal computational overhead.
# C. Incremental Updates
The graph-based vulnerability database is designed to support incremental updates, ensuring that new data can be integrated without requiring a full reconstruction. When new vulnerabilities or configurations are introduced, only the affected graph nodes and edges are updated, avoiding the computational expense of rebuilding the entire structure. This approach is also applied in the FP-filtering process, where the graph is modified incrementally upon the addition of new assets or vulnerabilities. Instead of reprocessing the entire dataset, filtering operations are restricted to newly introduced or updated nodes.
# VI. EXPERIMENTAL EVALUATION
This section presents a comprehensive experimental evaluation, with details on dataset, baseline models, evaluation metrics, and key implementation specifics. We focus on:
RQ1: How effective is VulCPE in entity extraction and relation extraction compared to state-of-the-art approaches? • RQ2: Can VulCPE be effectively applied to vulnerability retrieval in real-world settings?
# A. Experiment I: NER/RE Evaluation
1) Dataset: Previous NER datasets for vulnerability contexts [9] utilize simplistic annotation schemes (SN, SV, O) that inadequately capture nuanced entity boundaries and multitoken entities common in vulnerability data. Our review identified significant labeling gaps, necessitating a more comprehensive dataset for structured vulnerability descriptions.
We implemented a customized BIO format to label vulnerability reports, generating a ground-truth dataset for NER model training and validation. To enhance model performance, we expanded the NER label schema to include three product categories, replacing all B-PN/I-PN labels with categorized labels to improve uCPE matching and vulnerability retrieval.
From our dataset (Section III), we sampled 5,000 vulnerability descriptions (3,000 pre-2019 and 2,000 post-2019) for balanced temporal representation. Each description was tokenized and initially labeled using GPT-4o, though we observed relatively low accuracy, particularly for modifier (MOD) and version labeling. Consequently, two security researchers conducted manual reviews to ensure labeling accuracy.
To incorporate RE, we developed rules capturing relationships between product entities and their associated versions with modifiers. This approach identifies product-to-version relationships where modifiers define version applicability conditions (e.g., “before” a certain version or “fixed in” a particular release). We generated candidate pairs by linking product entities with version-modifier entities within the same context, assigning position indices for pairing. Additional contextual validation determined logical associations between the product and the version-modifier combination, with pairs labeled as valid (Y) or invalid (N).
2) Evaluation Metrics: Four main metrics are utilized to validate NER and RE models: (1) Accuracy is the fraction of correct predictions out of all predictions, offering a measure of overall correctness; (2) Precision is the ratio of correctly extracted entities and relations to the total identified, which minimizes false positives; (3) Recall is the proportion of correctly extracted entities and relations out of all relevant ones, which ensures true positives are included; (4) F1 Score is a harmonic mean of precision and recall, providing a balanced evaluation of accuracy and error rates.
3) Implementation Details: Our NER model employs the RoBERTa architecture via Hugging Face transformers, with labeled BIO format text split into training and testing sets (80-20) using a fixed random seed for reproducibility.
For RE, we utilize RoBERTaForSequenceClassification to identify entity relationships. Input sentences are preprocessed by tagging entities with custom tokens, then tokenized into IDs, masks, and segments to generate logits. Valid productversion pairs are extracted based on predictions.
4) NER and RE Performance: We evaluated our NER model, built on RoBERTa, against state-of-the-art baselines, including VERNIER [31] and VIEM [9]. VIEM results correspond to its best-performing configuration, incorporating transfer learning and gazetteer features, while VERNIER’s performance is reported for English-language vulnerability reports. We also included TinyLlama [37] which is a recent lightweight LLM that achieves competitive performance on token-level tasks. The RE evaluation of baseline models compares the set of predicted product-version relationships against the set of ground-truth relationships per sentence, using a greedy best-match approach with relaxed product aliasing and version matching. Table IV shows that our RoBERTa model with gazetteer achieved an accuracy of $9 8 . 5 6 \%$ , precision of $9 5 . 7 7 \%$ , recall of $9 7 . 5 4 \%$ , and an F1 score of $9 6 . 5 3 \%$ , demonstrating comparable performance to both baselines and outperforming simpler configurations such as RoBERTa without a gazetteer.
TABLE IV PERFORMANCE COMPARISON OF NER MODELS
For NER categorization across three categories (APP, OS, $H W )$ , we calculated both macro and weighted averages. As presented in Table V, the model achieved high recall for Applications $( 9 7 . 4 2 \% )$ and OSs $( 9 3 . 6 8 \% )$ , while the performance for Hardware $( 7 9 . 0 1 \% )$ was lower due to the relatively smaller dataset and higher complexity in distinguishing hardware-related entities. The weighted average across categories reached $9 9 . 3 8 \%$ accuracy, demonstrating strong overall performance. The model achieved $9 9 . 3 8 \%$ weighted average accuracy, demonstrating robust overall performance, while the macro average $( 9 9 . 4 9 \%$ accuracy) confirmed balanced crosscategory capability.
TABLE V PERFORMANCE OF NER CATEGORIZATION MODEL
Comparing RE model performance against VIEM [9], VIEM achieved slightly higher performance with groundtruth RE labels, while our model outperformed VIEM when using NER results as input. Tinyllama model’s near-zero recall $( 0 . 0 5 \%$ for zeros shot, $0 . 3 1 \%$ for few-shot, and $0 . 8 4 \%$ for fine tuning) reflects severe under-prediction, exacerbated by mismatches like “Oracle” vs. “oracle database”. RE model effectiveness depends significantly on NER output quality for entity identification and linking, with the pair generation process substantially influencing overall performance.
TABLE VI PERFORMANCE COMPARISON OF RE MODELS
5) Error Analysis: We conducted thorough error analysis in our models and identified three main patterns. Our NER and RE models face challenges with complex product names and version mismatches. For example, in “Microsoft Word 2007 SP3, Office 2010 SP2”, “2007” is mislabeled as part of the product name (I-PN) instead of a version (B-V).
Ambiguity in platform vs. product classification is evident when $\mathbf { \partial } ^ { \cdot \circ } i O S ^ { \prime }$ in “Newphoria Auction Camera for iOS” is misclassified as a product (B-PN) instead of a non-entity (O).
Product-version confusion occurs, as in date-based versions like “2017-02-12” in “Android for MSM before 2017-02-12” that cause boundary errors.
Heuristic post-processing rules partially mitigate these errors by reclassifying year-based identifiers (e.g., “2007”) as versions and normalizing complex version patterns, improving boundary detection. We also utilized context clues (e.g., prepositions like “for”) to distinguish platforms and by flagging common product name suffixes like “Edition” as I-PN, reducing misclassifications.
# B. Experiment II: Vulnerability Retrieval
1) Dataset: To simulate a real-world use-case scenario for our comparative analysis, we randomly selected and stored commonly used software packages within our testing environment. We then generated a system configuration file that comprised three distinct components: a network device segment consisting of 4 components, two virtual machines, one based on Linux OS and one based on Windows, with 46 and 22 components, respectively.
2) Steps: We queried the system’s configuration against multiple vulnerability databases: NVD, cve-search, OpenCVE and our proprietary database. This yields separate sets of vulnerabilities, denoted as $V _ { n v d }$ , $V _ { c v e s e a r c h }$ , $V _ { o p e n c v e }$ and $V _ { o u r }$ , respectively. A union set, $V _ { u n i o n }$ , is constructed from the individual sets to encompass all unique vulnerabilities identified across the databases. We did not involve OSV, Security Database and CVEdetails, due to different focus for OSV and limited accessibility for Security Database and CVEdetails. We further produced several sub-databases considering various query methods provided by NVD API, cvesearch and OpenCVE in terms of keyword (exact) match and $C P E$ match, following their official query instructions. For the latter, we use the $u C P E$ metadata generated in our vulnerability pipeline as query tags.
A manual verification process is conducted on $V _ { \mathrm { u n i o n } }$ to determine the applicability of each vulnerability to our system, involving a detailed review of vulnerability reports and matching identified vulnerabilities against the system configuration. Through the manual verification process, we establish a ground-truth dataset $V _ { \mathrm { g t } }$ , representing the accurately identified vulnerabilities applicable to our system. We then compare $V _ { \mathrm { g t } }$ against each database-specific vulnerability set (e.g., $V _ { \mathrm { n v d } }$ , Vcvesearch, Vopencve, $V _ { \mathrm { o u r } } )$ .
3) Evaluation Metrics: Validation of vulnerability retrieval performance involves calculating FP (or False Positives), FN (or False Negatives), TP (or True Positives), Retrieval Precision and Retrieval Coverage for each dataset. We then calculate the average of them. Here $V _ { n }$ denotes a database-specific vulnerability set and $V _ { \mathrm { g t } }$ denotes a ground-truth dataset.
• FP: Vulnerabilities in $V _ { n }$ but not in $V _ { g t }$ .
• FN: Vulnerabilities in $V _ { g t }$ but not in $V _ { n }$ .
• TP: Vulnerabilities in both $V _ { n }$ and $V _ { g t }$ .
Retrieval Precision: T $\mathrm { P / ( T P + F P ) }$ , the fraction of correctly identified vulnerabilities.
Retrieval Coverage: TP/ $( \mathrm { T P } + \mathrm { F N } )$ , the fraction of actual vulnerabilities correctly identified.
4) Results: The results are summarized in Table VII. The baseline outcomes, displayed in Columns 2 to 5, illustrate the precision and coverage of vulnerability retrieval using various methods: exact matching of NVD keywords via the $N V D A P I$ , keyword matching with localized cve-search database, and localized OpenCVE. Enhanced baseline results leveraging our CPE metadata tags as queries are detailed in Columns 6 to 8.
TABLE VII COMPARATIVE STUDY RESULTS OF VULNERABILITY RETRIEVAL
Typically, vulnerability analyzers are limited to system configuration data and lack comprehensive configuration-based metadata for precise vulnerability identification. Incorporating CPE query data improved precision and coverage across all baseline databases, confirming our assumption that standardized metadata enhances retrieval accuracy. Our vulnerability pipeline achieved the highest average precision of $7 2 . 6 \%$ . In terms of coverage, our solution provided a good result of $9 2 . 6 \%$ , close to the highest coverage of $9 5 . 4 \%$ achieved by NVD when using our generated $C P E$ metadata as query tags. | The dynamic landscape of cybersecurity demands precise and scalable solutions
for vulnerability management in heterogeneous systems, where
configuration-specific vulnerabilities are often misidentified due to
inconsistent data in databases like the National Vulnerability Database (NVD).
Inaccurate Common Platform Enumeration (CPE) data in NVD further leads to false
positives and incomplete vulnerability retrieval. Informed by our systematic
analysis of CPE and CVEdeails data, revealing more than 50% vendor name
inconsistencies, we propose VulCPE, a framework that standardizes data and
models configuration dependencies using a unified CPE schema (uCPE), entity
recognition, relation extraction, and graph-based modeling. VulCPE achieves
superior retrieval precision (0.766) and coverage (0.926) over existing tools.
VulCPE ensures precise, context-aware vulnerability management, enhancing cyber
resilience. | [
"cs.CR",
"cs.DB",
"cs.IR",
"D.4.6; H.3.3; H.2.8; I.2.7"
] |
# 1 Introduction
Software patching is a time-intensive and cognitively demanding task, especially for large and complex codebases. In the real world, effective patching often requires a combination of complementary skills: locating the faulty component, generating plausible fixes, and validating the changes. Recent works leverage general-purpose LLMs [32, 3, 34, 33, 15, 4] to construct patching agents with three components, responsible for localization, generation, and validation, respectively. They demonstrate remarkable performance on SOTA benchmarks (e.g., SWE-bench [22]) and show significant potential to automate patching in the real world. Despite these promising results, concerns about cost efficiency and data privacy further motivate the development of customized patching models. Current approaches train one model for the end-to-end patching pipeline through supervised fine-tuning (SFT) or reinforcement learning. Specifically, early works [54, 29] fine-tune 72B and 32B models through simple supervised data and achieve around $30 \%$ resolved rates on SWE-bench-Verified. More recent methods implement rule-based rewards and train reasoning models with reinforcement learning, with SWE-RL [50] achieving the highest resolved rate of $41 \%$ on SWE-bench-Verified using a 70B model. However, these monolithic approaches fail to imitate the real-world patching paradigm, where specialized engineers collaborate by dividing responsibilities according to their expertise.
Inspired by the collaborative workflow in the realistic software engineering practice [30, 51], we propose Co-PatcheR, the first patching agent with collaborative small reasoning models designed specifically for different components. Our key insight for having component(s)-specific models is that different components have different inputs, outputs, and capability requirements. Specifically, localization and generation require a similar capability of interpreting the issue description and understanding the current codebase. Validation, on the other hand, generates testing cases without knowledge of the patches or the codebase. Given the non-trivial differences, it is challenging for one small model to handle all these sub-tasks. Following this intuition, we craft a tailored task design and training recipe for different components, aiming to minimize the model size while preserving performance. Specifically, given the similarity between localization and generation, we train a single model (Loc-Gen model) to handle both functions. For localization, we design it as a two-step procedure, where the model first identifies the affected files and then pinpoints the specific lines responsible for the issue. This task decomposition reduces the task complexity and context length, making it more suitable for small models. For patch generation, we train the Loc-Gen model to not only generate patches but also review and refine its own solutions. With this additional self-critical capability, the Loc-Gen model can prevent common errors and generate higher-quality candidates. Finally, we train two models to generate multiple and diverse issue-reproducing test cases (PoC) and judge the patch correctness based on the PoC execution outcomes. The insight here is to provide diverse PoCs for a more sound correctness judgment. Here, Val-assert model and Val-no-assert model generate PoCs with and without assertions, respectively. We use these models together with available functionality tests and a majority vote mechanism to select the final patch. For all three models, we apply model distillation with a novel data construction method to enable their reasoning capabilities. Different from existing distillation models (e.g., S1 [31]), we find that creating reasoning data with correct answers is critical for our fine-tuned model to achieve high performance.
Through extensive experiments, we first show that when using only $\mathbf { 3 \times 1 4 B }$ models, Co-PatcheR can achieve a $46 \%$ resolved rate on SWE-bench-Verified with 60 patch candidates. Compared to SWE-RL, Co-PatcheR achieves a high resolved rate with $40 \%$ fewer parameters and $88 \%$ fewer samples. Besides, Co-PatcheR only needs to run one 14B model at a time, which is much more efficient than SOTA methods during the testing phase. Furthermore, with our specific reasoning data construction method, Co-PatcheR only requires 6K data for training, which is much more efficient than SOTA methods that use at least $3 0 \mathsf { K }$ samples. We then conduct a comprehensive ablation study for each model to validate its task design and training recipe. Finally, we validate the necessity of testing-phase reasoning, our choice of data number and model size, through more ablation studies.
Contributions. We propose Co-PatcheR, the first collaborative patching system with componentspecific reasoning models. Co-PatcheR is the most data- and parameter-efficient patcher that offers greater effectiveness, efficiency, and modularity than existing patchers with specialized models. Co-PatcheR ranks among the top-10 open-source systems on SWE-bench-Verified, outperforming all patchers with open-source models. We propose specific training recipes for each model and obtain the following new findings that are unique to patching:
• Using one model for localization and generation performs similarly to using separate models.
• Multiple models for PoC generation provide necessary diversity that a single model cannot achieve.
• Critique is important for generation, and multi-source data is important for validation.
• Simply increasing data or model size is not always helpful; data scale should match model size.
• Rejection sampling-based data filtering helps all components; but rationalization does not.
# 2 Existing Works and Limitations
LLM-based patching agent. There are several works on designing a patching agent using generalpurpose LLMs [25, 7, 6, 27, 2, 20, 8, 14, 13, 56, 59, 5, 42, 35]. Some agents achieve remarkable performance on the SWE-bench benchmark [22], a benchmark for real-world GitHub issues written in Python. The top-ranked open-source agents are OpenHands [48], Agentless [53], and PatchPilot [24]. Here, Agentless and PatchPilot follow a pre-defined workflow, where PatchPilot introduces a number of optimizations over Agentless. OpenHands, on the other hand, gives more freedom to the LLM to
Training Recipes Inference Pipeline Localization Data Generation Data Validation Data Issue: Add the support for...
deepSsteepk1:aFricleh iLtoecatluirzea.t.i.on deepSsteepk1:aPrcathcitheGcteunrer.a.t.ion Val-adseseprts-emeokdaerlchitVeacl-tnuor-ea.s.s.ert-model LocSatleipz1a:tion VaMlioddaetilosn
IRsespuoe SDteSrustcecrtpiupr2tei:oLnine L oFacinlaedl ilmzoaoatdi.eopl.nyp…y ICGshosludnekDnseCSstocedriep t2i:onPatc h dC…erfi timcodel(): Issue Description Issue Description I<atswnhsielenlwrkgt>ieP.o.no.ne<C…r/tawhtietnhk> TPa<IoatswChnkiienlEPlkxwrg>eoe.Pc.mno.<RepC/ert…ashtiuenltks> PLiaGntecehlnSotgecerapnal ie1tzr:iaotiinon Functionality
FSitleps 1provided in Line 311 and line 11 in … iPnatSctehpes1provided Tshoempeaitscshu ehsas… <Ptahsisnko>r..n.<o/t hink> 8888 … Filter Criteria Answer Correctness Candidate Patches Dynamic Tests Reasoning length A Best patches 088..88 SFT SFT SFT Majority Voting Loc-Gen model Val-assert Val-no-assert Final Patch
decide its workflow on the fly. OpenHands can have a higher performance but is less stable and more costly than PatchPiolt. We use PatchPiolt as the agent scaffold as it is more cost-efficient.
Specified models for patching. There are some early explorations on training customized LLMs for the patching task. At a high level, most methods train one model for the end-to-end pipeline, and they use relatively large models. Specifically, SWESynInfer [27] and SWE-Gym [36] train a model with 72 billion (72B) and 32B parameters, respectively, to perform the end-to-end pipeline. Both models are trained with supervised fine-tuning (SFT) without a testing-phase reasoning. Their resolved rate on SWE-bench-Verified is around $30 \%$ . SWE-Fixer [54] trains one 7B model for fault localization and a 72B model for patch generation with a resolved rate of $33 \%$ on SWE-bench-Verified.
Follow-up works explore training the model with reinforcement learning to enable testing-phase reasoning [58, 29, 50]. SEAlign [58] continues training on SWE-Gym [36] using Direct Preference Optimization [40] to retain preferred solution paths. SoRFT [29] and SWE-RL [50] define rule-based rewards and train the model with policy gradient methods (PPO [43] and GRPO [44]) for both localization and generation. Among these three methods, SWE-RL achieves the highest resolve rate of $41 \%$ on SWE-bench-Verified with a 70B model. A concurrent work, SWE-Reasoner [28], on the other hand, applies SFT-based model distillation (from DeepSeek-r1 [15]) to train a 32B reasoning model for the end-to-end pipeline. They further trained two 32B critique models for localization and patch selection. They achieve $46 \%$ resolved rate on SWE-bench-Verified with all three models.
Other code LLMs. First, there are some coding LLMs for general coding tasks (e.g., LeetCode, Data Science), including Qwen2.5-Coder [19], DeepSeek-Coder [61], WizardCoder [26], CodeLlama [41], and reasoning models (e.g., $S ^ { * }$ [23] and CYCLE [9]). Second, existing works also explored developing models for debugging [10, 21, 60, 52, 45], testing case generation [1, 18, 38], function and API calling [11, 37], secure code generation [17, 12, 47, 16, 55]. These efforts are orthogonal to our work on training patching-specific models.
# 3 Key Technique
# 3.1 Overview
Problem setup. We are given a software repository with one or multiple issues/bugs. Each issue has a simple text description, which may contain additional information such as desired behaviors and vulnerable inputs. Each issue may affect one or more functions in the repository. Our goal is to automatically analyze the issue and generate patches that fix all affected functions while preserving the behaviors of the unaffected functions (which are evaluated by running the hold-out unit test).
Technical insight. In this paper, we first argue that designing small and specialized patching models improves the overall efficiency of the patching system, as general-purpose LLMs are way larger. Besides, we do not need the model to process images or video; instead, it must precisely understand the repository, reason about issues, and generate correct patches. Second, we argue that having a single model for the end-to-end patching pipeline may not be the optimal solution given the differences between components and the collaborative nature of software patching. Specifically, both localization and generation need to interpret the issue description and connect it to the target codebase, especially the code chunks responsible for the issue (root cause). Localization needs this capability to scan the entire codebase to pinpoint the root cause, and generation then relies on that information to craft patch candidates. By contrast, test case generation during validation demands an even deeper understanding of the issue description, yet it does not need to analyze the full codebase (given that the test cases are typically generated only based on the issue description). Besides, test case generation is typically not aware of patch candidates in order to generate more objective and comprehensive testing. This hypothesis is supported by existing works [50, 54] that trained large models $\scriptstyle \left( > 7 0 \mathbf { B } \right)$ using various methods (SFT, offline and online RL) yet achieved relatively low performance (See Section 4). Based on these findings, we propose to train small but fine-grained models for different components and use them together in the agent system.
Technical challenges and key novelties. The high-level challenge is to reduce each model to the smallest feasible size without sacrificing too much performance. More concretely, we need to first design a data-efficient training recipe for each model (Challenge $\pmb { \mathfrak { o } }$ ). Once we have the models, we also need to decide how to effectively integrate them into the overall agent system (Challenge $\pmb { \theta }$ ).
Solve challenge ❶. We propose to train three reasoning models, Loc-Gen model for localization and generation, Val-no-assert model and Val-assert model for vulnerable test cases $( \mathrm { P o C } )$ generation with and without assertions (Figure 1). First, as shown in Section 4, training reasoning models can achieve a better performance than non-reasoning models even with fewer training samples, making the model training even more data- and cost-efficient. We propose to distill a large reasoning model with supervised fine-tuning. Recent research shows that high-quality distillation data enables training effective small reasoning models for math and coding tasks with limited computational resources [46, 31]. In contrast, training reasoning models with RL requires substantially more samples and computational power, contradicting our efficiency goals. Additionally, without welldesigned intermediate process rewards, training based solely on outcome rewards becomes costly and unstable [36]. Second, we train one model for localization and generation as they share similar capabilities. As shown in Figure 1, we divide the localization task into two lower-complexity subtasks, generate training data separately, and mix the data for model training. For generations, we integrate “critique training data” where the model reviews its own patches, enabling better reasoning about patching errors. Third, we design two models for PoC generation to enable more diverse PoC testing. For each model, we train it to (1) generate PoCs that potentially trigger the target issue and (2) evaluate patch correctness based on issue descriptions and PoC execution outcomes.
Solve challenge ❷. Figure 1 shows our proposed agent workflow, which is inspired by the efficient designs of PatchPilot [24]. Our localization first identifies the files and then the lines in the pinpoint files for locating potential root causes. The generation component then generates multiple patch candidates. Finally, we use our two PoC generation models for patch correctness testing, followed by a model-free functionality test that runs patches against public functionality tests. We rank patches based on dynamic testing results (Num. of passed PoC and functionality tests) to identify the highestscoring candidates. When multiple patches achieve the same highest score, we apply majority voting based on normalization [53] to select the final patch.
# 3.2 Training Recipe for Loc-Gen model
Our training recipe has four key components: training issue selection, training task construction, reasoning data generation, and filtering. Issue selection and data filtering are common across all components, while task construction and reasoning data generation are tailored to each model.
# 3.2.1 Issue Selection
We select training issues and the corresponding codebases from the SWE-bench training set and SWE-Gym dataset, which contains different repositories from our testing set. Our selection criteria focus on two key factors: First, we prioritize diversity by selecting issues from different repositories across different time periods to improve model generalization. Second, we include issues with varying difficulty levels, following recent work by S1 [31] showing that challenging cases improve reasoning abilities with limited training data. We quantify the difficulty using the number of files changed in the fix. For example, simple issues require changing only one file, while difficult issues require changes to five or more files. As shown in Figure 4, in general, the performance of our models first increases as the training data grows and then remains at a similar level. Guided by this trend, we select only 2,000 training issues, which is significantly fewer than existing RL-based methods, e.g., 11M in SWE-RL [50]. To avoid data leakage, we check the selected issues, making sure they come from different codebases from the testing ones and do not have overlapping functions.
# 3.2.2 Task Construction and Reasoning Data Generation
Localization. The key challenge for localization is to efficiently identify root causes while keeping the task manageable for a small model. To achieve this, rather than training the model to directly identify root causes from the entire repository, we decompose localization into two sequential subtasks: file localization and line localization. File localization identifies issue-related files based on the file structure, while line localization pinpoints the specific code chunks within selected files. This decomposition creates simpler sub-tasks with shorter context, better suited for small models. It also provides explicit guidance to the localization process. Specifically, for file localization, we provide the issue description and codebase file structure as input. For line localization, we input the issue description and the code from each selected file (splitting files that exceed the model’s context limit) (see Appendix D for the prompts). Based on this task design, we use Claude-3.7-Sonnet [4] to generate distillation data with a reasoning chain. The reasons for choosing Claude-3.7-Sonnet over other reasoning models are that some models (e.g., OpenAI models) do not provide their reasoning chains. Among the two main open-source models, DeepSeek-R1 [15] tends to give overly long reasoning chains, making it easy for our model to learn noisy steps. QwQ [39], meanwhile, performs poorly on patching-related tasks.
Generation. The key novelty of our generation model is to combine the patch critique with the patch generation. For patch generation, we design the input as the issue description and the identified root causes, and the output as a patch candidate. For the patch critique, our input is the issue description and a patch candidate, and the designed output is the review of the candidate as well as a correct answer if it is wrong [49]. We still use Claude-3.7-Sonnet to generate the reasoning data for these two sub-tasks. Having the critique data can guide the model to learns not only to produce an answer but also to diagnose and refine existing ones, thereby acquiring stronger reasoning skills during training. Such a process could further deepen the model’s understanding of the target issues and potentially yield higher-quality patches. Appendix D specifies our input prompts.
# 3.2.3 Reasoning Data Filtering
After generating the data, we apply filters based on two aspects: final answer correctness and reasoning length. First, based on our empirical observation, we conduct a rejection sampling to filter out the training samples that lead to wrong answers, as training with these noisy samples will jeopardize our model performance. This is a unique recipe for patching, as it does not align with existing work, Sky-T1 [46] and S1 [31], where they state that data with wrong answers is still useful for models to learn the reasoning structure in math problems. We believe the difference stems from the specialized nature of patching, where the related tasks are not frequently encountered during pre-training. As such, a small model needs access to correct answers to learn the correct knowledge. For general maths problems, however, the model has likely seen enough examples in pre-training and the model can tolerate occasional wrong answers. Here, reasoning data mainly teaches it to perform reasoning following a certain structure. Second, we filter out samples with excessively long reasoning chains, as these kind of long reasoning does not offer too much benefit even on general-purpose LLMs (Appendix B). A deep inspection of the reasoning chain shows that the model tends to overthink and repeat reasoning paths. Such data can even cause model collapse and jeopardize training efficiency.
# 3.3 Training Recipe for Val-assert model and Val-no-assert model
Rationale for having two models. The high-level goal of validation is to decide whether a candidate patch fixes the issue (patch correctness) and whether it affects other benign functions (functionality correctness). For functionality correctness, we can retrieve the public testing cases from each project and run dynamic testing against our patches. The key challenge is to design the patch-correctness validation. To enable dynamic testing, we propose to train two validation models to generate $\mathrm { P o C }$ test inputs and make a patch correctness judgment. The insights for having two models are as follows. First, existing patching agents have two ways of generating PoC tests: with or without assertions. Here, assertions mean specific assertion instructions in the PoC test that judge whether the PoC execution triggers the target issue. The test cases with and without assertions typically cover different program paths to the root cause site. To enable more comprehensive and sound PoC tests, we aim to generate PoCs in both styles. As such, we train two different models, one for each style. As shown in Appendix B.3, we also train one model to generate both types of PoCs with different input prompts and special tokens. However, the model cannot give PoCs with enough diversity, even with a high temperature during testing.
Table 1: Co-PatcheR vs. baselines on SWE-bench-Verified. N/A means not available.
Training recipe. Here, we use the same set of training issues as the Loc-Gen model. We design two types of input prompts to instruct the teacher model to generate PoCs with and without assertions (Appendix D). Both input prompts contain the issue description and a format instruction (with/without assertions). Different from Loc-Gen model, we use two teacher models, Claude-3.7-Sonnet and o4-mini, to collect the reasoning data. The goal here is again to increase the PoC diversity and thus path coverage to the root causes. For Val-no-assert model, we further gather judgment data, where the input is the issue description, the current patch, and its PoC execution outcomes, and the output is whether the patch fixes the issue. We train Val-no-assert model to generate the PoCs and judge the patch correctness at the same time. For Val-assert model, we only train it to generate the PoCs, as the PoC correctness can be decided by assertions. As shown in Figure 1, we run dynamic testing with PoC and functionality tests, and conduct a majority vote to select the final patch when dynamic testing has ties.
# 4 Evaluation
# 4.1 Co-PatcheR vs. Baselines on SWE-bench
Setup and design. We adopt the Qwen-2.5-Coder-14B model [19] as our base model for all three components. Compared to more recent models, Qwen-2.5-Coder-14B has the knowledge cut of March 2024, which is prior to the SWE-bench benchmark (published in May 2024). It is less likely to be trained specifically for the SWE-bench data. As introduced in Section 3.2.1, we select 2K training issues from the SWE-bench training set and the SWE-Gym [36] dataset and conduct filtering to avoid data leakage. After training our three customized models, we integrate them into our end-to-end pipeline (Figure 1) and evaluate our system (Co-PatcheR) on the SWE-bench-Verified dataset. The specific training hyper-parameters are shown in Appendix A. During the inference, for every issue, we generate 5 root causes from localization, 60 candidate patches, and $4 \mathrm { P o C s }$ , using them to get one final patch. We compare Co-PatcheR with SOTA agents built on commercial LLMs and those with open-source models. We report the resolved rate of these agents’ final patch $( b e s t @ k )$ , as well as their number of patch candidates $k$ (if available). For open-source models, we also compare Co-PatcheR with them in training data and model size. Note that a recently released concurrent arXiv work (SWEReasoner [28]) claims a $46 \%$ resolved rate with $\mathrm { 3 \times 3 2 B }$ models. We achieve the same resolved rate with over $50 \%$ smaller models.
Results. Table 1 shows the comparison between Co-PatcheR and the baseline methods. As we can first observe from the table, most existing specialized models have a large performance gap from the agents with commercial models. SWE-RL archives the highest resolved rate with a 70B model with 110M training data and 500 candidate patches. In comparison, Co-PatcheR sets a new open-source record with a resolved rate of $4 6 . 0 0 \%$ using only $3 \times 1 4 \mathrm { B }$ models trained with 6K training data. This result validates the advantage of having component-specific models over one end-to-end model when patching with small models. It also demonstrates the effectiveness of our issue selection and reasoning data generation and filtering methods, which significantly improve Co-PatcheR’s data efficiency. Besides, the resolved rate of Co-PatcheR ranks among the top-10 open-source tools on SWE-bench-Verified, beating many agents with commercial models. The result shows the importance of having specialized models for software patching. Finally, Table 1 shows the advantages of reasoning models for both general and specialized LLMs. For example, OpenHands has a $7 \%$ improvement when using Claude-3.7-Sonnet (reasoning model) compared to Claude-3.5-Sonnet (non-reasoning model). At the same time, Co-PatcheR and SWE-RL also have significant advantages over other baselines with non-reasoning models.
# 4.2 Effectiveness and Ablation Study of Each Component
# 4.2.1 Localization
Design. We evaluate our localization component against three commercial LLMs (GPT-4o, Claude-3.7-Sonnet, o4- mini) on SWE-bench-Verified, measuring both file-level and line-level localization accuracy. To isolate the effect of our data filtering strategy, we also train a comparison model (Loc-NoFilter) with unfiltered data containing both correct and wrong answers, using the same 2K data size for fair comparison. We also compare against our base model (Qwen-2.5-Coder-14B) to demonstrate the impact of our specialized training. For all models, we select Top $\textcircled { a } 5$ files and report whether the correct answer appears in the root causes identified from these files. For issues affecting multiple files or lines, we enforce strict evaluation criteria, counting a localization as correct only when it identifies the complete set of affected files and lines. Note that we do not consider training a model to directly identify vulnerable lines from the entire repository, as it will exceed the model’s context limit.
Figure 2: The top $\textcircled { a } 5$ file-level and linelevel accuracy for localization.
Results. As shown in Figure 2, SOTA commercial reasoning models o4-mini and Claude-3.7-Sonnet achieve the highest performance on both file and line levels, marginally outperforming Co-PatcheRLoc. However, Co-PatcheR-Loc achieves comparable performance to GPT-4o, demonstrating the advantage of specialized reasoning models over general non-reasoning models. These results support our claim that specialized models with proper testing-phase scaling can compete with much larger commercial LLMs on specialized tasks. The substantial performance gap between Co-PatcheR-Loc and both Qwen-2.5-Coder-14B and Loc-NoFilter validates the effectiveness of our training recipe, particularly our reasoning data filtering approach.
# 4.2.2 Generation
Design. Following the experiment design for localization model, we evaluate our generation component against commercial LLMs (GPT-4o, o4-mini, Claude-3.7-Sonnet) and our base model (Qwen-2.5-Coder-14B). To isolate the contributions of our training innovations, we test two additional variants: GenBase (using unfiltered reasoning data without critique training) and Gen-NoFilter (adding critique data but without data filtering) to verify the effectiveness of both data filtering and critique training techniques. For a fair comparison and to focus specifically on patch generation capabilities, we use GPT-4o localization results as consistent input across all models, evaluating the performance using the pass $\textcircled { a } 1$ metric, which evaluates the successful issue resolution with only one generated patch.
Figure 3: The pass $@ 1$ resolved rate for generation.
els, with results consistent with our localization experiments: o4-mini and Claude-3.7-Sonnet outperform Co-PatcheR-Gen in single-patch performance. However, as demonstrated in Figure 5b, if effectively leveraging our testing-phase scaling approach, Co-PatcheR-Gen achieves comparable performance to these much larger models when generating only 4 more patch candidates. Furthermore, the performance advantage of Co-PatcheR-Gen over both Gen-Base and Gen-NoFilter validates our novel designs: critique training and data filtering substantially improve patch quality. We note that GPT-4o’s unexpectedly low performance stems primarily from formatting issues, as it frequently generated syntactically invalid patches that did not follow our required format specification.
# 4.2.3 Validation
Design. We conduct the ablation study for both the PoC generation model and the validation workflow, respectively. For the PoC generation model, we compared four variants: (1) Val-no-assert-Base model, trained with reasoning data from Claude-3.7-Sonnet without filtering, (2) Val-no-assert-NoFilter, trained using both Claude-3.7-Sonnet and o4-mini reasoning data without filtering; (3) Val-no-assert model only, and (4) Co-PatcheR-Val: Val-assert model $+ \mathrm { { V a l } }$ - no-assert model. We integrated each model into our validation pipeline and measured their resolved rates on an identical set of 20 patch candidates produced by our generation component. A higher resolved rate indicates more effective validation.
Figure 4: The resolved rate for different validation models and validation workflow.
To evaluate our validation workflow design, we test three strategies with the same Val-assert model $+ \mathrm { { V a l } }$ -no-assert model: (1) Co-PatcheR-Val, which applies the whole workflow, (2) Co-PatcheRNoPoC, which omits PoC testing and relies solely on functionality tests and majority voting, and (3) Co-PatcheR-NoDyn, which applies majority voting directly to patch candidates without any dynamic testing. Each workflow also processed the same set of 20 patch candidates for fair comparison.
Results. Figure 4 presents the comparative performance of different validation models and workflows. First, using two models performs better than only having Val-no-assert model, confirming the better PoC diversity. Second, Val-no-assert model outperforms Val-no-assert model-NoFilter, confirming the generalizable effectiveness of our data filtering strategy across all components. Comparing Val-no-assert model-NoFilter with Val-no-assert model-Base further justifies the necessity of having diverse PoCs in training data, which guide our model to learn to generate multiple PoCs for the same issue. The results in Figure 4 further show the necessity of having both PoC tests and function tests during validation. In Appendix C, we show that when having 60 patch candidates, majority vote is more effective than the outcome reward model used in SOTA agents [48] and even Claude-3.7-Sonnet. As such, we stick to the majority vote as the final patch selection.
# 4.3 More Ablation Study and Sensitivity Test
We use our generation model to conduct the ablation study on data size, model size, and testing-phase scaling strategy. The results of the other two components are consistent (Appendix B).
Data size. We randomly sample a subset of 500 and 1K cases from our current 2K training set and train two models using our proposed recipe. We report the Pass $\ @ 1$ performance of these models in Figure 5a. The result shows that the performance increases more as the parameters grow from 500 to 1K than from 1K to 2K. As shown in Appendix B.2, the model performance for localization no longer increases as we further increases the training data to 5K. As such, we select 2K as our final training data size. The findings show that, for small models, continually adding more data does not guarantee better performance (given the risk of overfitting). We further train a non-reasoning model for patch generation (SFT with ground truth patches). Our result shows that a non-reasoning model trained with 2K training data performs even worse than our reasoning model trained with 500 samples. It further shows that the reasoning model is more data-efficient.
Model size. We change our base model to Qwen-2.5-Coder-7B and Qwen-2.5-Coder-32B (same model family with different sizes) and retrain our patch generation model with the same training data. The Pass $\ @ 1$ results in Figure 5a show that a larger model indeed improves the performance.
(a) The pass $\ @ 1$ resolved rate of different training data/model size for validation.
Figure 5: More ablation studies on the generation component.
(b) The resolved rate for $\#$ of patch candidates.
However, considering that the improvement of the 32B model over the 14B model is not significant, we still choose the smaller one.
Testing-phase scaling. We test two scaling performances. We fix the output context limit and ask the model to generate $\scriptstyle \mathrm { K = 1 }$ , 10, 20, 40, 60, and 80 candidate patches. For each setting, we 1) compare the pass ${ \ @ K }$ resolved rate (whether correct patch is in the generated candidates) to obtain the upper bound of patch generation; 2) run our validation to select the final patch $( b e s t @ K )$ to assess the upper bound of Co-PatcheR. As shown in Figure 5b, increasing the sample numbers can prompt the model to generate more diverse patches, which increases the chances of hitting the correct one. This validates our arguments that small models with many samples can reach a similar performance to large models with fewer samples (without requiring significantly more computing, as the models are much smaller). Increasing sample numbers can also help the system as a whole; however, having too many samples will add a burden to validation and may jeopardize the validation accuracy.
# 5 Discussion
Rationalization does not always help. In patch-generation training data collection, we tried a rationalization scheme [57]: We provide the teacher model with the ground-truth patch and force it to generate a reasoning without mentioning the ground truth patch. When context is insufficient, the model invents latent details (e.g., suggesting a likely some_function that is not in the context), causing the student model to learn hallucinated patterns. Of ten instances that Co-PatcheR originally solved but fail after fine-tuning with the reasoning data, six fail due to hallucinated identifiers. Thus, rationalization can degrade patch-generation performance.
Component specific models vs. one model. In this paper, we argue that to minimize the model sizes, we need to train models specific to individual components. However, a counterargument for promoting one end-to-end model could be that all three tasks work on the same codebase, and the knowledge about the codebase can be shared across tasks. Although we acknowledge the validity of this argument, we do not take this route as we aim to push the limits for small models, and existing works following this methodology show limited performance. Future works can explore the efficient training methods and proper model sizes for such a unified model.
Limitations and future works. First, designing specific and effective reward functions requires non-trivial effort. We defer to future work to explore effective RL methods to continue training our current models and see if the performance can be further improved. Second, given our focus on the model side, the current patching leverages a simplified agent scaffold without complex tool calls. We will further enrich the agent with more tool calls and train specified models for tool call planning. Third, with large samples, our localization and generation components can reach the performance of SOTA commercial models. Future works will explore how to design more effective validations to pinpoint the correct patch from many candidates. | Motivated by the success of general-purpose large language models (LLMs) in
software patching, recent works started to train specialized patching models.
Most works trained one model to handle the end-to-end patching pipeline
(including issue localization, patch generation, and patch validation).
However, it is hard for a small model to handle all tasks, as different
sub-tasks have different workflows and require different expertise. As such, by
using a 70 billion model, SOTA methods can only reach up to 41% resolved rate
on SWE-bench-Verified. Motivated by the collaborative nature, we propose
Co-PatcheR, the first collaborative patching system with small and specialized
reasoning models for individual components. Our key technique novelties are the
specific task designs and training recipes. First, we train a model for
localization and patch generation. Our localization pinpoints the suspicious
lines through a two-step procedure, and our generation combines patch
generation and critique. We then propose a hybrid patch validation that
includes two models for crafting issue-reproducing test cases with and without
assertions and judging patch correctness, followed by a majority vote-based
patch selection. Through extensive evaluation, we show that Co-PatcheR achieves
46% resolved rate on SWE-bench-Verified with only 3 x 14B models. This makes
Co-PatcheR the best patcher with specialized models, requiring the least
training resources and the smallest models. We conduct a comprehensive ablation
study to validate our recipes, as well as our choice of training data number,
model size, and testing-phase scaling strategy. | [
"cs.AI",
"cs.CR",
"cs.SE"
] |
# 1 Introduction
Recent advances in large language models (LLMs) have greatly improved natural language understanding and generation. However, purely pre-trained LLMs often fail to align with human intentions or specific tasks (Ouyang et al., 2022), prompting increasing focus on alignment techniques. Supervised fine-tuning (SFT) trains models to follow human instructions, and remains widely used and effective for improving downstream performance (Wei et al.; Guan et al., 2024).
Although recent works have explored how model size and training-data characteristics influence downstream tasks in the context of SFT (Jin and Ren, 2024; Dong et al., 2024), large-scale research specifically examining which aspects of
SFT datasets benefit different base models remains limited. While some studies compare or analyze publicly available models (Oyama et al., 2025), these are not controlled experiments and often introduce biases—such as favoring certain model families. Consequently, it remains unclear how SFT of various models on different datasets affects benchmark performance, how relationships among datasets and benchmarks vary across models, and which internal weights are most responsible for these effects. Furthermore, there are several SFT training approaches including Low-Rank Adaptation (LoRA) (Hu et al., 2022), and there is ongoing debate about the optimal amount of data required (Zhou et al., 2024; Chen et al., 2023); however, there has yet to be a comprehensive, quantitative comparison. Hence, a comprehensive examination of these issues on SFT is urgently needed.
In this study, we trained twelve diverse base models on multiple datasets spanning different domains, creating a large suite of SFT models that we subsequently evaluated on a broad range of tasks (Figure 1). Specifically, we address the following Research Questions (RQs):
1. How do models, training data, and benchmarks interact with one another? Do certain training datasets consistently enhance benchmark performance across a variety of models, or does each model exhibit its own distinct preferences? Likewise, do relationships among different datasets and benchmarks remain the same across models? 2. Which properties of the training data used for SFT affect downstream performance? 3. Which layers in the model are most critical for SFT—are there universal patterns across different models? 4. How do various factors debated in SFT—such as different training methods, sample sizes, and cross-lingual transfer—impact performance?
Figure 1: Overview of this study. We conduct SFT on numerous combinations of base models and training data. These models are evaluated on a variety of benchmark tasks to comprehensively examine the relationships among the base models, training data, and benchmark tasks.
The main contributions of this work can be summarized as follows:
Large-Scale, Integrated Evaluation By systematically performing SFT on multiple base models and various training datasets, we uncover the complexity of relationships among models, data, and downstream tasks. While the relationships between training data and evaluation tasks follow broadly similar patterns across models, they also exhibit model-specific characteristics.
Revealing a Simple “Perplexity Is Key” Law We find that training data with lower perplexity for the base model consistently leads to greater improvements in downstream performance. In contrast, factors once considered crucial—such as content similarity between training and evaluation data or tokenizer compatibility—do not exhibit as strong an effect as perplexity.
Strong Correlation Between Mid-Layer Weight Changes and Performance We observe that changes in mid-layer weights correlate more strongly with downstream performance gains than changes in either the top or bottom layers. Indeed, intrinsic dimensionality analysis of embeddings revealed that the embedding space begins to diverge substantially from the base model at midlayer positions, suggesting these layers actively expand the model’s representational subspace during SFT. This pattern appears consistent across multiple models, offering critical insights for efficient fine-tuning and model monitoring.
Embedding the SFT Landscape Projecting the log-likelihood vectors of fine-tuned models into a common latent space lets us compare diverse training dynamics in one coordinate system. The resulting map shows that the global layout is determined by model family rather than training corpus, that checkpoints from successive epochs converge toward a shared instruction-following region, that enlarging the instruction set from 1k to $2 0 \mathrm { k }$ nudges models only slightly outward from this centre, and that LoRA trajectories almost perfectly overlap those of full-parameter tuning.
Resource Release for Future Research All finetuned models produced in this study will be publicly released. We expect this comprehensive set of models serves to accelerate deeper investigations of SFT and to foster rapid progress in the field.
# 2 Related Work
The role of training data characteristics in SFT has been highlighted in many prior studies. For instance, mixing code-generation data has been suggested to enhance a model’s reasoning and logical abilities (Dong et al., 2024). Similarly, incorporating instruction data that includes procedural knowledge could improve mathematical reasoning (Ruis et al., 2024). Furthermore, considering task relevance when selecting datasets can lead to more robust general performance (Huang et al., 2024; Zhang et al., 2024).
While early work focused on how to finetune—comparing full-parameter updates against LoRA (Ivison et al., 2023; Zhuo et al., 2024; Dettmers et al., 2024; Zhao et al., 2024b; Biderman et al., 2024), or debating sample size (Zhou et al., 2024; Zhao et al., 2024a; Chen et al., 2023)—more recent studies have shifted attention to the statistics of the training data itself. For example, Jin and Ren (2024) and Wu et al. (2025) independently show that lower perplexity and moderate sequence length are stronger predictors of SFT success than sheer volume.
Overall, most studies focus on particular models or tasks, and there remains a lack of comprehensive, large-scale evaluations across multiple models. This study aims to offer a broader perspective by controlling for model, data, and fine-tuning methods on a larger scale, thus providing more integrated insights into SFT behavior.
# 3 Methods
This section describes the base models, SFT procedures, and evaluation benchmarks.
# 3.1 Base Models
We employed a total of 12 models with approximately 7B parameters each across English, Chinese, and Japanese for SFT experiments. Specifically, we selected English models: OLMo7B(Groeneveld et al., 2024), Llama3-8B(Dubey et al., 2024), Mistral-7B(Jiang et al., 2023), and Gemma2-9B(Team et al., 2024); Chinese models: Qwen2.5-7B(Yang et al., 2024), ChineseLlama3-8B(Cui et al., 2023), Chinese-Mistral7B(Hsu et al., 2024), and Yi1.5-9B(AI et al., 2025); and Japanese models: LLMjp-3-7B(LLMjp et al., 2024), Llama3-Swallow-8B(Fujii et al., 2024), Swallow-Mistral-7B(Fujii et al., 2024), and Sarashina2- ${ \bf \cdot } 7 { \bf B } ^ { 1 }$ . By comparing these diverse models, we investigate not only cross-lingual differences but also behaviors during continual pretraining within model families such as the Llama family (Llama3, Chinese-Llama3, Llama3-Swallow) and the Mistral family (Mistral, Chinese-Mistral, Swallow-Mistral). To facilitate fair comparison at the peak effectiveness of instruction-tuning, all base models used in this experiment had not undergone any subsequent post-training. More information on each model can be found in Appendix A.
# 3.2 Training Datasets
We utilized 10 distinct datasets categorized into 4 major groups. Although our base models cover English, Chinese, and Japanese, all training datasets used for SFT are exclusively in English. Specifically, we selected General Tasks: Alpaca(Taori et al., 2023), LIMA(Zhou et al., 2024), and UltraChat(Ding et al., 2023); Coding Tasks: CodeAlpaca(Chaudhary, 2023) and Magicoder(Wei et al., 2024); Math Tasks: OpenMathInstruct(Toshniwal et al., 2024) and MathInstruct(Yue et al., 2023); and Classic NLP Tasks: FLAN(Wei et al.). The FLAN dataset(Wei et al.) further consists of 3 subcategories. FLAN Knowledge includes BoolQ(Clark et al., 2019), NaturalQuestions(Kwiatkowski et al., 2019b), and TriviaQA(Joshi et al., 2017). FLAN Reasoning includes ARC-Easy & Challenge(Clark et al., 2018), HellaSwag(Zellers et al., 2019), WinoGrande(Sakaguchi et al., 2019), and PIQA(Bisk et al., 2020). FLAN Comprehension includes QuAC(Choi et al., 2018) and SQuAD v2(Rajpurkar et al., 2018). The categorization of FLAN follows the criteria defined in Dubey et al. (2024); Contributors (2023).
To uniformly compare a wide variety of base models, all datasets were preprocessed under consistent conditions. Initially, samples exceeding the maximum sequence length supported by all models’ tokenizers were removed, as overly long samples cannot be adequately learned. Subsequently, either 1k or $2 0 \mathrm { k }$ samples were randomly extracted from each dataset. Further details on the training datasets are provided in Appendix B.
# 3.3 Training Settings
We trained a total of 1,070 models by varying several conditions. First, all 12 models underwent both full-parameter and LoRA training with a sample size of 1k for each individual dataset. Additionally, we conducted training using a combined dataset (All Dataset) to assess the effect of mixing all data.
For further validation, we conducted additional experiments using 3 primary models (OLMo, Qwen, and LLM-jp), focusing on the impact of dataset size by comparing training results using 1k and $2 0 \mathrm { k }$ samples. In this specific experiment, the learning rate schedule was switched from cosine (used in regular training) to constant to isolate the effect of dataset size.
Through preliminary experiments, we determined optimal hyperparameters for both fullparameter fine-tuning and LoRA, ensuring that the supervised fine-tuning process was conducted under stable and well-tuned conditions. Details of the preliminary experiments are provided in Appendix C, while training configurations, computational costs, and a few exceptional cases where training did not complete successfully are described in Appendix D.
Figure 2: a Average of the performance change for diverse benchmarks from the each baseline model after SFT on each training dataset. Each column is min-max scaled to the $[ - 1 , 1 ]$ range. b The performance changes visualized for each model individually. c Pairwise correlation matrix of performance changes across all SFT models, with the corresponding hierarchical-clustering dendrogram superimposed. d The cumulative explained variance ratio obtained by applying PCA to all concatenated results from b.
# 3.4 Evaluation
We evaluated all models on downstream tasks using OpenCompass2 (Contributors, 2023), a largescale evaluation tool. We evaluated model performance across 12 benchmark datasets spanning 5 categories: covering Math (MATH (Hendrycks et al., 2021c), GSM8K (Cobbe et al., 2021)), Coding (HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021)), Knowledge (BoolQ (Clark et al., 2019), NaturalQuestions (Kwiatkowski et al., 2019a), TruthfulQA (Lin et al., 2022)), Examination (MMLU (Hendrycks et al., 2021b,a), MMLUzh (Li et al., 2023a), MMLU-jp) and Instructionfollowing (MT-Bench (Zheng et al., 2023), AlpacaEval v2.0 (Li et al., 2023b)). A detailed description is provided in the Appendix E. As all models were trained in a zero-shot instructionresponse format, we focus primarily on zero-shot inference results in our evaluation. Gemma2-9B and Swallow-Mistral-7B were excluded due to inconsistent evaluation conditions, and we report results mainly for the remaining 10 models.
# 4 Results
# 4.1 RQ1. Relationship Among Models, Training Data, and Downstream Tasks
First, we examine how various base models interact with different training datasets and how these relationships shape downstream performance. We aim to determine whether certain datasets provide uniform benefits across models or if each model exhibits unique sensitivities. To this end, we analyze evaluation results obtained by fine-tuning each of the ten base language models with each of the ten SFT training datasets, every dataset containing 1k examples.
Figure 2a visualizes the relationship between training datasets and downstream tasks when aggregating results across all models. Some datasets show clear improvements for multiple tasks, while others offer minimal, or even negative gains. For instance, Alpaca and UltraChat generally deliver consistent performance boosts, whereas FLAN is detrimental to most tasks (except Natural Questions, which aligns with its domain). In addition, MathInstruct and OpenMathInstruct particularly boost MATH and GSM8K, whereas Magicoder benefits coding benchmarks yet still improves a wider task range than the math corpora. Notably, English-only SFT already transfers to Japanese (MMLU-jp) and Chinese (MMLU-zh) evaluation—see Appendix F for a dedicated cross-lingual analysis. It is also noteworthy that LIMA, a carefully curated dataset for SFT, did not yield substantial performance gains in our controlled setting compared to Alpaca and UltraChat.
Figure 3: a Pairwise correlations between evaluation tasks in terms of performance improvements across training datasets. b Similar to a, but focusing on relationship between correlations between training datasets. c Model-tomodel similarity for a (top) and b (bottom), respectively. d Comparison of the lower-triangle elements of the two similarity matrices in c.
Figure 2b plots these relationships separately for each model. Overall tendencies are similar, but there are also considerable differences across models—revealed only because we employed a unified experimental procedure. Some models benefit from almost all training data, whereas others demonstrate minimal gains.
In Figure 2c, we show a correlation matrix of performance gains across different models. As anticipated, models belonging to the same family exhibit high correlations, suggesting that even with additional training, the impact of SFT remains similar within each family. Surprisingly, the language in which a model was initially trained does not appear to substantially affect its overall similarity to others.
Figure 2c also reveals that, in general, the performance structures of the models are quite similar. To examine this more thoroughly, we vertically concatenated the data $\times$ benchmark matrices for each model, applied PCA, and then computed the cumulative explained variance ratio (Figure 2d).
As shown, about five principal components explain over $90 \%$ of the total variance, indicating a considerable degree of similarity in how different datasets influence SFT outcomes. Nonetheless, certain differences among models persist.
Figure 3a, pairwise correlation performance improvements across training datasets, highlights that the similarity or synergy across training datasets varies substantially by model: the same pair of datasets could be complementary in one model but neutral or even conflicting in another. Conversely, Figure 3b, pairwise correlation across evaluation tasks, shows a consistency across models, suggesting that tasks requiring similar reasoning skills (e.g., Math tasks) remain closely grouped. A paired t-test on the lower-triangle distributions of Figure 3c shows that the correlations across evaluation tasks significantly exceeds that of training datasets $( p \ < \ 0 . 0 1 )$ , confirming that the effects of training datasets is more diverse than evaluation tasks (Figure 3d). Overall, these findings underscore that while some training datasets offer consistent improvements, the degree of benefit often depends on the model. Furthermore, although fine-tuning effects on evaluation tasks are similar across models, those on training datasets are highly model-specific.
Figure 4: Analysis of training data properties that affect downstream performance. We compare perplexity (a), and token length ${ \bf ( b ) }$ with the average performance changes of benchmark tasks for the SFT models, highlighting that lower perplexity is a strong predictor of higher performance.
# 4.2 RQ2. Which Properties of Training Data Matter Most?
Next, we investigate which characteristics of training data most influence performance. Our focus includes perplexity, average token length, and semantic similarity to clarify which factors truly drive effective SFT.
As shown in Figure 4a, there is a clear positive correlation in many tasks and models between lower perplexity (w.r.t. the base model) and improved downstream performance. This implies that data lying in a domain or language distribution already “understood” by the model can be leveraged more effectively in SFT.
Figure 4b reveals modest correlation between the mean token length of a dataset and downstream performance, suggesting that simply using shorter or longer texts does not strongly drive better results. A prior study has reported that longer texts could be important for improved performance (Zhao et al., 2024a), and our findings partially support a straightforward link between text length and outcome quality.
Finally, we compare semantic embedding-based similarity between training and evaluation benchmark against performance improvement. Surprisingly, direct semantic similarity is not as strong a predictor as perplexity. Although we observe domain-specific gains (e.g., math data helps on
Math tasks, code data helps on coding tasks), a broader trend indicates that linguistic and structural closeness (as reflected in perplexity) may be more decisive than topical resemblance alone. See Appendix G for the details.
In sum, perplexity relative to the base model emerges as a strong predictor of downstream gains, surpassing factors like token length or broad semantic alignment.
# 4.3 RQ3. Layer-wise weight changes, their relationship to performance, and the effect of SFT on representational dimensionality.
We then explore how model parameters shift during fine-tuning by analyzing layer-wise weight updates across multiple models. Our goal is to identify which layers are most critical in translating SFT into performance gains.
Figure 5a plots two curves: the blue line is the Pearson correlation between weight-delta magnitude and overall accuracy gain, whereas the orange line shows the raw weight-delta magnitude itself. The orange line grows toward upper layers, yet the blue line peaks in the middle, indicating that the largest edits are not the most consequential ones. Rather, we find that the middle layers exhibit the strongest positive correlation with performance gains.
Figure 5b compares the similarity of these layer
b 8 Lama3-Swalow-B Chinese-Lama3-8B ra17 Chinese-Mista7 : P : 0.25 . -05 . O . . 1 -0.25 0.130.140.150.16 0.12 0.140.16 0.18 0.130.140.150.160.17 0.11 0.12 0.13 0.12 0.13 0.14 RLMo-7 Y158 Qwen2.5-7B 0.-3-7B Sarashina2-7B 81 0.0 : · C 0.5 . : : : 0.0 0.0 0 000.2 -0.5 0.1 0.12 0.01 0.16 818 0.20 . -0.2 Layer position Weight change Weight change Weight change Weight change Weight change C d Layer po iop = 0.6 18 Fine-tuned 1.0 Pretrained Mistral-7B 14 -0.0刃 8阳相 0.5 0 18 0.0 0.1 0.6 Layer position
wise change patterns across different models. Even though models differ at the architectural level, their mid-layer updates under SFT can follow surprisingly similar trajectories. Still, some modelspecific nuances remain.
Figure 5c extends this idea across models: it correlates, for different layer, the weight-change vector of one model with the corresponding vector of every other model. The strongest agreement again lies in the mid-layers, suggesting that SFT enforces a shared instruction-following mechanism across models.
Figure 5d complements the weight-change analysis by quantifying how SFT alters the geometry of the training corpus in embedding space. For every layer we computed the intrinsic dimensionality (ID) of the sentence-level embeddings produced before and after SFT (methodological details and additional results in Appendix H). The difference between the fine-tuned and pretrained ID curves is minimal in the lower half of the network, but from layer-position $= 0 . 6$ onward the dimensionality increases sharply and remains elevated through the output layers. The inflection point coincides with the correlation peaks in Figure 5a, implying that mid-layer updates do more than reduce loss—they actively expand the model’s representational subspace.
Our findings indicate that changes in the midlayers show the strongest correlation with improved results, suggesting they play a pivotal role in capturing the benefits of SFT.
# 4.4 RQ4. Other Factors
Finally, we consider additional aspects of SFT, including LoRA versus full-parameter tuning, the effect of sample size, and cross-lingual transfer—each potentially influencing the final performance.
To disentangle the multiple factors in SFT, we mapped the 757 fine-tuned models—covering 10 base architectures $\times 1 0$ training datasets and spanning LoRA vs. full-parameter updates, 1–10 training epochs, and sample sizes of 1k or 20k—into a common latent space using log-likelihood-vector projection (Oyama et al., 2025). For every model we computed a 1,950-dimensional vector of tokenlevel log-likelihoods by randomly sampling 150 questions from each of the 13 evaluation tasks. tSNE then embedded these vectors into two dimensions, giving five complementary views in Fig. 6.
Model families dominate. When points are coloured by model (Fig. 6a) the clusters group almost perfectly by architecture, whereas colouring by training data produces only weak separation.
Figure 6: t-SNE visualization of log-likelihood vector. a Colour $\mathbf { \tau } = \mathbf { \tau }$ model; b colour $\mathbf { \tau } = \mathbf { \tau }$ training data; c epoch trajectories for three models; d colour $\mathbf { \tau } = \mathbf { \tau }$ sample size; e shape $\mathbf { \tau } = \mathbf { \tau }$ tuning method (circle $\mathbf { \tau } = \mathbf { \tau }$ full, triangle $\mathbf { \tau } = \mathrm { L o R A } ,$ ).
Thus the inductive biases of the base model outweigh the specific SFT corpus in determining the final representation.
Epoch-wise trajectories converge. For the three checkpointed models (Qwen, LLM-jp, OLMo) we plot epochs 1–10 (Fig. 6c). Irrespective of dataset, trajectories spiral toward a common sub-region, suggesting that SFT gradually aligns the representations toward a shared “instruction-following” direction.
Small sample size is often sufficient. Colouring by training-set size separates models trained on $2 0 \mathrm { k }$ samples from those trained on 1k samples. The 20k-sample–trained points occupy the outer rim of the manifold more often, whereas the 1ksample–trained points cluster nearer the core. Thus a compact 1k instruction set already supplies sufficient signal for effective instruction-tuning, while scaling up to 20k samples can sometimes pull the representation away from the optimum. Indeed, our quantitative evaluations showed no consistent accuracy advantage for the $2 0 \mathrm { k }$ -sample models over their 1k-sample counterparts.
LoRA vs. full-parameter fine-tuning. Shapecoding full-parameter models as circles and LoRA models as triangles reveals minimal separation; LoRA points are only slightly more peripheral. Quantitatively, full-parameter tuning still excels on reasoning-heavy maths tasks, but LoRA enjoys a small mean advantage on open-ended QA benchmarks.
Cross-lingual transfer persists. We also examined the effect of SFT effects on Japanese and Chinese MMLU variants (full results and plots are in Appendix F). While we only used English training datasets, performance gains on MMLU are strongly correlated on those of MMLU-jp and MMLU-zh. This supports the hypothesis that content overlap between benchmarks, rather than surface-level language similarity, governs cross-lingual transfer in SFT. See Appendix F for the details. | Supervised fine-tuning (SFT) is a critical step in aligning large language
models (LLMs) with human instructions and values, yet many aspects of SFT
remain poorly understood. We trained a wide range of base models on a variety
of datasets including code generation, mathematical reasoning, and
general-domain tasks, resulting in 1,000+ SFT models under controlled
conditions. We then identified the dataset properties that matter most and
examined the layer-wise modifications introduced by SFT. Our findings reveal
that some training-task synergies persist across all models while others vary
substantially, emphasizing the importance of model-specific strategies.
Moreover, we demonstrate that perplexity consistently predicts SFT
effectiveness--often surpassing superficial similarity between trained data and
benchmark--and that mid-layer weight changes correlate most strongly with
performance gains. We will release these 1,000+ SFT models and benchmark
results to accelerate further research. | [
"cs.CL"
] |
# 1. INTRODUCTION
Understanding and modeling human driving behavior is fundamental to the development of intelligent transportation systems, ADAS technologies, and autonomous vehicles (Li et al., 2021). Driving actions arise from a complex interplay between internal decision-making processes and external traffic environments (Wang et al., 2022), often exhibiting diverse, context-dependent patterns. However, most existing car-following models adopt simplified or fixed behavioral assumptions, limiting their ability to capture the stochasticity and adaptability observed in naturalistic driving. To address this gap, we advocate for a regime-switching framework that models driving behavior as a sequence of latent behavioral modes and contextual scenarios, each governed by interpretable dynamics.
Traditional car-following models, such as the IDM (Treiber et al., 2000), typically assume a deterministic formulation in which a fixed set of parameters maps directly to the driver’s longitudinal control behavior. The IDM computes acceleration as a function of speed, relative speed, and spacing:
$$
\begin{array} { c } { { \mathrm { I D M } ( x _ { t } ; \pmb { \theta } ) = a _ { \mathrm { m a x } } \left( 1 - \left( \frac { v _ { t } } { v _ { f } } \right) ^ { \delta } - \left( \frac { s ^ { \ast } } { s _ { t } } \right) ^ { 2 } \right) , } } \\ { { s ^ { \ast } = s _ { 0 } + v _ { t } T + \displaystyle \frac { v _ { t } \Delta v _ { t } } { 2 \sqrt { a _ { \mathrm { m a x } } b } } , } } \end{array}
$$
where $\boldsymbol { \mathbf { \mathit { x } } } _ { t } = [ v _ { t } , \Delta v _ { t } , s _ { t } ] ^ { \top }$ represents the state variables (speed, relative speed, and gap), and $\pmb { \theta } = \{ v _ { f } , s _ { 0 } , T , a _ { \mathrm { m a x } } , b \} \in$ $\mathbb { R } ^ { 5 }$ denotes the model parameters governing driver behavior.
While effective in controlled or homogeneous traffic conditions, classical IDM-type models are built on a deterministic assumption: they posit a one-to-one mapping between the state of the driver-vehicle system (e.g., speed, spacing, and relative speed) and the driver’s acceleration response. However, real-world driving is inherently stochastic and context-sensitive, often exhibiting one-to-many mappings. That is, the same traffic state may correspond to multiple plausible acceleration responses, depending on the driver’s latent intention or situational interpretation. Conversely, the same observed action may result from distinct underlying causes. For instance, a driver approaching a slower vehicle may choose to decelerate gently, maintain speed momentarily, or accelerate to change lanes, each being a valid response under the same traffic conditions but reflecting different behavioral modes. Similarly, a small acceleration might reflect a relaxed adjustment in free-flow, a hesitant reaction in uncertain conditions, or a defensive maneuver in dense congestion. This behavioral ambiguity is especially prominent in naturalistic data, where only a small fraction of actions are clearly purposeful or reactive; the majority occur in ambiguous or transitional states (Zhang et al., 2023). When such data are used to calibrate deterministic models via root mean squared error (RMSE) or Gaussian likelihoods—metrics that treat all data points as equally informative, the resulting model tends to regress toward the mean. This leads to an “averaged” behavior that fails to reproduce the variability and sharp transitions observed in real driving. Consequently, such models suffer from non-identifiability: multiple parameter settings may explain the data equally well, yet lack meaningful behavioral interpretation (Zhang and Sun, 2024). This compromises both the interpretability and fidelity of driver modeling, especially in downstream applications such as behavior prediction, risk estimation, and simulation-based safety evaluation.
To address this challenge, it is essential to adopt a regime-switching scheme that recognizes driving as a composition of context-dependent behavioral modes. By segmenting the driving process into discrete regimes, each governed by its own interpretable set of behavioral parameters, such a framework allows for a one-to-many mapping from observed data to latent driving intentions and contexts. This structure enables the model to assign ambiguous observations to the most plausible regime given the surrounding traffic conditions, rather than fitting a single, fixed response. As a result, regime-switching models mitigate the tendency toward averaged behavior, preserve sharp behavioral transitions, and enhance both interpretability and predictive consistency.
To operationalize this regime-switching perspective, we first develop a hybrid probabilistic model, HMM-IDM, by integrating the classical Intelligent Driver Model with a Hidden Markov Model (HMM) (Rabiner and Juang, 1986). In this formulation, each latent state corresponds to a distinct driving regime, characterized by its own set of IDM parameters. The model captures how drivers dynamically transition among regimes such as aggressive acceleration, cruising, and deceleration, thereby accommodating temporal variability and regime-dependent responses. To further disentangle the influence of intrinsic behavioral modes from external driving contexts, we extend this model to a Factorial Hidden Markov Model (Ghahramani and Jordan, 1995) with IDM dynamics, namely, FHMM-IDM. FHMM-IDM introduces a structured latent state space with two independent components: the driving regime process, which encodes internal behavioral intent, and the traffic scenario process, which represents surrounding contextual conditions such as free-flow, congestion, or stop-and-go dynamics. Each joint latent state governs a separate set of IDM parameters and gives rise to distinct acceleration behaviors depending on both driver Regime and environmental context. This factorization not only improves behavioral interpretability but also enhances the model’s capacity to reflect real-world variability in a principled and data-driven manner. We validate the proposed framework using the HighD naturalistic driving dataset, demonstrating that FHMM-IDM effectively uncovers interpretable behavioral structures and captures realistic regime-switching patterns. Detailed case studies show how the model successfully disentangles intrinsic driver behaviors from contextual traffic variations, providing a richer and more faithful representation of human driving for simulation, prediction, and behavior analysis tasks.
Conceptually, both the HMM-IDM and FHMM-IDM hybrid approaches embody a strategy of assembling multiple driving primitives $^ { 1 }$ to approximate globally nonlinear driving behavior. Rather than relying on a single, overly complex model to capture all behavioral variability, these models decompose the driving process into a collection of simpler and granular components, i.e., driving regimes. This is analogous to approximating a nonlinear function using multiple local linear segments. Each regime-specific IDM instance corresponds to a driving primitive that governs the vehicle’s response under a particular regime, such as aggressive car-following, relaxed cruising, or defensive braking, conditioned on specific traffic scenarios. The probabilistic structure of the HMM or FHMM governs transitions among these primitives, enabling the model to respond to evolving conditions by switching between regimes. The driver’s overall behavior is thus modeled as a piecewise sequence, where each segment reflects the output of a distinct IDM parameterization determined by the current latent state. This structure is illustrated in Fig. 1, where the driver’s observed response trajectory (top) is interpreted as the outcome of latent driving regimes and traffic scenarios, each evolving over time via independent Markov chains. The blue curve represents the ground-truth driving action (e.g., acceleration), while the red curve shows the regime-specific model output (linear models for illustration purposes). Dashed lines indicate latent behavioral trends not captured by any single primitive, further motivating the need for switching among specialized regimes. Shaded regions delineate the segmentation imposed by the latent states, revealing how the model adaptively partitions the trajectory into interpretable behavioral modes and contextual scenarios. This modular scheme captures both the stochastic and adaptive nature of real-world driving. As traffic conditions evolve, the model dynamically adjusts its active primitive, for example, transitioning from free-flow to stop-and-go conditions. By combining interpretable latent states with data-driven transitions, the HMM-IDM and FHMM-IDM frameworks provide a flexible yet structured approach to modeling human driving behavior with both realism and transparency.
Figure 1 – Conceptual illustration of the FHMM-IDM model. The top panel depicts the evolution of driver response (e.g., acceleration), segmented into discrete latent driving regimes, while the bottom panel shows the corresponding traffic scenarios. Both latent processes evolve via Markov switching. The blue curve represents the observed behavioral trajectory, and the red straight lines show the model’s output (linear models for illustration purposes) within each driving regime using regime-specific models. Dotted lines indicate latent behavioral trends or variability beyond what each individual regime can capture, motivating the need for switching between multiple regimes. Note that Regime A–D and Scenario A–C are illustrative placeholders only; the data-driven regimes and scenarios inferred by the model appear in Figs. 8-11.
This work makes the following key contributions and advantages:
1. A novel modeling framework: We introduce a Markov regime-switching framework for car-following behavior that explicitly separates intrinsic driving regimes from external traffic scenarios. This addresses the long-standing challenge of one-to-many mappings in naturalistic data, providing a principled solution to the problem of behavioral non-identifiability in deterministic models.
2. A hybrid probabilistic model with rigorous inference procedure: We instantiate the framework through FHMM-IDM, a novel integration of the FHMM with the IDM. In FHMM-IDM, each latent driving regime corresponds to a unique set of IDM parameters, while the factorial structure captures the interplay between driver intention and traffic context via two independent latent Markov processes. We develop a full Bayesian inference pipeline using MCMC methods, ensuring robust parameter calibration and uncertainty quantification from real-world trajectory data.
3. An interpretable and modular representation: By disentangling behavioral and contextual components, our model enables interpretable attribution of driving behavior to internal (driver) and external (traffic) factors. This decomposition facilitates regime-aware analysis and enhances the explanatory power of car-following models. Empirical results on the HighD dataset show that FHMM-IDM uncovers meaningful regime structures and realistically captures dynamic transitions across driving behaviors and traffic scenarios.
The remainder of this paper is organized as follows: Section 2 reviews related work on probabilistic modeling of car-following behavior. Section 3 introduces the proposed HMM-IDM and FHMM-IDM frameworks, including their mathematical formulation and Bayesian inference algorithms. Section 4 describes the experimental setup, presents the learned interpretable latent states, and provides case studies using the HighD dataset to validate the effectiveness of the models. Finally, Section 5 concludes with discussions and outlines potential directions for future research.
# 2. RELATED WORKS
# 2.1. Probabilistic Models and Behavioral Regimes
Deterministic car-following models such as the IDM (Treiber et al., 2000) assume a single fixed set of parameters governing driver behavior in all scenarios. This restricts their ability to capture the variability, uncertainty, and abrupt regime changes present in real-world driving (Zhang and Sun, 2024; Chen et al., 2024). Notably, classic models like Wiedemann and Fritzsche hard-code regime boundaries via perceptual thresholds, yielding a multiregime structure but requiring extensive manual tuning and lacking adaptability beyond their original calibration context (Wiedemann, 1974; Fritzsche and Ag, 1994). As a result, these deterministic and threshold-based models tend to underfit behavioral heterogeneity, struggle to model transitions, and suffer from limited interpretability in heterogeneous or context-dependent traffic.
To address these limitations, probabilistic modeling approaches have emerged, treating driving as a stochastic process and enabling the discovery of latent behavioral regimes. HMMs have become foundational in this context (Rabiner and Juang, 1986; Wang et al., 2014), as they encode both latent driver states and the transitions between them. HMMs enable modeling of short-term regimes, such as aggressive acceleration, steady cruising, or cautious braking, as latent states, naturally accommodating regime shifts and sequential dependencies (Vaitkus et al., 2014; Aoude et al., 2012; Gadepally et al., 2013). Gaussian Mixture Models (GMMs) have also been adopted, e.g., with delay-embedding (Chen et al., 2023) and matrix decomposition (Zhang et al., 2024a) to capture multi-modal distributions of driver behavior. It also could be set as emission models for HMMs (Wang et al., 2018b). These probabilistic frameworks, by maintaining distributions over regimes or actions rather than deterministic assignments, increase model flexibility and better reflect the stochastic nature of human driving.
However, most prior work in regime modeling has relied on domain knowledge or heuristic thresholds to define the behavioral regimes themselves, limiting generalizability and transferability (Wang et al., 2014; Vaitkus et al., 2014). There remains a need for data-driven methods that can discover and adaptively segment regimes without manual intervention.
# 2.2. Advances in Bayesian and Factorial Approaches
Building on basic HMM and GMM models, Bayesian extensions have been developed to better represent behavioral complexity and uncertainty. One notable extension is the Hidden Semi-Markov Model (HSMM), which explicitly models the dwell time (state duration) in each regime. Standard HMMs assume geometric state durations, which may not reflect how long drivers naturally stay in a given behavior. HSMMs address this by providing a state-specific duration distribution. For example, Taniguchi et al. (2014) employed an HSMM with a Hierarchical Dirichlet Process (HDP) prior, allowing the model to learn both the duration of maneuvers and the appropriate number of distinct behavioral states from the data. Such a nonparametric Bayesian HMM (using HDP) does not require the researcher to pre-specify the number of driving regimes; instead, the model infers it automatically (Fox, 2009). This is especially useful when the set of driving patterns is not known in advance or varies between drivers. Zhang et al. (2021) demonstrated the power of this approach by applying a sticky HDP-HMM to naturalistic driving data, which automatically discovered recurrent interaction patterns (i.e., primitive maneuvers) without any pre-defined labels. This represents a significant advance over earlier HMM studies that assumed a fixed set of driver modes, as the model could flexibly reveal new regime types (and their durations) directly from complex multi-vehicle datasets.
FHMMs (Ghahramani and Jordan, 1995) further increase modeling expressiveness by combining multiple interacting latent processes, for example, one chain for the driver’s intrinsic regime and another for the surrounding traffic scenario. This structure enables the model to disentangle overlapping influences, capturing cases where, for example, a usually relaxed driver becomes aggressive due to external congestion. Though FHMMs remain underutilized in the driving literature, their capability to separate internal and external factors aligns with the motivation for our proposed approach. Bayesian inference methods (e.g., Expectation-Maximization, MCMC) are commonly used to estimate parameters and latent trajectories, providing uncertainty quantification and adaptivity as new data is observed (Bishop and Nasrabadi, 2006).
# 2.3. Regime-Switching Car-Following Models
Within car-following modeling, regime-switching has traditionally been implemented through deterministic if-then rules or fixed thresholds, as in multi-regime Wiedemann- or Fritzsche-type models (Wiedemann, 1974; Fritzsche and Ag, 1994). More recently, data-driven regime-switching has been integrated with car-following models using probabilistic frameworks. For instance, Zaky et al. (2015) proposed a two-stage Markov switching model to classify car-following regimes and estimate regime-specific parameters, allowing for the dynamic detection of abnormal or rare events and more precise behavioral segmentation. Similarly, Zou et al. (2022) applied HMM-based models (including GMM-HMM and HDP-HSMM) to large-scale car-following data, showing that flexible, nonparametric models can automatically identify meaningful regimes (e.g., close following, reactive braking) without manual regime definitions. Recent works, such as Zhang et al. (2023), also integrated IDM with regime-switching frameworks. It proposes distinct action-oriented driving regimes (e.g., interactive/non-interactive driving), with regime transitions governed by an interactive-switching control module. Each regime is characterized by unique IDM parameterizations, allowing the model to dynamically adapt to varying interactive intentions and traffic contexts, significantly improving model fidelity and interpretability. Recent studies have also begun to incorporate regime-switching ideas into deep learning frameworks. Recent advances have also introduced hybrid deep learning frameworks that incorporate discrete regime-switching into car-following prediction. For instance, Zhou et al. (2025) proposed a regime-embedded architecture that combines Gated Recurrent Units (GRUs) for driving regime classification with Long Short-Term Memory (LSTM) networks for continuous kinematic prediction. Their model targets intra-driver heterogeneity by integrating discrete behavioral modes (e.g., acceleration, cruising, steady-state following) into continuous trajectory forecasting, achieving substantial gains in predictive accuracy. However, such models rely on pre-segmented regime labels and deep architectures that, while powerful, lack the principled probabilistic structure and interpretability.
Table 1 – Comparison of related approaches for modeling driver behavior.
IDM: Treiber et al. (2000); Treiber and Helbing (2003); Treiber et al. (2006); Punzo et al. (2021); Bayesian IDM: Zhang and Sun (2024); Zhang et al. (2024b); GMM: Chen et al. (2023); Zhang et al. (2023, 2024a); HMM: Sathyan et al. (2008); Aoude et al. (2012); Gadepally et al. (2013); Vaitkus et al. (2014); HMM-GMM: Wang et al. (2018b,a); HDP-HMM: Taniguchi et al. (2014); Zhang et al. (2021); Zou et al. (2022); Neural Networks: Wang et al. (2017); Zhu et al. (2018); Mo et al. (2021); Yao et al. (2025); Zhou et al. (2025);
1Can the model dynamically adjust to changing behavior? 2Type of latent representation: discrete (mode switches) or continuous (trajectory embeddings). 3Whether the number of latent modes is fixed a priori or inferred. 4How model parameters are estimated: EM, gradient descent, MCMC, etc. 5Can latent states or parameters be interpreted as meaningful driving behavior? 6Whether traffic context (e.g., relative speed, gap) is explicitly used in latent modeling. 7Ability to capture driver-specific variation (e.g., hierarchical priors, class mixture). 8Model’s ability to fit and learn from diverse and high-dimensional driving datasets. $^ { 9 }$ Overall training/inference complexity: data requirements, convergence cost, parallelism.
Despite these advances, most existing approaches still require manual regime boundaries, external calibration, or multi-step procedures. Our work bridges this gap by embedding a Markov switching process directly within the IDM framework, enabling the model to discover, segment, and calibrate regimes in a unified and data-driven manner. This approach is motivated by and extends the probabilistic regime-switching and Bayesian learning literature, aiming to achieve greater realism, interpretability, and context-awareness in microscopic traffic simulation.
# 2.4. Positioning FHMM-IDM among Existing Methods
To situate our proposed FHMM-IDM framework within the broader spectrum of driver behavior modeling approaches, we summarize and compare representative methods in Table 1. The comparison spans classical deterministic models (e.g., IDM), probabilistic and Bayesian models (e.g., GMM, HMM, HDP-HMM), and more recent learningbased techniques (e.g., LSTM-based deep models), across key modeling characteristics. These include adaptivity, behavioral mode representation, latent state dimensionality, stochasticity, estimation procedures, interpretability, contextual awareness, heterogeneity modeling, and computational complexity.
FHMM-IDM distinguishes itself by explicitly modeling both internal driving regimes and external traffic scenarios through a factorial latent structure. This design allows it to disentangle driver intent from environmental influences—a capability absent in most existing approaches, which either assume a fixed parameterization or rely on indirect context encoding through observed features. Moreover, by adopting a full Bayesian inference framework, FHMM-IDM enables robust parameter estimation and principled uncertainty quantification, which are critical for applications such as behavior prediction, risk assessment, and safety validation.
Compared to existing models, FHMM-IDM strikes a balance between data-driven flexibility and structured interpretability. While deep learning models can learn complex patterns, they often lack transparency and require large-scale training data. In contrast, FHMM-IDM offers interpretable, probabilistically grounded behavioral components that can generalize across scenarios with limited data. This makes it a strong candidate for modeling ealistic and context-sensitive driving behaviors in naturalistic traffic environments.
# 3. MARKOV REGIME-SWITCHING FRAMEWORK FOR CAR-FOLLOWING
Building on the motivation to model heterogeneous and context-dependent driving behaviors, we develop a probabilistic regime-switching framework that captures the interplay between intrinsic driver actions and external traffic scenarios. Our approach introduces two hybrid models: HMM-IDM and FHMM-IDM, which augment the classical IDM with latent Markovian dynamics. The HMM-IDM captures univariate regime-switching behaviors by associating each latent state with a distinct set of IDM parameters. To further disentangle intrinsic behavioral variability from environmental context, we extend this formulation to a factorial structure, FHMM-IDM, wherein two independent latent Markov chains separately encode driving behaviors and traffic scenarios. This section presents the mathematical formulation, model assumptions, and Bayesian inference procedures used to estimate the latent states and regime-specific parameters.
# 3.1. Formulations of HMM-IDM and FHMM-IDM
# 3.1.1. Hidden Markov Model with Intelligent Driver Model (HMM-IDM)
As discussed in Section 1, modeling car-following behavior with a fixed parameter set, such as in the deterministic IDM, fails to account for the contextual variability and temporal shifts observed in naturalistic driving. The same input state (e.g., gap, speed, relative speed) can lead to different driver actions depending on latent factors such as intention, caution level, or situational awareness. This ambiguity, or one-to-many mapping, motivates the need for a regime-switching framework that allows behavioral parameters to evolve over time.
To address this, here we develop a hybrid model that combines the interpretability of IDM with the temporal segmentation power of HMM. The HMM introduces a discrete latent state variable that captures shifts in driving regimes, such as transitions between cruising, closing in, or defensive braking. Each latent state is associated with a distinct set of IDM parameters, enabling the model to account for time-varying human driving behavior while maintaining a physically grounded formulation.
Let $z _ { t } ^ { ( d ) }$ , $\pmb { x } _ { t } ^ { ( d ) }$ , and $y _ { t } ^ { ( d ) }$ denote the latent state, inputs, and outputs for driver $d$ at time $t$ , respectively. For simplicity, we omit the superscript $d$ hereafter unless specifically clarified. The two key components of HMM are defined as follows:
1. Transition matrix $\pi \in \mathbb { R } ^ { K \times K }$ : where each entry $\pi _ { j k } : = p ( z _ { t } = k \mid z _ { t - 1 } = j )$ denotes the probability of transitioning from state $j$ to state $k$ according to the Markov property (Li et al., 2025). Thus, we express $\pmb { \pi } = [ \pmb { \pi } _ { 1 } , \dots , \pmb { \pi } _ { K } ] ^ { \top }$ .
2. Local evidence $\psi _ { t } \in \mathbb { R } ^ { K }$ : the probability of observing $y _ { t }$ given the inputs ${ \bf \mathcal { x } } _ { t }$ and parameters $\theta _ { k }$ , defined as $\psi _ { t } ( k ) : = p ( y _ { t } \mid \pmb { x } _ { t } , \pmb { \theta } _ { k } )$ .
Formally, the HMM-IDM framework can be summarized by the following equations:
$$
\begin{array} { r } { \begin{array} { r l } & { z _ { t } \mid z _ { t - 1 } \sim \mathrm { C a t } ( \pi _ { z _ { t - 1 } } ) , } \\ & { y _ { t } \mid x _ { t } , \Theta , z _ { t } \sim \mathcal { N } ( \mathrm { I D M } ( x _ { t } ; \theta _ { z _ { t } } ) , \sigma _ { z _ { t } } ^ { 2 } ) . } \end{array} } \end{array}
$$
where $\mathrm { C a t } ( \cdot )$ represents the categorical distribution, and $\sigma _ { z _ { t } } ^ { 2 }$ denotes the variance of the observation noise. Each latent state $z _ { t }$ corresponds uniquely to a driving behavior characterized by specific IDM parameters, denoted as $\theta _ { z _ { t } }$ The complete set of these IDM parameters across all states is indicated by $\Theta = \{ \pmb { \theta } _ { k } \} _ { k = 1 } ^ { K }$ . The overall probabilistic structure of the HMM-IDM model is illustrated in Fig. 2.
# 3.1.2. Factorial Hidden Markov Model with Intelligent Driver Model (FHMM-IDM)
The FHMM-IDM is an extension of the HMM-IDM framework that incorporates multiple latent processes, referred to as factors. Each factor represents an independent component of driving behaviors, and these factors collectively determine the observed driving behaviors. FHMM-IDM is designed to model the joint effect of these independent latent processes on the observed outputs.
To separate driving behaviors (noted by superscript [B]) and traffic scenarios (noted by superscript [S]) as two factors, we denote $z _ { t } ^ { [ \mathrm { B } ] }$ and $z _ { t } ^ { [ \mathrm { S } ] }$ to represent the latent state of the two factors at time $t$ , respectively. The joint latent state vector at time $t$ is represented as $z _ { t } : = \left( z _ { t } ^ { [ \mathrm { B } ] } , z _ { t } ^ { [ \mathrm { S } ] } \right)$ . Each factor has a number of $K ^ { \mathrm { \lfloor B \rfloor } }$ and $K ^ { \mathrm { [ S ] } }$ latent states, respectively. Thus, the joint latent state space $\mathcal { L }$ is the Cartesian product of the state spaces of both components $\mathcal { Z } = \{ 1 , \ldots , K ^ { [ \mathrm { B } ] } \} \times \{ 1 , \ldots , K ^ { [ \mathrm { S } ] } \}$ .
Figure 2 – Probabilistic graphical model of HMM-IDM and FHMM-IDM.
In FHMM, the latent states of the two factors evolve jointly over time, defined by a state transition matrix $\pi \in \mathbb { R } ^ { | \mathcal { Z } | \times | \mathcal { Z } | }$ , where
$$
\begin{array} { r l } & { \pi _ { ( k ^ { \prime } , k ) } : = p ( z _ { t } = k \mid z _ { t - 1 } = k ^ { \prime } ) } \\ & { \qquad = p \left( z _ { t } ^ { \left[ \mathrm { B } \right] } = k ^ { \left[ \mathrm { B } \right] } , z _ { t } ^ { \left[ \mathrm { S } \right] } = k ^ { \left[ \mathrm { S } \right] } \mid z _ { t - 1 } ^ { \left[ \mathrm { B } \right] } = k ^ { \prime \left[ \mathrm { B } \right] } , z _ { t - 1 } ^ { \left[ \mathrm { S } \right] } = k ^ { \prime \left[ \mathrm { S } \right] } \right) , } \end{array}
$$
for all $\pmb { k } = ( k ^ { [ \mathrm { B } ] } , k ^ { [ \mathrm { S } ] } )$ and $\pmb { k } ^ { \prime } = ( k ^ { \prime [ \mathrm { B } ] } , k ^ { \prime [ \mathrm { S } ] } ) \in \mathcal { Z }$ . Also, it should be with
$$
\sum _ { k \in \mathcal { Z } } \pi _ { \left( k ^ { \prime } , k \right) } = 1 , \quad \forall k ^ { \prime } \in \mathcal { Z } .
$$
Then, we define the observation model in FHMM-IDM with separate emission functions for the two factors:
1. Driving-Behavior Local Evidence $\boldsymbol { \psi } _ { t } ^ { \mathrm { [ B ] } } \in \mathbb { R } ^ { k ^ { \left[ \mathrm { B } \right] } }$ : The observed output $y _ { t }$ is independently influenced by the latent states $z _ { t } ^ { [ \mathrm { B } ] }$ and the covariates ${ \bf { \mathcal { x } } } _ { t }$ . The emission is modeled as:
$$
y _ { t } \mid x _ { t } , \Theta , z _ { t } ^ { [ \mathrm { B } ] } \sim \mathcal { N } \left( \mathrm { I D M } \left( x _ { t } ; \pmb { \theta } _ { z _ { t } ^ { [ \mathrm { B } ] } } \right) , \sigma _ { z _ { t } ^ { [ \mathrm { B } ] } } ^ { 2 } \right) ,
$$
where $\mathrm { I D M } \left( \boldsymbol { x } _ { t } ; \boldsymbol { \theta } _ { z _ { t } ^ { [ \mathrm { B } ] } } \right)$ is the predicted output based on the IDM, and $\sigma _ { z _ { t } ^ { [ \mathrm { B } ] } } ^ { 2 }$ is the variance of the noise for state $z _ { t } ^ { [ \mathrm { B } ] }$
2. Traffic-Scenario Local Evidence $\psi _ { t } ^ { [ \mathrm { S } ] } \in \mathbb { R } ^ { K ^ { [ \mathrm { S } ] } }$ : For the traffic scenario, we model the relationship between the covariates $\scriptstyle { \mathbf { x } } _ { t }$ and the latent state $z _ { t } ^ { \mathrm { [ S ] } }$ as
$$
\begin{array} { r } { \pmb { x } _ { t } \mid z _ { t } ^ { \left[ \mathrm { S } \right] } , \pmb { \mu } _ { x } , \pmb { \Lambda } _ { \pmb { x } } \sim \mathcal { N } \left( \pmb { \mu } _ { \pmb { x } , z _ { t } ^ { \left[ \mathrm { S } \right] } } , \pmb { \Lambda } _ { \pmb { x } , z _ { t } ^ { \left[ \mathrm { S } \right] } } ^ { - 1 } \right) , } \end{array}
$$
where µ [S] and are the mean and precision matrix of the scenario-driven input. We represent the collecti these pzatrameters by $\pmb { \mu _ { x } } = \{ \pmb { \mu _ { x , k ^ { [ \mathrm { S } ] } } } \} _ { k ^ { [ \mathrm { S } ] } = 1 } ^ { K ^ { [ \mathrm { S } ] } }$ and $\Lambda _ { \pmb { x } } = \left\{ \Lambda _ { \pmb { x } , k ^ { \left[ \mathrm { S } \right] } } \right\} _ { k ^ { \left[ \mathrm { S } \right] } = 1 } ^ { K ^ { \left[ \mathrm { S } \right] } }$ .
Therefore, the joint local evidence is given as
$$
\begin{array} { r l } & { p ( y _ { t } , x _ { t } \mid z _ { t } , \Theta , \mu _ { x } , \Lambda _ { x } ) = \underbrace { p \left( y _ { t } \mid x _ { t } , \Theta , z _ { t } ^ { [ \mathrm { B } ] } \right) } _ { : = \psi _ { t } ^ { [ \mathrm { B } ] } \left( z _ { t } ^ { [ \mathrm { B } ] } \right) } \cdot \underbrace { p \left( x _ { t } \mid z _ { t } ^ { [ \mathrm { B } ] } , \mu _ { x } , \Lambda _ { x } \right) } _ { : = \psi _ { t } ^ { [ \mathrm { B } ] } \left( z _ { t } ^ { [ \mathrm { B } ] } \right) } } \\ & { \qquad = \mathcal { N } \left( y _ { t } ; \mathrm { I D M } \left( x _ { t } ; \theta _ { z _ { t } ^ { [ \mathrm { B } ] } } \right) , \sigma _ { z _ { t } ^ { [ \mathrm { B } ] } } ^ { 2 } \right) \cdot \mathcal { N } \left( x _ { t } ; \mu _ { x , z _ { t } ^ { [ \mathrm { B } ] } } , \Lambda _ { x , z _ { t } ^ { [ \mathrm { B } ] } } ^ { - 1 } \right) , } \end{array}
$$
Thus, the joint likelihood for the entire sequence of observations $\{ y _ { t } , \pmb { x } _ { t } \} _ { t = 1 } ^ { T }$ is:
$$
\begin{array} { l } { { \displaystyle p \left( { \boldsymbol y } _ { 1 : T } , { \boldsymbol x } _ { 1 : T } \mid { \boldsymbol z } _ { 1 : T } , { \boldsymbol \Theta } , { \boldsymbol \mu } _ { { \boldsymbol x } } , { \boldsymbol \Lambda } _ { \boldsymbol x } \right) = \prod _ { t = 1 } ^ { T } p ( { \boldsymbol y } _ { t } , { \boldsymbol x } _ { t } \mid { \boldsymbol z } _ { t } , { \boldsymbol \Theta } , { \boldsymbol \mu } _ { { \boldsymbol x } } , { \boldsymbol \Lambda } _ { \boldsymbol x } ) } \ ~ } \\ { { \displaystyle ~ = \prod _ { t = 1 } ^ { T } \left[ N \left( { \boldsymbol y } _ { t } ; \mathrm { I D M } \left( { \boldsymbol x } _ { t } ; { \boldsymbol \theta } _ { { \boldsymbol z } _ { t } ^ { \mathrm { I R } } } \right) , { \boldsymbol \sigma } _ { { \boldsymbol z } _ { t } ^ { \mathrm { I R } } } ^ { 2 } \right) \cdot \mathcal { N } \left( { \boldsymbol x } _ { t } ; { \boldsymbol \mu } _ { { \boldsymbol x } , { \boldsymbol z } _ { t } ^ { \mathrm { I S } } } , { \boldsymbol \Lambda } _ { { \boldsymbol x } , { \boldsymbol z } _ { t } ^ { \mathrm { I S } } } ^ { - 1 } \right) \right] } . } \end{array}
$$
where $\pmb { y } _ { 1 : T } = \left\{ \ b { y } _ { t } \right\} _ { t = 1 } ^ { T }$ , $\pmb { x } _ { 1 : T } = \left\{ \pmb { x } _ { t } \right\} _ { t = 1 } ^ { T }$ , and $z _ { 1 : T } = \left\{ z _ { t } \right\} _ { t = 1 } ^ { T }$ . To simplify the notation, we define the joint local
$$
\Psi _ { t } ( k ) = \psi _ { t } ^ { \left[ \mathrm { B } \right] } \left( k ^ { \left[ \mathrm { B } \right] } \right) \cdot \psi _ { t } ^ { \left[ \mathrm { S } \right] } \left( k ^ { \left[ \mathrm { S } \right] } \right) , \quad \forall k \in \mathcal { Z } ,
$$
to be represented by a vector $\Psi _ { t } \in \mathbb { R } ^ { | \mathcal { Z } | }$ .
# 3.2. Prior Distributions
3.2.1. Prior for Joint Transition Matrix: $p ( \pi )$
A natural prior for $p ( \pi )$ is the Dirichlet distribution, ensuring each row of the transition matrix sums to $^ { 1 }$ . For each row $\pmb { k } ^ { \prime }$ of $p ( \pi )$ , we set
$$
\begin{array} { r } { \pi _ { ( k ^ { \prime } , : ) } \sim \mathrm { D i r } ( c _ { k ^ { \prime } } ) , \quad \forall k ^ { \prime } \in \mathcal { Z } , } \end{array}
$$
where $\mathrm { D i r } ( \cdot )$ denotes a Dirichlet distribution, and $\pmb { c } _ { \pmb { k } ^ { \prime } } = [ c _ { \pmb { k } ^ { \prime } \pmb { k } } ]$ are the concentration parameters for transitions from state $\pmb { k } ^ { \prime }$ to all states $\boldsymbol { k } \in \mathcal { Z }$ .
# 3.2.2. Prior for Latent States: $p ( z _ { 1 : T } )$
The prior distribution over the latent states is:
$$
p ( z _ { 1 : T } ) = p ( z _ { 1 } ) \prod _ { t = 2 } ^ { T } p ( z _ { t } \mid z _ { t - 1 } ) ,
$$
where $p ( z _ { t } \mid z _ { t - 1 } ) = \pi _ { z _ { t - 1 } , z _ { t } }$ , and the prior probabilities $p ( z _ { 1 } )$ is assigned a Dirichlet prior over the joint state space $\mathcal { Z }$ :
$$
z _ { 1 } \sim \mathrm { D i r } ( c _ { z _ { 1 } } ) ,
$$
where $c _ { z _ { 1 } }$ are concentration parameters.
3.2.3. Prior for IDM Parameters: $p ( \Theta )$ and $p ( { \pmb \mu } , { \pmb \Lambda } )$ We assign a log-normal prior on $\pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } }$ and a log-normal-Wishart conjugate prior on its parameters, as follows
$$
\begin{array} { r l } & { \ln \left( \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } \right) \mid \pmb { \mu } , \pmb { \Lambda } ^ { - 1 } \sim \mathcal { N } \left( \ln ( \pmb { \mu } ) , \pmb { \Lambda } ^ { - 1 } \right) , \quad k ^ { \left[ \mathrm { B } \right] } = 1 , \ldots , K ^ { \left[ \mathrm { B } \right] } , } \\ & { \qquad \ln ( \pmb { \mu } ) \mid \pmb { \Lambda } \sim \mathcal { N } \left( \ln ( \pmb { \mu } _ { 0 } ) , ( \kappa _ { 0 } \pmb { \Lambda } ) ^ { - 1 } \right) , } \\ & { \qquad \pmb { \Lambda } \sim \mathcal { W } ( \nu _ { 0 } , \pmb { W } _ { 0 } ) , } \end{array}
$$
where $\mathcal { W }$ denotes a Wishart distribution.
Figure 3 – Illustration of the filtering, smoothing, and prediction problem in HMM.
3.2.4. Prior for Observation Noise Variance: $p ( \sigma ^ { 2 } )$ The variance of the observation noise $\sigma _ { k ^ { [ \mathrm { B } ] } } ^ { 2 }$ for each joint state $k \in { \mathcal { Z } }$ is assigned an inverse-Gamma prior:
$$
\sigma _ { k ^ { [ \mathrm { B } ] } } ^ { 2 } \mid \gamma _ { a } , \gamma _ { b } \sim \mathcal { I G } ( \gamma _ { a } , \gamma _ { b } ) , \quad k ^ { [ \mathrm { B } ] } = 1 , \ldots , K ^ { [ \mathrm { B } ] } ,
$$
where $\mathcal { Z } \vec { \mathcal { G } }$ represents an inverse-Gamma distribution.
3.2.5. Prior for Traffic Scenario Emission Parameters: $p ( \mu _ { x } , \Lambda _ { x } )$ We then put a normal-Wishart conjugate prior on $\mu _ { x , k ^ { \left[ \mathrm { S } \right] } }$ and $\mathbf { \Lambda } _ { \mathbf { x } , k ^ { \left[ \mathrm { S } \right] } }$ as
$$
\begin{array} { r l r l } & { \mu _ { x , k ^ { [ \mathrm { S } ] } } \mid \Lambda _ { x , k ^ { [ \mathrm { S } ] } } \sim \mathcal { N } \left( \mu _ { x , 0 } , ( \kappa _ { x , 0 } \Lambda _ { x , k ^ { [ \mathrm { S } ] } } ) ^ { - 1 } \right) , \qquad } & & { k ^ { [ \mathrm { S } ] } = 1 , \ldots , K ^ { [ \mathrm { S } ] } , } \\ & { \Lambda _ { x , k ^ { [ \mathrm { S } ] } } \sim \mathcal { W } ( \nu _ { x , 0 } , W _ { x , 0 } ) , } & & { k ^ { [ \mathrm { S } ] } = 1 , \ldots , K ^ { [ \mathrm { S } ] } . } \end{array}
$$
# 3.2.6. Joint Priors for Parameter set: $p ( \Omega )$
Here we summarize the prior distribution on the parameters $\Omega : = \{ \pi , \sigma ^ { 2 } , \Theta , \mu _ { x } , \Lambda _ { x } , \mu , \Lambda \}$ as:
$$
\begin{array} { r l } & { p ( \Omega ) = p ( \pi ) \cdot p ( \sigma ^ { 2 } ) \cdot p ( \Theta ) \cdot p ( \mu _ { x } , \Lambda _ { x } ) \cdot p ( \mu , \Lambda ) } \\ & { \qquad = \displaystyle \prod _ { k ^ { \prime } \in { \cal Z } } \operatorname { D i r } ( \pi _ { ( k ^ { \prime } , \lfloor \cdot ) } ; c _ { k ^ { \prime } } ) \cdot \prod _ { k \in { \cal Z } } [ { \cal N } ( \ln ( \theta _ { k \lfloor \mathrm { B } ) } ) ; \mu , \Lambda ^ { - 1 } ) \cdot { \cal Z } \mathcal { G } \big ( \sigma _ { k ^ { \prime } \mathrm { B } } ^ { 2 } ; \gamma _ { a } , \gamma _ { b } \big ) ] \cdot \mathcal { N } ( \mu ; \mu _ { 0 } , \big ( \kappa _ { 0 } \Lambda \big ) ^ { - 1 } ) } \\ & { \qquad \cdot \mathcal { W } ( \Lambda ; \nu _ { 0 } , W _ { 0 } ) \cdot \displaystyle \prod _ { k ^ { \prime } \mid \mathrm { = 1 } } ^ { K ^ { | \mathrm { S } | } } [ { \cal N } ( \mu _ { \mathrm { x } , k ^ { \prime } \mathrm { S } } ; \mu _ { \mathrm { x } , 0 } , ( \kappa _ { \mathrm { x } , 0 } \Lambda _ { \mathbf { x } , k ^ { \prime } \mathrm { S } } ) ) ^ { - 1 } ) \cdot \mathcal { W } \big ( \mathbf { \Lambda } _ { \mathbf { x } , k ^ { \prime } \mathrm { S } } ; \nu _ { \mathrm { x } , 0 } , W _ { \mathbf { x } , 0 } \big ) ] . } \end{array}
$$
# 3.3. Inference with MCMC
The posterior distribution for the FHMM-IDM model is then proportional to the product of the likelihood, the prior on the latent states, and the prior on the parameters. It is intractable to find an analytical solution for estimating the posteriors. Therefore, we develop an MCMC sampling algorithm (see Algorithm 1) to learn the posteriors of the model parameters and infer the latent states. Note that as shown in Fig. 3, the three fundamental inference tasks in HMM—filtering, smoothing, and prediction—differ in the set of observations used to estimate the latent state.
In the filtering task (the left panel), the objective is to estimate the current latent state $z _ { t }$ given observations up to time $t$ , i.e., $p ( \boldsymbol { z } _ { t } \mid \boldsymbol { y } _ { 1 : t } )$ . In smoothing (the middle panel), the goal is to retrospectively estimate a past latent state $z _ { t }$ using the entire sequence of observations, $p ( \boldsymbol { z } _ { t } \mid \boldsymbol { y } _ { 1 : T } )$ , thereby incorporating future evidence to improve estimation accuracy. In contrast, prediction (the right panel) aims to estimate future states and observations, such as $p ( \boldsymbol { z } _ { t + 1 } \mid \boldsymbol { y } _ { 1 : t } )$ or $p ( \boldsymbol { y } _ { t + 1 } \mid \boldsymbol { y } _ { 1 : t } )$ , based on current and past observations.
The figure emphasizes the distinct computational characteristics of these tasks: filtering operates in a causal (forward) manner, smoothing is acausal (utilizing both past and future observations), and prediction is inherently forward-looking. In this work, we focus primarily on the smoothing problem, which enables more accurate inference of the latent states. Nonetheless, our framework can be readily extended to address filtering and prediction tasks depending on the specific application context.
Input : Driving behavior observation $\stackrel { \widehat { \pmb { y } } } { _ { 1 : T } }$ ; number of burn-in iterations $m _ { 1 }$ and number of samples $m _ { 2 }$ for estimation; hyperparameters. Output : Transition matrix $\pi$ , states assignment $z _ { \mathrm { 1 : } T }$ , IDM variances $\pmb { \sigma }$ , IDM parameters $\Theta$ , mean $\pmb { \mu } _ { x }$ , and precision matrix $\Lambda _ { x }$ . 1 Initialize $\pi ^ { ( 1 ) }$ , $\pmb { \sigma } ^ { ( 1 ) }$ , $\Theta ^ { ( 1 ) }$ , $z _ { 1 : T } ^ { ( 1 ) }$ , $\mu _ { x } ^ { ( 1 ) }$ , $\pmb { \Lambda } _ { \pmb { x } } ^ { ( 1 ) }$ , $\pmb { \mu } ^ { ( 1 ) }$ , and $\mathbf { A } ^ { ( 1 ) }$ ; 23 for iDtrearawt $\{ \pi _ { ( \boldsymbol { k } ^ { \prime } , : ) } ^ { ( i ) } \} _ { \boldsymbol { k } \in \mathcal { Z } }$ $i = 1$ to $m _ { 1 } + m _ { 2 }$ by $\pi _ { ( k ^ { \prime } , : ) } ^ { ( i ) } \sim \mathrm { D i r } ( c _ { k ^ { \prime } } + n _ { k ^ { \prime } } ^ { ( i ) } )$ do // Given $z _ { 1 : T } ^ { ( i ) }$ (see Eq.(27)) $\pmb { k } \in \mathcal { Z }$ do 5 Compute $\{ \Psi _ { t } ^ { ( i ) } \} _ { t \in \mathcal { T } _ { k } }$ ; // Given y1:T , x1:T , z(1i:)T , Θ(i) (see Eqs.(9) and (7)) 6 Compute $\{ \gamma _ { t } ^ { ( i ) } \} _ { t = 1 } ^ { T }$ using Algorithm 2 ; // Give $\texttt { n } \alpha _ { 1 } , \beta _ { T } , \pi ^ { ( i ) } , p ( z _ { 1 } ) , \{ \boldsymbol { \Psi } _ { t } ^ { ( i ) } \} _ { t = 1 } ^ { T }$ 7 Draw $z _ { 1 : T } ^ { ( i ) }$ by $ { \boldsymbol { z } } _ { t } ^ { ( i ) } \sim { \mathrm { C a t } } ( { \boldsymbol { \gamma } } _ { t } ^ { ( i ) } )$ ; // Given $\{ \gamma _ { t } ^ { ( i ) } \} _ { t = 1 } ^ { T }$ (see Eq.25) 8 Draw θ(i) using Algorithm 3 ; // Calibrate IDM given µ(i), Λ(i), θ(i[)B] , z(i) 9 for iteration $k ^ { \mathrm { [ B ] } } = 1$ to $K ^ { [ \mathrm { B } ] }$ do 10 Draw $\sigma _ { k ^ { [ \mathrm { B } ] } } ^ { ( i ) }$ by $\sigma _ { k ^ { [ \mathrm { B } ] } } ^ { 2 ( i ) } \sim \mathcal { T } \mathcal { G } ( \gamma _ { a } ^ { \star } , \gamma _ { b } ^ { \star } )$ ; // Given θ (i[)B] , z(i) (see Eq.(35)) 112 for iDtrearawt µ(xi,)k[ $\mu _ { x , k ^ { \left[ \mathrm { S } \right] } } ^ { \left( i \right) } , \Lambda _ { x , k ^ { \left[ \mathrm { S } \right] } } ^ { \left( i \right) }$ $k ^ { \mathrm { [ S ] } } = 1$ to by $K ^ { [ \mathrm { S } ] }$ $\mathcal { N W }$ do ; // Given $z _ { 1 : T } ^ { ( i ) }$ (see Eq.(36)) 13 Draw $\mu ^ { ( i ) }$ and $\mathbf { A } ^ { ( i ) }$ by $\mathcal { N W }$ ; // Given $\Theta ^ { ( i ) }$ (see Eq.(33)) 14 if $i > m _ { 1 }$ then 15 Collect π(i), σ(i), Θ(i), $z _ { 1 : T } ^ { ( i ) }$ , $\mu _ { x } ^ { \left( i \right) }$ , $\pmb { \Lambda } _ { \pmb { x } } ^ { ( i ) }$ , $\pmb { \mu } ^ { ( i ) }$ and $\mathbf { \Lambda } \Lambda ^ { ( i ) }$ ; 16 return π, σ, Θ, $z _ { 1 : T }$ , $\mu _ { x }$ , $\mathbf { \Delta } \Lambda _ { x }$ , $\pmb { \mu }$ , and $\pmb { \Lambda }$ .
# 3.3.1. Sample Latent States $z _ { 1 : T }$
In the following, we introduce the Forward-Backward Algorithm (see Algorithm 2) to sample $z _ { 1 : T } ^ { [ \mathrm { B } ] }$ and $z _ { 1 : T } ^ { [ \mathrm { S } ] }$ Firstly, we define
$$
\begin{array} { r l } & { \alpha _ { t } \left( z _ { t } \right) : = p \left( { \pmb y } _ { 1 : t } , { \pmb x } _ { 1 : t } , { \pmb z } _ { t } \right) , } \\ & { \beta _ { t } \left( z _ { t } \right) : = p \left( { \pmb y } _ { t + 1 : T } , { \pmb x } _ { t + 1 : T } \mid { \pmb z } _ { t } \right) , } \\ & { \gamma _ { t } \left( z _ { t } \right) : = p \left( { z } _ { t } \mid { \pmb y } _ { 1 : T } , { \pmb x } _ { 1 : T } \right) . } \end{array}
$$
Then we can obtain
$$
p \left( { { y } _ { 1 } } _ { : T } , { { x } _ { 1 : T } } , { { z } _ { t } } \right) = p \left( { { y } _ { 1 : t } } , { { x } _ { 1 : t } } , { { z } _ { t } } \right) \cdot p \left( { { y } _ { t + 1 : T } } , { { x } _ { t + 1 : T } } \mid { { y } _ { 1 : t } } , { { x } _ { 1 : t } } , { { z } _ { t } } \right) = { \alpha } _ { t } \left( { { z } _ { t } } \right) \cdot { { \beta } _ { t } } \left( { { z } _ { t } } \right) ,
$$
and
$$
\gamma _ { t } \left( z _ { t } \right) = \frac { p \left( y _ { 1 : T } , x _ { 1 : T } , z _ { t } \right) } { p \left( y _ { 1 : T } , x _ { 1 : T } \right) } = \frac { \alpha _ { t } \left( z _ { t } \right) \cdot \beta _ { t } \left( z _ { t } \right) } { p \left( y _ { 1 : T } , x _ { 1 : T } \right) } = \frac { \alpha _ { t } \left( z _ { t } \right) \cdot \beta _ { t } \left( z _ { t } \right) } { \sum _ { z _ { t } } \alpha _ { t } \left( z _ { t } \right) \cdot \beta _ { t } \left( z _ { t } \right) } .
$$
In the following, we will derive the iterative form of $\alpha _ { t } ( z _ { t } )$ , $\beta _ { t } ( z _ { t } )$ , and therefore $\gamma _ { t } ( z _ { t } )$ . For the forward passes, $\forall z _ { t } \in \mathcal { Z }$ we have
$$
\begin{array}{c} \begin{array} { l } { \displaystyle \alpha _ { 1 } \left( z _ { 1 } \right) = \Psi _ { 1 } ( z _ { 1 } ) \cdot p \left( z _ { 1 } \right) , } \end{array} \qquad \mathrm { ~ } \qquad t = 1 , \\ { \displaystyle \alpha _ { t } \left( z _ { t } \right) = \Psi _ { t } ( z _ { t } ) \sum _ { z _ { t - 1 } } \alpha _ { t - 1 } \left( z _ { t - 1 } \right) \cdot \pi _ { z _ { t - 1 } , z _ { t } } , \qquad \qquad t = 2 , \ldots , T . } \end{array}
$$
To simplify the notations, here we organize $\boldsymbol { \alpha } _ { t } \in \mathbb { R } ^ { | \mathcal { Z } | }$ as a vector. Then Eq. (20b) can be expressed in a more efficient form as
$$
\begin{array} { r } { \pmb { \alpha } _ { t } = \pmb { \pi } \left( \pmb { \alpha } _ { t - 1 } \odot \pmb { \Psi } _ { t } \right) , } \end{array}
$$
where $a \odot b$ represents the Hadamard product.
# Algorithm 2: Forward-Backward Algorithm
For the backward passes, $\forall z _ { t } \in \mathcal { Z }$ we can derive
$$
\begin{array} { r l } & { \beta _ { t } \left( z _ { t } \right) = \displaystyle \sum _ { z _ { t + 1 } } \beta _ { t + 1 } \left( z _ { t + 1 } \right) \cdot \Psi _ { t + 1 } ( z _ { t + 1 } ) \cdot \pi _ { z _ { t } , z _ { t + 1 } } , \qquad t = 1 , \ldots , T - 1 , } \\ & { } \\ { \beta _ { T } \left( z _ { T } \right) = 1 , \qquad } & { t = T . } \end{array}
$$
Similarly, we organize $\beta _ { t } \in \mathbb { R } ^ { | \mathcal { Z } | }$ as a vector as well, then we can obtain the following form based on Eq. (22a), written as
$$
\beta _ { t } = \pi ^ { \top } \left( \beta _ { t + 1 } \odot \Psi _ { t + 1 } \right) .
$$
Therefore, we have
$$
\gamma _ { t } = \frac { \alpha _ { t } \odot \beta _ { t } } { \alpha _ { t } ^ { \top } \beta _ { t } } \in \mathbb { R } ^ { | \mathcal { Z } | } .
$$
For each time $t$ , we can sample the joint latent state $( z _ { t } ^ { [ \mathrm { B } ] } , z _ { t } ^ { [ \mathrm { S } ] } )$ from the posterior:
$$
\left( z _ { t } ^ { \left[ \mathrm { B } \right] } , z _ { t } ^ { \left[ \mathrm { S } \right] } \right) \sim \mathrm { C a t } \left( \gamma _ { t } \right) .
$$
Repeat this process for $t = 1 , \ldots , T$ to obtain the sequence $z _ { 1 : T }$ .
# 3.3.2. Sample Transition Matrix $\pi$
For each row $\pi _ { ( k ^ { \prime } , : ) }$ , we define the sufficient statistics as the counts of state transitions from state $\pmb { k } ^ { \prime }$ to state $\boldsymbol { k }$ over the entire sequence:
$$
n _ { k ^ { \prime } , k } = \sum _ { t = 2 } ^ { T } \mathbb { I } ( z _ { t - 1 } = k ^ { \prime } , z _ { t } = k ) ,
$$
where $\mathbb { I }$ is the indicator function.
Given the counts $n _ { k ^ { \prime } , k }$ , we sample $\pi _ { ( k , : ) }$ from the Dirichlet distribution:
$$
\pi _ { ( k ^ { \prime } , : ) } \sim \operatorname * { D i r } ( c _ { k ^ { \prime } } + n _ { k ^ { \prime } } ) ,
$$
where ${ \boldsymbol { n } _ { k ^ { \prime } } \in \mathbb { R } ^ { | \mathcal { Z } | } }$ collects the transition counts for the $\pmb { k } ^ { \prime }$ -th row of $\pi$ .
3.3.3. Sample the IDM Parameters $\boldsymbol \theta _ { k ^ { [ \mathrm { B } ] } }$ (Metropolis-Hastings Sampling)
We define a proposal distribution $q ( \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { \prime } \mid \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i ) } )$ as a Gaussian centered at the current state θ(i[)B] , such that the proposed parameters $\pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { \prime }$ is sampled according to
$$
\begin{array} { r } { \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { \prime } \sim \mathcal { N } \left( \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i ) } , \pmb { \Sigma } _ { q } \right) , } \end{array}
$$
where $\Sigma _ { q }$ is the covariance matrix of the proposal.
# Algorithm 3: Metropolis-Hasting (MH) Sampling (one step) for IDM Calibration.
Input : Driving behavior observation $\hat { y } _ { 1 : T }$ ; latent state assignments $z _ { 1 : T }$ ; IDM parameter set $\Theta ^ { ( i ) }$ ; proposal covariance matrix $\Sigma _ { q }$ ; local evidence $\psi _ { 1 : T }$ ; prior $p ( \pmb \theta )$ . Output : Updated IDM parameter set $\Theta ^ { ( i + 1 ) }$ . 1 for iteration $k ^ { \left[ \mathrm { B } \right] } = 1$ to $K ^ { [ \mathrm { B } ] }$ do 2 Draw $\pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { \prime } \sim \mathcal { N } ( \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i ) } , \pmb { \Sigma } _ { q } )$ ; // Propose a candidate (see Eq.(28)) 3 Compute the acceptance rate $A \left( \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { \prime } , \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { ( i ) } \right)$ using Eq. (30), given Eqs. (29) and (31); 4 Draw a random number $p \sim \mathrm { U n i f o r m } ( 0 , 1 )$ ; 5 if $p < A ( \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { \prime } , \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i ) } )$ θ(ki[)B]) then 6 $\underline { { \boldsymbol \theta } } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i + 1 ) } = \boldsymbol \theta _ { k ^ { [ \mathrm { B } ] } } ^ { \prime }$ // Accept candidate (see Eq.(32)) 7 else 8 θ(ki[B+]1) (i[)B] ; // Reject candidate (see Eq.(32)) 9 return Θ(i+1).
According to Eq. (5), we have
$$
\begin{array} { r l } & { p ( \pmb { y } _ { 1 : T } \mid \pmb { x } _ { 1 : T } , \pmb { z } _ { 1 : T } , \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ) \propto \displaystyle \prod _ { t \in \mathcal { T } _ { k ^ { [ \mathrm { B } ] } } } \mathcal { N } \left( \pmb { y } _ { t } ; \mathrm { I D M } \left( \pmb { x } _ { t } ; \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } \right) , \sigma _ { k ^ { [ \mathrm { B } ] } } ^ { 2 } \right) } \\ & { \qquad \quad = \displaystyle \prod _ { t \in \mathcal { T } _ { k ^ { [ \mathrm { B } ] } } } \psi _ { t } ^ { [ \mathrm { B } ] } ( z _ { t } ^ { [ \mathrm { B } ] } ) , \quad \forall \pmb { k } \in \mathcal { Z } , } \end{array}
$$
where $\mathcal { T } _ { k ^ { [ \mathrm { B } ] } } : = \{ t \mid z _ { t } ^ { [ \mathrm { B } ] } = k ^ { [ \mathrm { B } ] } \}$ .
The acceptance probability $A$ for the proposed parameters $\theta _ { k ^ { \mathrm { [ B ] } } } ^ { \prime }$ is given by
$$
A \left( \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { \prime } , \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { ( i ) } \right) = \operatorname* { m i n } \left( 1 , \frac { p ( \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { \prime } ) \cdot p ( \pmb y _ { 1 : T } \mid \pmb x _ { 1 : T } , \pmb z _ { 1 : T } , \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { \prime } ) } { p \left( \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { ( i ) } \right) \cdot p ( \pmb y _ { 1 : T } \mid \pmb x _ { 1 : T } , \pmb z _ { 1 : T } , \pmb \theta _ { k ^ { \mathrm { [ B ] } } } ^ { ( i ) } ) } \right) ,
$$
where according to Eq. (13a),
$$
p \left( \pmb \theta _ { k ^ { \left[ \mathrm { B } \right] } } \right) = \mathcal { L } \mathcal { N } \left( \ln \left( \pmb \theta _ { k ^ { \left[ \mathrm { B } \right] } } \right) ; \ln ( \pmb \mu ) , \pmb \Lambda ^ { - 1 } \right) ,
$$
and $\mathcal { L N } ( \cdot )$ represents the log-normal distribution. Note that the ratio $q \left( \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { ( i ) } \mid \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { \prime } \right) / q \left( \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { \prime } \mid \pmb { \theta } _ { k ^ { \left[ \mathrm { B } \right] } } ^ { ( i ) } \right)$ of symmetric proposal probabilities simplifies as it cancels out for forward and reverse moves. The next sample $\pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } ^ { ( i + 1 ) }$ is then determined by
$$
\pmb \theta _ { k ^ { [ \mathrm { B } ] } } ^ { ( i + 1 ) } = \left\{ \pmb \theta _ { k ^ { [ \mathrm { B } ] } } ^ { \prime } , \quad \mathrm { w . p . ~ } A , \right.
$$
The MH sampling processes are summarized in Algorithm 3.
# 3.3.4. Sample $\pmb { \mu }$ and $\pmb { \Lambda }$
Due to the normal-Wishart conjugacy, we derive the posteriors as:
$$
\begin{array} { r } { \ln ( \pmb { \mu } ) \mid \Lambda , \{ \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } \} \sim \mathcal { N } \left( \ln ( \pmb { \mu } ^ { \prime } ) , ( \kappa ^ { \prime } \pmb { \Lambda } ) ^ { - 1 } \right) , } \\ { \Lambda \mid \{ \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } \} \sim \mathcal { W } ( \nu ^ { \prime } , \pmb { W } ^ { \prime } ) , \qquad } \end{array}
$$
where
$$
\begin{array} { l } { { \displaystyle \nu ^ { \prime } = \nu _ { 0 } + K ^ { [ \mathbb { R } ] } , } } \\ { { \displaystyle \kappa ^ { \prime } = \kappa _ { 0 } + K ^ { [ \mathbb { R } ] } , } } \\ { { \displaystyle \ln ( \tilde { \mu } ) = \frac { 1 } { K ^ { [ \mathbb { R } ] } } \sum _ { k } \ln ( \theta _ { k } \mathbb { m } ) ] , } } \\ { { \displaystyle S = \sum _ { k } ( \ln ( \theta _ { k } | \mathbf { s } ) ) - \ln ( \tilde { \mu } ) ( \ln ( \theta _ { k } | \mathbf { s } ) ) - \ln ( \tilde { \mu } ) ) ^ { \top } , } } \\ { { \displaystyle W ^ { \prime } = W _ { 0 } + S + \frac { \kappa _ { 0 } K ^ { [ \mathbb { R } ] } } { \kappa _ { 0 } + K ^ { [ \mathbb { R } ] } } ( \ln ( \tilde { \mu } ) - \ln ( \mu _ { 0 } ) ) ( \ln ( \tilde { \mu } ) - \ln ( \mu _ { 0 } ) ) ^ { \top } , } } \\ { { \displaystyle \ln ( \mu ^ { \prime } ) = \frac { \kappa _ { 0 } \ln ( \mu _ { 0 } ) + K ^ { [ \mathbb { R } ] } \ln ( \tilde { \mu } ) } { \kappa _ { 0 } + K ^ { [ \mathbb { R } ] } } . } } \end{array}
$$
# 3.3.5. Sample Observation Noise Variance $\sigma ^ { 2 }$
Given the normal-inverse-Gamma conjugacy, we have the posterior as
$$
\sigma _ { k ^ { [ \mathrm { B } ] } } ^ { 2 } \mid \{ y _ { t } \} _ { t \in \mathcal { T } _ { k ^ { [ \mathrm { B } ] } } } , \pmb { \theta } _ { k ^ { [ \mathrm { B } ] } } \sim \mathcal { T } \mathcal { G } ( \gamma _ { a } ^ { \star } , \gamma _ { b } ^ { \star } ) ,
$$
# 3.3.6. Sample $\pmb { \mu } _ { \pmb { x } }$ and $\Lambda _ { x }$
We define $\mathcal { T } _ { k ^ { [ \mathrm { S } ] } } : = \{ t | z _ { t } ^ { [ \mathrm { S } ] } = k ^ { [ \mathrm { S } ] } \}$ . The posterior distribution of $\pmb { \mu } _ { \pmb { x } , k } [ \mathrm { S } ]$ and $\mathbf { \Lambda } _ { \mathbf { \boldsymbol { x } } , k [ \mathrm { \boldsymbol { S } } ] }$ is derived using the normalWishart conjugacy
$$
\begin{array} { r l } & { \mu _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } \mid \Lambda _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } , \{ x _ { t } \} _ { t \in \mathcal { T } _ { k ^ { [ \mathrm { S } ] } } } \sim \mathcal { N } \left( \mu _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } ^ { \prime } , \big ( \kappa _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } ^ { \prime } \pmb { \Lambda } _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } \big ) ^ { - 1 } \right) , } \\ & { \qquad \Lambda _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } \mid \{ x _ { t } \} _ { t \in \mathcal { T } _ { k ^ { [ \mathrm { S } ] } } } \sim \mathcal { W } \big ( \nu _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } ^ { \prime } , W _ { \pmb { x } , k ^ { [ \mathrm { S } ] } } ^ { \prime } \big ) , } \end{array}
$$
where
$$
\begin{array} { r l } & { \nu _ { x , k ^ { [ s ] } } ^ { \prime } = \nu _ { x , 0 } + \left| \mathcal { T } _ { k ^ { [ s ] } } \right| , } \\ & { \kappa _ { x , k ^ { [ s ] } } ^ { \prime } = \kappa _ { x , 0 } + \left| \mathcal { T } _ { k ^ { [ s ] } } \right| , } \\ & { \quad \mathcal { \bar { X } } _ { k ^ { [ s ] } } = \frac { 1 } { \left| \mathcal { T } _ { k ^ { [ s ] } } \right| } \prod _ { t \in \mathcal { T } _ { k ^ { [ s ] } } } x _ { t } , } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \end{array}
$$
and $\vert \mathcal { T } _ { k [ \mathrm { S } ] } \vert$ , $\boldsymbol { \mathbf { \mathit { x } } } _ { k ^ { [ \mathrm { S } ] } }$ , and $S _ { k ^ { \mathrm { [ S ] } } }$ are the number, the sample mean, and the sample covariance of the data points assigned to component $k ^ { \mathrm { [ S ] } }$ .
# 3.4. Computational Cost Analysis
The FHMM-IDM model involves a joint latent space of driving regimes $( K ^ { \left\lfloor \mathrm { B } \right\rfloor } )$ and traffic scenarios $( K ^ { \mathrm { | S | } }$ ), with total joint states $| \mathcal { Z } | = K ^ { { { [ B ] } } } \times K ^ { { { [ S ] } } }$ . Learning is performed via MCMC, which requires repeated inference on multiple sequences of total length $T$ and over $M = m _ { 1 } + m _ { 2 }$ iterations. And $| \pmb \theta |$ is the dimension of the IDM parameters, typically $| \pmb \theta | = 5$ .
Table 2 summarizes the dominant computational costs per MCMC iteration, per trajectory. While the method is computationally intensive, it remains feasible using modern computing resources and can be parallelized over trajectories.
Table 2 – Per-iteration computational cost of the MCMC inference for FHMM-IDM.
# 4. IDENTIFICATION OF INTERPRETABLE DRIVING REGIMES
# 4.1. Dataset and Preprocessing
Experiments are performed on the HighD dataset that contains high-resolution naturalistic vehicle trajectories extracted from drone videos of German highways. Compared to the commonly used NGSIM dataset, the HighD dataset benefits from more reliable data capture methods and advanced computer vision techniques. It features 60 recordings captured at different times of the day, ranging from 8:00 to 17:00, and has a resolution of 25 Hz. In our experiment, the original dataset is downsampled to a smaller set with a sampling frequency of 5 Hz, achieved by uniformly selecting every 5-th sample. The HighD dataset provides detailed information on vehicle trajectories, velocities, and accelerations, which is essential for developing and evaluating car-following models that accurately capture real-world traffic scenarios. In this study, we follow the same data processing procedures as in Zhang and Sun (2024) to transform the data into a new coordinate system. we selected 100 leader-follower pairs where the car-following duration lasted for more than 50 seconds. By using pairs with longer car-following duration, we aim to capture more realistic driving behaviors and enable our model to handle complex and dynamic traffic situations better.
# 4.2. Modeling Setup and Assumptions
In our FHMM-IDM framework, several standard Bayesian choices—such as Dirichlet priors for the transition matrix, and Normal-Wishart priors for Gaussian emissions—are adopted for analytical tractability and empirical robustness. We now assess whether these assumptions are suitable for the observed driving data and behavior dynamics.
• Dirichlet Prior on Transition Matrix $\pi$ : Each row of the joint transition matrix $\pi$ is assigned an independent Dirichlet prior with symmetric concentration parameters. This encourages sparse transitions, reflecting empirical observations where drivers remain in the same latent mode over time. While effective for capturing persistence, this assumption does not model structured preferences among state transitions. More flexible priors, such as hierarchical Dirichlet or logistic-normal distributions, could encode such asymmetries, but the standard Dirichlet provides a good balance between simplicity and expressiveness in our setting (Zhang et al., 2021).
• Normal–Wishart Prior on Scenario Emissions: For each traffic scenario, the model assumes $\begin{array} { r } { \pmb { x } _ { t } \ = } \end{array}$ $[ v _ { t } , \Delta v _ { t } , s _ { t } ]$ follows a multivariate Gaussian distribution parameterized by $\pmb { \mu } _ { x }$ and $\boldsymbol { \Lambda } _ { x } ^ { - 1 }$ , with a Normal–Wishart prior (Chen et al., 2023). Although variables like $s _ { t }$ and $\Delta v _ { t }$ may be skewed in raw form, we apply standardization across the dataset, resulting in approximately symmetric and unimodal distributions within each regime. Therefore, Gaussian emissions are a reasonable assumption. Nonetheless, future work could explore heavy-tailed or skewed distributions to better capture extreme events.
• Gaussian Noise in Acceleration Residuals $y _ { t }$ : Given the driver behavior state $z _ { t } ^ { \mathrm { [ B ] } } = k$ , we model the acceleration as $y _ { t } \sim \mathcal { N } ( \mathrm { I D M } ( \pmb { x } _ { t } ; \pmb { \theta } _ { k } ) , \sigma _ { k } ^ { 2 } )$ . This Gaussian residual assumption implies that the model treats the deviations from deterministic IDM responses as temporally uncorrelated noise (i.e., independent and identically distributed). However, prior works (e.g., Zhang and Sun (2024); Zhang et al. (2024b)) have shown that residual acceleration errors in car-following behaviors can exhibit non-negligible temporal autocorrelations, especially under stop-and-go or high-density traffic conditions. While our current formulation assumes independence across time for computational efficiency and clarity, incorporating temporally correlated noise—for example, via a latent residual process or a GP-modulated emission—could enhance realism and improve performance in long-horizon trajectory prediction. This represents a promising direction for extending the FHMM-IDM framework.
Table 3 – Clarification of key terminologies used in this study. Latent States are latent model variables inferred by the FHMM from trajectory data, and each state jointly characterizes a unique combination of an external Traffic Scenario and a driver’s internal Driving Regime. A Traffic Scenario refers to the external contextual conditions (e.g., congestion, free-flow) under which drivers operate, while a Driving Regime represents a short-term behavioral mode or specific driving action (e.g., aggressive acceleration, cautious following). Real-world trajectory Cases are empirical examples selected from the highD dataset to demonstrate representative behaviors and validate the model’s capability to capture interactions between scenarios and regimes.
• Log-Normal Prior on IDM Parameters $\pmb { \theta } _ { k }$ : To capture the positive and skewed nature of IDM parameters, we place log-normal priors on $\pmb { \theta } _ { k }$ . This choice is supported by empirical distributions from the literature (e.g., Treiber et al. (2000); Zhang and Sun (2024)). Posterior samples across different behavior states stay within realistic and interpretable ranges.
The modeling assumptions in FHMM-IDM are well-aligned with empirical characteristics of naturalistic driving data. While conjugate priors and Gaussian likelihoods offer tractability and adequate performance, the framework could be extended with more flexible or robust alternatives—such as non-conjugate priors, heavy-tailed noise models, or structured transition dependencies—to better capture rare or extreme behaviors.
For the implementation details of the priors, we set $\mathrm { D i r } ( 1 / K ^ { [ \mathrm { b } ] } , \cdot \cdot \cdot , 1 / K ^ { [ \mathrm { b } ] } )$ for Dirichlet distribution to imply sparse state assignments. The hyperparameters for the prior of IDM is set with $\mu _ { 0 } = [ 3 3 , 2 , 1 . 6 , 1 . 5 , 1 . 6 7 ]$ (as suggested by Treiber et al. (2000)), $\kappa _ { 0 } = 0 . 0 1$ , $\nu _ { 0 } = 7$ , and $\begin{array} { r } { W _ { 0 } = \mathrm { d i a g } ( [ 0 . 1 , 0 . 1 , 0 . 1 , 0 . 1 , 0 . 1 ] , 0 . 1 ] , } \end{array}$ ) as the covariance matrix. We set $\gamma _ { a } = 1 0 0$ and $\gamma _ { b } = 1$ for the inverse Gamma prior to suppress the noise variances. For the conjugate normal-Wishart priors, we set $\nu _ { { \bf { x } } , 0 } = 5$ , $\kappa _ { \pmb { x } , 0 } = 0 . 0 1$ , $\mu _ { x , 0 } = [ 0 , 0 , 0 ]$ (with standardization) and $W _ { x , 0 } = \mathrm { d i a g } ( [ 0 . 1 , 0 . 1 , 0 . 1 ] )$ . For each chain, the burn-in iteration is set as $m _ { 1 } = 6 0 0 0$ , and we collect $m _ { 2 } = 2 0 0 0$ samples to estimate the posteriors. The code will be released at https://github.com/Chengyuan-Zhang/Markov_Switching_IDM upon acceptance of the paper. It is implemented purely with NumPy, without relying on integrated probabilistic programming frameworks such as PyMC or Stan.
# 4.3. Interpretable Latent States
In the following, we demonstrate the experiments results with ${ \bf \cal K } ^ { \mathrm { [ B ] } } = 2 , { \cal K } ^ { \mathrm { [ S ] } } = 2 ,$ ) and $( K ^ { [ \mathrm { B } ] } = 5 , K ^ { [ \mathrm { S } ] } = 5 )$ , respectively. For each $k ^ { [ \mathrm { B } ] } \in [ 1 , \dots , K ^ { [ \mathrm { B } ] } ]$ , we analyze the corresponding driving regime, and for each $k ^ { [ \mathrm { S } ] } \in [ 1 , \dots , K ^ { [ \mathrm { S } ] } ]$ , we show the corresponding traffic scenario. A summary of the terminology used to distinguish among latent states, driving regimes, traffic scenarios, and case studies is provided in Table 3.
Table 4 outlines the learned IDM parameters along with the corresponding standard deviation $\sigma _ { k }$ for each driving regime $k ^ { \left[ \mathrm { B } \right] }$ . The standard deviation $\sigma _ { k }$ reflects the uncertainty in the parameter estimates for each driving regime, highlighting the model’s ability to capture the variability in driver behavior across different regimes. These results demonstrate how the FHMM-IDM framework effectively identifies and characterizes multiple driving regimes based on the underlying patterns in car-following behavior. When $K ^ { \mathrm { [ B ] } } = 1$ , the model reduces to a conventional single-regime IDM (i.e., the pooled Bayesian IDM (Zhang and Sun, 2024)), producing an “Averaged Behavior” that aggregates across all driving conditions. While this baseline provides a coarse fit, it fails to account for the diversity and temporal variability present in real-world trajectories.
It is interesting to observe that when the model is configured with $K ^ { \mathrm { [ B ] } } = 2$ and $K ^ { [ \mathrm { S } ] } = 2$ , the FHMM-IDM yields a binary segmentation of both driving regimes and traffic scenarios (see Table 4 and Table 5). In this setting, Regime #1 corresponds to a High-Speed Seeking behavior, characterized by a high free-flow speed, short desired time headway, and moderate acceleration and braking capabilities. This regime reflects proactive and assertive driving under relatively unconstrained conditions. In contrast, Regime #2 reflects a Congested Cruising mode, with low speed preference, large desired spacing, long headway, and minimal responsiveness, indicative of passive, slow-paced behavior commonly seen in stop-and-go traffic. Readers interested in similar outcomes may find Zhang et al. (2023) to be a useful reference. The two inferred traffic scenarios similarly reflect a coarse partition into high-speed/large-gap and low-speed/small-gap environments, capturing the broad contextual distinctions in which these driving patterns occur.
Table 4 – Learned IDM parameters $( \pmb \theta _ { k } )$ ) and noise standard deviation ( $\sigma _ { k }$ ) for each driving regime.
Table 5 – Learned parameters of each traffic scenario latent state. Each scenario is characterized by the mean speed $\mu _ { v }$ , relative speed $\mu \Delta v$ , and spacing $\mu _ { s }$ , forming the mean vector $\mu _ { x , k ^ { [ \mathrm { S } ] } }$ . The interpretation column describes the typical traffic condition reflected by each state, inferred from statistical patterns and their behavioral context.
Although this coarse binary partitioning captures a basic dichotomy between fast, gap-closing behavior and conservative, gap-maintaining behavior, it inevitably oversimplifies the diversity of driving actions observed in naturalistic trajectories. For instance, it fails to distinguish between transitional regimes such as steady-state following, acceleration bursts, or braking responses, which are critical for understanding the dynamics of car-following interactions. To more faithfully represent these variations, we increase the number of latent states to $K ^ { [ \mathrm { B } ] } = 5$ and $K ^ { \mathrm { [ S ] } } = 5$ , which enables the model to uncover a more nuanced and granular structure, revealing five distinct driving regimes and traffic scenarios that better capture the range of human driving behaviors and their contextual dependencies.
To better understand the behavioral distinctions uncovered by the model, we examine the characteristics of each inferred driving regime based on the calibrated IDM parameters listed in Table 4. Regime #1 (Cautious Following) represents cautious driving with moderate desired speed, relatively large desired gap, long headway, and gentle acceleration capabilities, indicative of defensive and careful gap management. Regime #2 (Aggressive Following with Abrupt Deceleration) characterizes assertive driving with short headways, high acceleration capability, and notably large deceleration capacity, indicative of aggressive gap management combined with readiness for abrupt braking events. Regime #3 (Congested Cruising) corresponds to cautious driving behaviors in heavy congestion, characterized by very low desired speed, large spacing, long time headway, and minimal acceleration. Regime #4 (Steady-State Following) captures balanced and stable tracking behavior, marked by moderate desired speed, moderate headway, and balanced acceleration and deceleration, suitable for stable car-following under moderate conditions. Finally, Regime #5 (High-Speed Seeking) represents confident driving aiming for high-speed operation, characterized by very high desired speed, short headway, and high acceleration capability, reflecting proactive, high-speed cruising behavior. Together, these five regimes span a diverse spectrum of driver actions, substantially enhancing the behavioral realism and interpretability of the FHMM-IDM framework.
Figure 4 – The indexing mechanism and the state transition matrix.
Figure 5 – Visualization of the covariance matrices Λx−,1k[S] and the corresponding correlation matrices.
With $K ^ { \mathrm { [ B ] } } = 5$ and $K ^ { \mathrm { [ S ] } } = 5$ , Fig. 4 illustrates the indexing mechanism and the state transition matrix of the FHMM-IDM model. In this formulation, the latent state is factorized into two independent components: the driving regime factor, $z _ { t } ^ { [ \mathrm { B } ] }$ , which reflects intrinsic patterns of driver action (e.g., acceleration, deceleration, cruising), and the traffic scenario factor, $z _ { t } ^ { [ \mathrm { S } ] }$ , which encodes external conditions such as speed and spacing. Each joint latent state is represented as a pair $\big ( k ^ { \mathrm { [ B ] } } , k ^ { \mathrm { [ S ] } } \big )$ and mapped to a unique index in a vectorized state space, allowing the construction of a unified state transition matrix $\pi \in \mathbb { R } ^ { | \mathcal { Z } | \times | \mathcal { Z } | }$ . Each entry $\pi ( \boldsymbol { k } ^ { \prime } , \boldsymbol { k } )$ denotes the probability of transitioning from the joint state indexed by $\pmb { k } ^ { \prime }$ to that indexed by $\boldsymbol { k }$ , thereby modeling the temporal evolution and interaction between driver action regimes and contextual traffic environments.
Table 5 summarizes the learned mean vectors $\mu _ { x , k ^ { \mathrm { [ S ] } } } = [ \mu _ { v } , \mu _ { \Delta v } , \mu _ { s } ]$ for each latent traffic scenario under varying model complexities. When $K ^ { \mathrm { [ S ] } } = 1$ , the model reduces to a context-agnostic formulation, producing an “averaged traffic” scenario that blends behaviors across all regimes and fails to distinguish between qualitatively different traffic conditions. As the number of latent traffic scenarios increases, the model uncovers progressively finer distinctions. With $K ^ { [ \mathrm { { S } ] } } = 2$ , the model differentiates between Congested and Dense Traffic and High-Speed Cruising, capturing the broad dichotomy between low-speed/high-density and free-flowing conditions. However, this binary categorization remains too coarse to reflect transient or intermediate states. The five-scenario model ( $K ^ { \mathrm { [ S ] } } = 5$ ) provides a more expressive segmentation, revealing nuanced traffic contexts such as approaching behavior in stop-and-go waves (Scenario #1), gradual dissipation phases with large spacing and decaying congestion (Scenario #2), and steady-state following where drivers maintain consistent gaps and speed differentials (Scenario $\# 3$ ). In contrast, Scenario #4 corresponds to highly congested, short-gap conditions, while Scenario #5 captures smooth high-speed cruising. These patterns align with observed macroscopic flow phenomena and highlight the model’s ability to extract interpretable
Figure 6 – The histogram of driving regimes for each scenario.
latent structure from raw car-following data.
Fig. 5 provide the learned covariance matrices $\Lambda _ { x , k ^ { \mathrm { [ S ] } } } ^ { - 1 }$ , with Table 5 which jointly show how the FHMM-IDM model distinguishes scenario-specific relationships among speed $v$ , gap $s$ , and speed difference $\Delta v$ under each latent driving-scenario state $k ^ { \mathrm { [ S ] } }$ . In Fig. 5, each column corresponds to a unique latent scenario $k ^ { \left[ \mathrm { S } \right] }$ . In the top row, the color scales capture the covariance between these variables, while in the bottom row, the normalized correlation matrices highlight the same relationships bounded within $[ - 1 , 1 ]$ . Notably, different states exhibit distinct off-diagonal elements, revealing that each latent scenario reflects a characteristic pattern of co-movements and variability among $\{ v , s , \Delta v \}$ . As a result, the FHMM-IDM framework effectively uncovers traffic regimes in which drivers may exhibit strong correlations in speed and gap in one scenario, but a contrasting pattern in another.
• Scenario #1 ( $k ^ { [ \mathrm { S } ] } = 1$ ). This state distinctly represents an approaching scenario commonly observed in congested or stop-and-go traffic conditions. As shown in Table 5, this scenario is characterized by a moderate vehicle speed ( $\mu _ { v } = 5 . 7 1 \mathrm { { m } / \mathrm { { s } } }$ ), a substantial positive relative speed ( $\mu _ { \Delta v } = 0 . 7 3 \mathrm { m } / \mathrm { s }$ ), and a relatively large headway ( $\mu _ { s } = 1 9 . 0 4 \mathrm { m }$ ). These statistics describe a traffic context in which the lead vehicle is nearly stationary or moving slowly, while the following vehicle continues to approach at a significantly higher speed—resulting in a rapidly closing gap despite active deceleration.
The correlation matrices in Fig. 5 further support this interpretation. A strong positive correlation between speed and gap suggests that vehicles traveling at higher speeds initially maintain longer headways. Meanwhile, a moderate negative correlation between gap and relative speed implies that drivers experience increasing closing rates as the gap shrinks, consistent with anticipatory deceleration in response to a slow or stopped lead vehicle.
Moreover, the histogram in Fig. 6 indicates that Scenario #1 frequently co-occurs with Regime #3 (congested cruising, $k ^ { [ \mathrm { B } ] } = 3$ ), as summarized in Table 4. This regime is marked by low desired speed, large spacing preferences, and cautious acceleration and braking parameters. The predominance of this pairing suggests that drivers tend to adopt conservative and anticipatory behaviors when approaching slower traffic—gradually reducing speed to maintain safety margins and avoid abrupt maneuvers. This finding highlights the model’s ability to capture meaningful interactions between contextual traffic scenarios and immediate driver action patterns.
• Scenario #2 ( $k ^ { \left[ \mathrm { S } \right] } = 2$ ). This scenario represents a distinctive car-following context characterized by relatively low mean speed ( $\mu _ { v } = 6 . 2 0 \mathrm { { m } / \mathrm { { s } } }$ ), a notably large gap ( $\mu _ { s } = 3 8 . 9 6 \mathrm { m }$ ), and a slightly negative relative speed ( $\mu _ { \Delta v } = - 0 . 3 4 \mathrm { m } / \mathrm { s }$ ), as reported in Table 5. The combination of low speed and generous spacing suggests a transitional state in which traffic is beginning to recover from congestion. Drivers in this scenario likely engage in gradual congestion dissipation or exhibit cautious behavior in a dense yet slowly improving traffic environment.
Fig. 5 shows a moderate positive correlation between gap (s) and relative speed $( \Delta v )$ , indicating that as vehicles maintain or slightly increase their headways, the magnitude of negative relative speed decreases. This pattern reflects drivers’ gentle adjustments in speed to preserve spacing and mitigate abrupt maneuvers, consistent with defensive behavior in transitional flow.
From the histogram in Fig. 6, Scenario #2 also frequently co-occurs with Regime #3 (congested cruising, $k ^ { [ \mathrm { B } ] } = 3$ ). This pairing further reinforces the interpretation that drivers in this scenario tend to adopt conservative, safety-oriented strategies by maintaining comfortably long gaps and adjusting their speeds cautiously in response to recovering traffic dynamics.
• Scenario #3 ( $k ^ { \mathrm { [ S ] } } = 3$ ). This scenario corresponds to moderate traffic conditions, characterized by a relatively low average speed $\mu _ { v } = 4 . 8 9 \mathrm { m / s }$ ), a moderate headway ( $\mu _ { s } = 1 2 . 6 7 \mathrm { m }$ ), and an almost neutral relative speed $\mu _ { \Delta v } = 0 . 0 2 \mathrm { m } / \mathrm { s }$ ), as shown in Table 5. The near-zero relative speed indicates that the following vehicle maintains a speed closely matched to that of the leader, resulting in a stable gap. This pattern is indicative of steady-state car-following behavior, where drivers operate under moderately dense but stable traffic flow.
The histogram in Fig. 6 reveals that Scenario #3 frequently co-occurs with Regime #1 (cautious following, $k ^ { \mathrm { [ B ] } } = 1$ ), Regime #3 (congested cruising, $k ^ { [ \mathrm { B } ] } = 3$ ), and Regime #4 (steady-state following, $k ^ { \mathrm { [ B ] } } = 4$ ). These regimes, according to Table 4, span a range of cautious to responsive behaviors, including large spacing preferences, low acceleration capacity, and moderate headway maintenance. This distribution suggests that drivers in this scenario engage in smooth, adaptive tracking of the lead vehicle, with limited acceleration or deceleration, depending on their behavioral disposition. The convergence of multiple regimes further highlights the versatility of steady-state following, as drivers with varying levels of conservatism or responsiveness consistently stabilize their speed in relation to the leader under moderate traffic conditions.
• Scenario #4 ( $k ^ { \mathrm { [ S ] } } = 4$ ). This scenario reflects a congested or dense traffic condition, with low average speed $\dot { \mu } _ { v } = 3 . 6 6 \mathrm { m } / \mathrm { s }$ ), small headway ( $\mu _ { s } = 6 . 9 0 \mathrm { m }$ ), and a slightly negative relative speed ( $\mu _ { \Delta v } = - 0 . 2 0 \mathrm { m } / \mathrm { s }$ ). The tight spacing and low speed indicate that the follower is closely trailing the leader and making frequent small adjustments to maintain a safe distance, a hallmark of car-following in congested flow.
Figure 6 reveals that Scenario #4 co-occurs with multiple driving regimes, including Regime #1 (cautious following), Regime #4 (steady-state tracking), and Regime #5 (high-speed cruising with quick adaptation). This wide range of co-occurring regimes suggests that drivers with different behavioral tendencies all converge on similarly dense following behaviors under constrained traffic conditions. Whether adopting which regime, drivers consistently manage close headways and subtle adjustments in speed to ensure safe following during congestion.
• Scenario #5 ( $k ^ { [ \mathrm { S } ] } = 5$ ). This scenario captures a higher-speed car-following state, characterized by the highest mean speed across all scenarios $\mu _ { v } = 1 0 . 2 2 \mathrm { m } / \mathrm { s }$ ), a moderate gap ( $\mu _ { s } = 1 6 . 5 4 \mathrm { m }$ ), and a slightly negative relative speed $\bar { \mu } _ { \Delta v } = - 0 . 1 7 \mathrm { m / s }$ ). The slight deceleration trend in the relative speed indicates that the follower maintains a stable cruising distance behind the lead vehicle, with gentle corrections to preserve spacing.
According to Fig. 6, Scenario #5 co-occurs with all five identified driving regimes, highlighting its prevalence across diverse driver behaviors. This widespread occurrence suggests that high-speed conditions with moderate spacing are a common operational state encountered by both assertive and cautious drivers alike. The consistency in this scenario’s co-occurrence pattern underscores its role as a baseline driving context in which individuals with varying action tendencies regulate headway and speed in a similarly stable manner.
Figure 7 demonstrates how the FHMM-IDM framework disentangles short-term driving regimes, traffic scenarios, and actual speed profiles from naturalistic vehicle trajectories on Lanes 3 and 4 of the HighD dataset. In the top row, each trajectory segment is colored by its inferred driving regime $z _ { t } ^ { [ \mathrm { B } ] }$ , revealing frequent transitions among driving regimes. The middle row displays the corresponding traffic scenario assignments $z _ { t } ^ { [ \mathrm { S } ] }$ , which capture contextual states ranging from Approaching (Stop-and-Go) through Gradual Dissipation and Dense Traffic, to High-Speed Cruising. Finally, the bottom row shows the raw vehicle speed ${ \boldsymbol { v } } _ { t }$ over time, allowing a direct visual comparison between the latent assignments and the true kinematic behavior.
A closer inspection of the layered panels reveals consistent co-occurrence patterns that validate the interpretability of the model. For example, Regime #3 (Congested Cruising) often coincides with Scenario #1 (Approaching) during deceleration phases; Regime #4 (Steady-State Following) aligns with dense traffic in low-speed, tightly spaced flow; and Scenario #5 (High-Speed Cruising) consistently matches the segments where the speed trace is highest. These results illustrate that FHMM-IDM successfully separates internal driver intent from external traffic context, and that the latent scenario labels correspond closely to observed speed dynamics.
# 4.4. Case Study
Figures 8, 9, 10, and 11 illustrate four representative trajectories in which the FHMM-IDM framework jointly infers short-term driving regime states $( k _ { t } ^ { \left[ \mathrm { B } \right] }$ ) in the upper panels and traffic scenario states $( k _ { t } ^ { [ \mathrm { S } ] } )$ in the lower panels. In each upper plot, the human driver’s measured acceleration (black line) is overlaid with the model’s regime-specific prediction (red line, with $K ^ { \mathrm { [ B ] } } = 5$ ), while the colored background denotes the inferred driving regime—ranging from Regime #1: Cautious Following to Regime #5: High-Speed Seeking. A grey line is included to represent the prediction from a single averaged IDM model (with $K ^ { \mathrm { [ B ] } } = 1$ ), highlighting the discrepancy caused by the one-to-one mapping assumption and underscoring the benefit of regime switching. In the corresponding lower plot, the vehicle’s speed ( $v$ , blue), gap (s, green), and relative speed ( $\Delta v$ , red) are shown, with the shaded background indicating the inferred traffic scenario states. Together, these visualizations demonstrate how the FHMM-IDM dynamically adapts to changing contexts, assigning interpretable regime and scenario labels while closely tracking the driver’s control inputs.
Figure 7 – Samples of time–space trajectories for vehicles in Lane $\mathcal { B }$ (left) and Lane 4 (right) from the HighD dataset. First row (Driving Regime Coloring): Each vehicle’s longitudinal position $\scriptstyle { x _ { t } }$ (in meters) is plotted against the frame index $\mathit { ( 2 5 ~ H z ) }$ , with pastel colors indicating the rounded posterior mean of the driving-regime state $z _ { t } ^ { [ \mathrm { B } ] }$ . Whenever the inferred regime changes, a new line segment is drawn with the corresponding color from the Paired colormap (see legend in the Lane 4 panel). This view illustrates how drivers switch among discrete behavioral modes (e.g., aggressive, defensive, relaxed) as they travel. Second row (Scenario Coloring): The same trajectories are recolored according to the rounded posterior mean of the traffic-scenario state $z _ { t } ^ { [ \mathrm { S } ] }$ . Again, changes in the inferred scenario trigger a new line segment, with pastel colors representing different regimes (Scenario #1–#5). This highlights how vehicles transition among traffic contexts (e.g., free-flow vs. congested) over time. Third row (Speed Coloring): The trajectories are colored by actual speed $\boldsymbol { v } _ { t }$ (in $m / s ,$ ) using a continuous colormap (blue $\approx$ high-speed, red $\approx$ low-speed). The vertical colorbar at right indicates the speed scale. Comparing all three rows reveals how driving regimes and traffic-scenario assignments correspond to underlying speed patterns—e.g., Scenario #4 often aligns with lower-speed segments, while Scenario #5 consistently aligns with higher-speed segments; Scenario #1 often co-occurs with Regime #3 in the approaching situations.
In Case I (Fig. 8), frequent transitions among Regime #1, Regime #3, and Regime #4 occur over successive segments of the trajectory, roughly covering the intervals 0–15 seconds, 15–35 seconds, and 35–60 seconds. These regime switches correspond closely to the sequence of scenario transitions: Scenario $\# 5 \to \# 3 \to \# 4 \to \# 3 \to \# 2$ $ \# 1$ . Scenario #5 represents high-speed conditions with moderate gaps, indicative of smooth, free-flowing traffic. Transitioning to Scenario #3 reflects moderate-speed conditions with relatively stable gaps, typical of steady-state traffic. Scenario #4 represents dense traffic characterized by reduced speeds and tighter spacing, requiring greater interaction and adaptation. A brief return to Scenario #3 indicates temporary relief from congestion. Scenario #2 introduces low-speed conditions with large gaps, reflecting cautious, transitional driving behavior during gradual flow dissipation. Finally, Scenario #1 captures highly congested, stop-and-go traffic marked by minimal speeds and short spacing. This chain of scenario transitions illustrates how the driver progressively adapts from free-flow to congested environments, with the model accurately capturing both behavioral responses and contextual changes.
Figure 8 – Case I: Comparison of human-driver acceleration (solid black) and FHMM-IDM predicted acceleration (solid red) with inferred driving regimes (background shading, top), and corresponding speed (v, blue), gap (s, green), and relative speed ( $\Delta v$ , red) with inferred traffic scenarios (background shading, bottom).
Figure 9 – Case II: Comparison of human-driver acceleration (solid black) and FHMM-IDM predicted acceleration (solid red) with inferred driving regimes (background shading, top), and corresponding speed (v, blue), gap (s, green), and relative speed ( $\Delta v$ , red) with inferred traffic scenarios (background shading, bottom).
Case II (Fig. 9) predominantly highlights Regime #1 (Cautious Following), Regime #3 (Congested Cruising), and Regime #4 (Steady-State Following), corresponding to the scenario sequence: Scenario $\# 5 \to \# 1 \to \# 3 \to \# 4 \to \# 3 \to$ $\# 1 \to \# 3 \to \# 4$ . The sequence begins with Scenario #5, reflecting high-speed cruising with moderate spacing, before shifting abruptly to Scenario #1, which denotes approaching behavior in highly congested conditions, marked by minimal gaps and low speeds. The subsequent transitions between Scenarios #3 and $\# 4$ indicate alternation between steady-state car-following and dense traffic, while brief returns to Scenario #1 capture intermittent stop-and-go phases. These dynamic scenario changes are mirrored by shifts among cautious, responsive, and congestion-aware driving regimes. The frequent reappearance of Regime #1 during congested intervals suggests that drivers adopt defensive behaviors to maintain safety under uncertain and variable traffic conditions.
In Case III (Fig. 10), aggressive driving regimes—Regime #2 (Aggressive Braking) and Regime #5 (High-Speed
Figure 10 – Case III: Comparison of human-driver acceleration (solid black) and FHMM-IDM predicted acceleration (solid red) with inferred driving regimes (background shading, top), and corresponding speed (v, blue), gap (s, green), and relative speed ( $\Delta v$ , red) with inferred traffic scenarios (background shading, bottom).
Cruising)—dominate throughout the observed period. These regimes occur primarily under alternating conditions between Scenario #5 (high-speed cruising) and Scenario #4 (congested and dense traffic). This relatively simple transition pattern (Scenario $\# 5 \to \# 4$ ) reflects a driver who persistently adopts assertive control strategies to manage speed and spacing. The consistent preference for aggressive regimes under both free-flow and denser traffic conditions suggests a strong intent to maintain efficiency and assertiveness in car-following behavior.
Figure 11 – Case IV: Comparison of human-driver acceleration (solid black) and FHMM-IDM predicted acceleration (solid red) with inferred driving regimes (background shading, top), and corresponding speed (v, blue), gap (s, green), and relative speed (∆v, red) with inferred traffic scenarios (background shading, bottom).
Lastly, Case IV (Fig. 11) illustrates a more intricate interplay among Regime #1 (Cautious Following), Regime #3 (Congested Cruising), and Regime #4 (Steady-State Following). The associated traffic scenarios transition through the sequence Scenario $\# 1 \to \# 3 \to \# 4 \to \# 3$ . Initially, Scenario #1 captures a congested, stop-and-go context, followed by a transition into Scenario #3, denoting moderate speed with stable spacing indicative of steady-state following. As traffic becomes denser and spacing narrows, Scenario $\# 4$ arises, prompting a shift to more responsive driving adjustments. The final return to Scenario #3 suggests relief from congestion and restoration of steady-state conditions. These transitions underscore the driver’s adaptive strategy, modulating between cautious, stable, and responsive regimes in response to the evolving traffic environment.
Taken together, these two layers of latent states, driving regime $( k ^ { \left[ \mathrm { B } \right] } )$ and traffic scenario $( k ^ { [ \mathrm { S } ] } )$ , demonstrate how the FHMM-IDM framework captures the nuanced interplay between intrinsic driver behavior and the surrounding traffic environment. The detailed case analyses underscore the interpretability of the proposed model, showing how drivers dynamically transition between distinct behavioral modes in response to evolving traffic contexts. By disentangling internal decision patterns from external conditions, the FHMM-IDM enhances the realism and behavioral richness of microscopic traffic simulations. | Accurate and interpretable car-following models are essential for traffic
simulation and autonomous vehicle development. However, classical models like
the Intelligent Driver Model (IDM) are fundamentally limited by their
parsimonious and single-regime structure. They fail to capture the multi-modal
nature of human driving, where a single driving state (e.g., speed, relative
speed, and gap) can elicit many different driver actions. This forces the model
to average across distinct behaviors, reducing its fidelity and making its
parameters difficult to interpret. To overcome this, we introduce a
regime-switching framework that allows driving behavior to be governed by
different IDM parameter sets, each corresponding to an interpretable behavioral
mode. This design enables the model to dynamically switch between interpretable
behavioral modes, rather than averaging across diverse driving contexts. We
instantiate the framework using a Factorial Hidden Markov Model with IDM
dynamics (FHMM-IDM), which explicitly separates intrinsic driving regimes
(e.g., aggressive acceleration, steady-state following) from external traffic
scenarios (e.g., free-flow, congestion, stop-and-go) through two independent
latent Markov processes. Bayesian inference via Markov chain Monte Carlo (MCMC)
is used to jointly estimate the regime-specific parameters, transition
dynamics, and latent state trajectories. Experiments on the HighD dataset
demonstrate that FHMM-IDM uncovers interpretable structure in human driving,
effectively disentangling internal driver actions from contextual traffic
conditions and revealing dynamic regime-switching patterns. This framework
provides a tractable and principled solution to modeling context-dependent
driving behavior under uncertainty, offering improvements in the fidelity of
traffic simulations, the efficacy of safety analyses, and the development of
more human-centric ADAS. | [
"stat.AP",
"cs.LG",
"cs.RO"
] |
# 1 Introduction
Human-written texts vary widely in terms of length, style, communicative intent, lexical/syntactical choices, and numerous other dimensions (Giulianelli et al., 2023; Liu and Zeldes, 2023; Rezapour et al., 2022; Baan et al., 2023). Such variation poses a significant challenge in the evaluation of summarization systems (Lloret et al., 2018; Celikyilmaz et al., 2021). Traditional summarization metrics typically rely on comparing system outputs to one or more references, treating these references as a “gold standard”. Although the limitations of reference-based metrics have long been acknowledged (Rankel et al., 2013; Louis and Nenkova, 2013; Reiter, 2018; Peyrard, 2019; Fabbri et al., 2021; Goyal et al., 2023), they remain widely popular due to their simplicity, low compute requirements, relative ease of adaptation to different languages, and reproducibility. In 2024, out of 21 ACL papers mentioning “summarization” in their title, 19 $( 9 0 \% )$ include a reference-based metric in their evaluation, with ROUGE (Lin, 2004) being the most common $( 7 1 \% )$ , followed by BERTScore (Zhang et al. $2 0 2 0 , 5 2 \%$ .
Figure 1: Human-written summaries are diverse. Using a human-written reference instead of another makes evaluation metrics fluctuate, and affects model ranking.
The assumption behind the use of referencebased metrics is that systems outputs that are more similar to the reference(s) are better, due to their “human-likeness” (Gehrmann et al., 2023). However, the significant variation in human-written summaries implies that evaluating system outputs against a single or limited set of references has inherent drawback. Previous research has extensively looked at correlation between metrics and human judgments in summarization, further exploring the use of multiple references to improve such correlation (Lin, 2004; Belz and Reiter, 2006; Fabbri et al., 2021; Tang et al., 2024). However, a much less studied question is the extent to which automatic metrics are sensitive to the choice of human-written reference summaries, as shown in Figure 1. In other words, are these metrics stable across different plausible gold-standard references? If metric scores vary significantly with the selected reference(s), this variation calls into question the reliability of many evaluation practices in the field.
In this work, we quantify the impact of reference choice on automatic evaluation metrics for summarization. Our contributions are as follows:
[1] We investigate how different reference sets affect system rankings. Our results show that system rankings based on n-grammatching metrics (e.g., ROUGE) strongly depend on the choice of the reference(s), undermining the reliability of model comparisons. However, rankings based on more semantically-oriented metrics exhibit greater stability.
[2] We examine the robustness of widely-used reference-based metrics at the instance and dataset level. Our analysis reveals that the variation in scores introduced by the choice of reference on a dataset often exceeds the variation observed in state-of-theart (SOTA) summarization models.
[3] We collect new human judgment scores on Large Language Model (LLM) outputs for the genre-diverse GUMSum (Liu and Zeldes, 2023) dataset. We use these data to reassess the correlation between automatic metrics and human judgments, complementing earlier SummEval evaluation (Fabbri et al., 2021) limited to pre-LLM models and newswire data. We find that correlations tend to increase with the number of references, and that the metric with the highest correlation varies depending on the evaluation dimension and the number of references.
Our analysis reveals that few metrics tend to show reasonable correlation with human judgments and robustness to the reference sets, especially when scoring LLM outputs.
# 2 Related Work
Summarization Evaluation. Recent advances in Natural Language Generation (NLG) have significantly enhanced the development of automatic summarization systems. However, their evaluation remains an open problem (Celikyilmaz et al., 2021; Goyal et al., 2023). Summarization evaluation metrics are broadly categorized into reference-based and reference-free (Lloret et al., 2018). Reference-based metrics compare system outputs to human-written reference summaries, relying on methods such as n-gram overlap (Lin, 2004; Papineni et al., 2002), embedding similarity ( $\mathrm { N g }$ and Abrecht, 2015; Zhao et al., 2019; Zhang et al., 2020), or model-based evaluation techniques (Peyrard et al., 2017; Scialom et al., 2019; Yuan et al., 2021). In contrast, referencefree summarization metrics do not assume a gold standard (Yuan et al., 2021; Vasilyev et al., 2020; Gao et al., 2020). Lastly, growing research starts to leverage LLMs as evaluators, with or without references (Liu et al., 2023; Song et al., 2024; Li et al., 2024).
Metrics Meta-Evaluation. Meta-evaluation of summarization metrics typically focuses on the extent to which they can be used as a proxy for human evaluation. Reiter and Belz (2009) examined the validity of automatic scores for NLG tasks, while Rankel et al. (2013) focused on ROUGE and its correlation with human judgments. Peyrard (2019) showed that metrics with reasonable correlation on lower-quality outputs tend to diverge when output quality increases. Caglayan et al. (2020) demonstrated various idiosyncrasies of automatic evaluation metrics, noting that high correlation with human judgments is not sufficient to characterize their reliability. Fabbri et al. (2021) performed a large-scale meta-evaluation of summarization metrics, and found that most metrics have low correlation with human judgments on coherence, while relevance is weakly or moderately correlated.
While most existing research focused on correlation to human scores, Tang et al. (2024) addressed the challenge of evaluation when a limited number of references is available. They proposed leveraging LLMs to diversify the references, expanding the evaluation coverage and improving the correlation with humans. Their results show that increasing the number of references significantly enhances the reliability of existing evaluation metrics in terms of correlation. However, since LLM outputs tend to show less variability and follow distinct patterns compared to humanproduced content (Giulianelli et al., 2023; Guo et al., 2024; Shur-Ofry et al., 2024; Reinhart et al., 2025), relying on them to replace human references might introduce biases.
# 3 Experimental Setup
To quantify the impact of different human-written references on the scores of automatic metrics, we exam multiple elements. For datasets, we use SummEval (Fabbri et al., 2021), GUMSum (Liu and Zeldes, 2023), and DUC2004 (Dang and Croft, 2004), which contain multiple humanwritten summaries (§3.1), to assess how different reference summaries affect metric performances. Next, to assess summarization models, we use the existing outputs provided by Fabbri et al. (2021) for SummEval. As these outputs predate LLMs, we additionally collect outputs using LLMs (§3.2) for all three datasets. Lastly, to compute the correlations with humans, we use the human judgments available in SummEval and gather new human ratings for GUMSum on both human and LLM-generated summaries (§3.3). We prioritize GUMSum over DUC2004, as it includes multiple genres beyond news data. We comply with the license of the existing datasets. For newly collected model outputs and human judgments, we follow the license of the corresponding underlying datasets. Our metric selection is outlined in $\ S 3 . 4$ .
in the form of short sentences or lists of keywords.2 DUC2004 references are thus extremely concise (only up to 75 characters). The dataset has played a significant role in summarization research and was part of the annual TREC conference evaluation.
# 3.2 Model Outputs
Fabbri et al. (2021) collected model outputs for SummEval from 24 extractive and abstractive summarization systems, which were SOTA between 2017 and 2019. We focus on the 16 models for which they provided human judgments.
For all datasets, we also include summaries generated by contemporary LLMs. This is crucial given that prior studies have demonstrated that evaluation metrics often show lower correlation with high-quality outputs (Peyrard, 2019; AlvaManchego et al., 2021). We see a similar pattern for LLMs (§4.4). For consistency purposes, we follow Lin and Zeldes (2025) and use Llama3- 3B-Instruct (Hermann et al., 2015), Qwen-2.5- 7B-Instruct (Qwen et al., 2025), Claude-3.5 (Anthropic, 2024), and GPT-4o (OpenAI, 2024). For each LLM, we generate a single summary. This way, we emphasize LLM variety over multiple generations. Details on the generation parameters and prompts etc. are reported in Appendix A.
# 3.1 Human-written Summaries
Table 1 provides an overview of the three datasets.
SummEval (Fabbri et al., 2021) is built on top of CNN/DM (Hermann et al., 2015; Nallapati et al., 2016), containing news articles and human-written highlights. Authors selected 100 instances from the test set; for these, in addition to the highlight-based summary in CNN/DM, ten references were crowd-sourced (Kryscinski et al., 2019).
GUMSum (Liu and Zeldes, 2023) contains summaries created following general and genrespecific guidelines1 to function as a substitute for the source (Nenkova and McKeown, 2011). We focus on the 48 documents in the dev and test sets, which contain five human-written summaries each (Lin and Zeldes, 2025), evenly distributed across 12 genres.
DUC2004 Task1 (Dang and Croft, 2004) consists of 489 news documents, most with four references. The guidelines allow the summaries to be
# 3.3 Human Judgments
SummEval (Fabbri et al., 2021) contains expert judgments that assess summaries based on four criteria: coherence, consistency, fluency, and relevance, using a Likert scale of 1-5 (Likert, 1932).
To measure how well automatic metrics align with human judgments in different genres, and to study whether findings on pre-LLM models align with those on LLM outputs, we conduct a human evaluation on the 48 GUMSum documents. We recruited three Master’s students in Computational Linguistics and instructed them to evaluate four LLM outputs (§3.2) and five human references, following Fabbri et al. (2021)’s criteria. LLM-generated and human-written summaries were anonymized and shuffled. We also asked the evaluators to pick one best and one worst summary for each document. Table 2 reports the results. We note that overall, Claude scored the best. GPT-4o gets the highest consistency but the lowest coherence and relevance, and thus the least picked LLM output. Interestingly, LLM outputs typically receive higher scores than human-written references.
Table 1: Multi-reference summarization datasets. #sums indicates the number of human-written references per instance. We generate outputs using four LLMs and collect a new set of human judgments for GUMSum.
Table 2: Human judgments on GUMSum for system-generated versus human-written summaries. best and worst indicate the percentage of evaluators who voted the summarizer as best/worst averaged over documents.
# 3.4 Evaluation Metrics
We examine the following reference-based metrics chosen due to their widespread use. All metric ranges fall in 0-100. Appendix B provides details.
ROUGE (Lin, 2004) is the most popular metric for summarization. ROUGE-N computes n-gram overlap between a hypothesis and the references. ROUGE-L leverages the longest common subsequence, accounting for the word order. When evaluating with multiple references, ROUGE considers either the maximum or the mean of the n-gram overlap $( \mathrm { R O U G E } _ { \operatorname* { m a x } }$ and $\mathrm { R O U G E } _ { \mathrm { a v g } . }$ ). We report the F1-score.
BLEU (Papineni et al., 2002) is an n-gram overlap metric primarily used to assess translations. It is precision-based and incorporates a brevity penalty. When multiple references are provided, the n-gram count is clipped at the maximum count of n-grams in a single reference and the length of the reference closest in size to the hypothesis is considered.
METEOR (Banerjee and Lavie, 2005) incorporates multiple linguistic aspects, including synonym matching, stemming, and word order, making it more robust in capturing semantic equivalence. While primarily designed for translation, it has also been used to assess summaries. With multiple references, the maximum score is considered.
BERTScore (Zhang et al., 2020) leverages pretrained contextual embeddings and considers the cosine similarity between the embeddings of the hypothesis and the reference tokens. With multiple references, the final score is the maximum among the individual scores. We report the F1 score.
BLEURT (Sellam et al., 2020) is a modelbased metric that leverages a pre-trained BERT fine-tuned on human judgments. The metric is not designed to handle multiple references; thus, we compute individual scores for each reference and consider the maximum value.
# 4 Reference Variability and Metrics Robustness
Reference-based metrics assume that more human-like outputs deserve higher scores. However, human summaries are very diverse. This section examines how metrics fluctuate with different human references. By analyzing metric robustness, we aim to understand how conclusions about models, drawn from reference-based metrics, might change when different sets of human-written references are used, thereby undermining evaluation reliability.
# 4.1 Human-written Summaries are Diverse
Human-written summaries show substantial diversity. We assess the variability in the multireference datasets following Giulianelli et al. (2023). For each pair of human-written summaries for the same instance in the three datasets, we report the lexical similarity (the overlapping distinct n-grams between two strings), the syntactic similarity (the overlap of part-of-speech tag n-grams), and the semantic similarity (the cosine and euclidean similarity between the embeddings of the two strings).
Figure 2 shows these variations. At the dataset level, DUC and SummEval show the lowest similarity among human-written summaries across all dimensions. For GUMSum, summaries are more similar to each other. We hypothesize that this is likely due to the constrained annotation guidelines.3 It is also worth noting that the similarities revealed here are between different human-written summaries for a given instance as opposed to summaries across genres, for which we still expect significant variations, as demonstrated by Liu and Zeldes (2023). Overall, summaries tend to be similar at the syntactic level, less so at the semantic and lexical level. We also observe that LLM outputs show lower diversity (Appendix C), consistently with previous work (Giulianelli et al., 2023).
Figure 2: Variation in human-written summaries across datasets, measures inspired by Giulianelli et al. (2023).
# 4.2 Automatic Metrics Fluctuate Substantially at the Instance Level
Given the diversity in human-written summaries, we aim to quantify metric fluctuations at the instance level when using a different set of humanwritten references. For an automatic evaluation metric $M$ and a set of human-written references $R ~ = ~ \{ r _ { 1 } , r _ { 2 } , . . . , r _ { N } \}$ , we compute $M ( r _ { i } , R -$ $\{ r _ { i } \} )$ . In other words, for each document, we score each human-written summary using all the others as the reference set. Figure 3 exemplifies the observed variability at the instance level measured by ROUGE- $\mathrm { \cdot L _ { a v g } }$ on the three datasets. For SummEval, we also highlight the scraped original reference in the CNN/DM dataset with a cross. The quality of these scraped references versus the ten crowd-sourced ones is discussed further in Appendix D.
Scores assigned to human-written summaries are often low. For example, the averaged ROUGE-Lavg scores are $2 8 . 5 2 _ { \pm 5 }$ , $2 7 . 4 6 _ { \pm 3 }$ $2 4 . 8 8 _ { \pm 5 . 3 }$ for SummEval, GUMSum, and DUC2004. Given the assumption that human reference summaries are of high quality (i.e., “gold”), metrics should produce high scores.
Figure 3: Instance-level variation for ROUGE- $\mathbf { \cdot L _ { a v g } }$ . For every document (shown first 30, one per line), we plot the score for every human-written reference against all other references (using the same color per source to aid interpretation). The original CNN/DM reference in SummEval is marked by a cross.
Instead, they do not typically reflect this property.
Human-written references scores vary widely. Figure 4 summarizes the instance-level variability of the individual scores (in Figure 3) for all evaluation metrics on SummEval (corresponding figures for GUMSum and DUC are in Appendix E). For each metric, we compute the range (i.e., the difference between the maximum and the minimum score) when scoring human-written references against all the others $( M ( r _ { i } , R - \{ r _ { i } \} ) )$ . Figure 4 shows the histogram of such ranges. Note that the ranges of variation observed within human-written references are on average very high.
Figure 4: Ranges of variability at the instance level on SummEval. For each instance, we compute the range of the scores of the references against the remaining ones. The trends for $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ and $\mathtt { R O U G E } _ { \mathrm { a v g } }$ are similar.
# 4.3 System Ranking Depends on the Reference(s) for n-gram-based Metrics
While we observed variability at the instance level, summarization metrics are typically designed to evaluate models across entire datasets, rather than individual instances. In this section, we investigate to what degree standard summarization metrics can handle the variability observed in humanwritten references when ranking summarization systems.
Understanding the magnitude of such a range might not be obvious. For instance, an increase of 10 points of BERTScore (which typically returns scores compressed in the high range of the scale) might indicate a much larger improvement in performance than an increase of 10 points of ROUGE-1.4 To contextualize the magnitude of variation for each metric, we also report the range of performance of summarization systems. Thus, for a model $S$ , given its output $o _ { i }$ for instance $i$ , we score it through $M ( o _ { i } , R )$ . Although these values are not directly comparable and should be interpreted with caution due to the use of different reference sets, they help contextualize the magnitude of the results and its potential impact on evaluation. For example, ROUGE- $1 _ { \mathrm { m a x } }$ assigned to human-written references vary by about 35 points on average (the green dashed line in Figure 4). In contrast, the mean range is less than 20 points across all model outputs (orange line), and much lower for LLM-generated outputs (blue line). These findings highlight the significance of the observed variability and suggest that the ranking of summarization models is highly sensitive to the reference set.
Procedure. We sample $k$ humans-written references $( k \in [ 1 , N ]$ , where $N$ is the number of references for each document) from all available references for each instance. We then score the outputs of each summarization system using the same set of references. Given $M$ systems $S _ { a } , S _ { b } , \ldots , S _ { M }$ , the metric induces a ranking $S _ { a } \succ S _ { b } \succ \cdot \cdot \cdot \succ$ $S _ { M }$ . This process is repeated 100 times, yielding 100 resulting rankings. We compute the pairwise Kendall rank correlation coefficient (Kendall, 1938) between such ranks. High correlation indicates that different sets of references lead to similar model ordering, even when different sets of references are used. Figure 5 reports the average correlation for pairs of ranks for each dataset and metric, from using a $k$ human-written summaries as references. $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ is shown in Figure 5, and $\mathtt { R O U G E } _ { \mathrm { a v g } }$ is reported in Figure 11 in Appendix F.
Single Reference. Evaluating with a single reference is common in summarization, as most datasets provide only one human-written summary. Figure 5 (looking at $k \ = \ 1 \mathrm { \cdot }$ shows the stability of different metrics with a single reference across the three datasets. We find that BLEU and ROUGE have very weak to moderate correlation between ranks across different references. In other words, using two different sets of plausible references would likely lead to different conclusions on relative model performance. We also notice a large variability among the individual pairs of rankings, with some showing negative correlation (refer to Table 4 in Appendix F for results on individual metrics and datasets).
Figure 5: Rank stability when increasing the number of references. $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ is presented. Note that we use different ranges for the y axis for each dataset to improve readability.
In contrast, more semantically-oriented metrics show a greater stability. For SummEval, BLEURT shows the highest correlation between ranks, followed by METEOR and BERTScore. BLEURT and METEOR confirm their stability on GUMSum when ranking the LLM outputs. Other metrics (including BERTScore) show low or no correlation on GUMSum, with the exception of ROUGE-1. In all cases, metrics show much higher stability on DUC, for which all average correlations are above 0.7. We speculate that the high stability might be due to an artifact introduced by the short length of the summaries, required by the guidelines.
In summary, n-gram-matching metrics, though simple, are highly reference-dependent, undermining consistent model evaluation, while semantically-oriented ones show greater stability. Therefore, we recommend always using modelbased metrics in benchmarks with a single reference. When cost is a factor, METEOR might offer a good balance of stability and affordability.
Multiple References. When scoring model outputs against a set of $k > 1$ randomly sampled references, we observe that the correlation between rankings obtained with different humanwritten references generally improves when increasing the number of references. This increased stability is expected and in line with similar findings that associate a larger number of references with a higher correlation with humans (Lin, 2004).
However, the stability varies by metric. ROUGE (especially $\mathrm { R O U G E } _ { \operatorname* { m a x } } ,$ ) and BLEU tend to have lower correlation between ranks than other metrics. As an example, the $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ scores require 5-10 references to reach a level of stability that is comparable to that of BERTScore on SummEval with a single reference. ROUGEavg has a better stability than $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ , especially with a larger set of references. For example, on SummEval, ROUGE- $\mathrm { \cdot L _ { a v g } }$ has higher stability than BERTScore for $k > 3$ , while on GUMSum, ROUGE- $2 _ { \mathrm { a v g } }$ is the second most stable metric for $k > 3$ . On all datasets, BLEURT and METEOR remain stable even with a single reference, with METEOR showing remarkable stability despite its simplicity.
In general, trends on SummEval are clearer and simpler to interpret than the other two datasets. We speculate that this is due to the larger number of models used (16 pre-LLM models $+ 4$ LLMs on SummEval vs 4 LLMs on GUMSum and DUC). BLEURT, METEOR, and BERTScore show the highest stability, while n-gram-based metrics show low to average correlation between ranks even when multiple references are used. The cases of GUMSum and DUC2004 are more complex to interpret and might be less meaningful given fewer model outputs (i.e., only four LLM outputs). For GUMSum, BLEURT continues to show high inter-rank correlation, with METEOR being the second most stable. BERTScore, on the other hand, shows poor stability. Similar to the case with $k = 1$ , on DUC2004, all metrics show high stability, likely due to summaries being very short, as dictated by the guidelines.
# 4.4 Correlation with Human Judgments
In addition to stability, automatic metrics should correlate with human judgments. We compute correlations for SummEval and GUMSum, for which we have human judgments,5 at the instance and system level as the number of references $k$ increases.6
Instance-level Correlation. Figure 6 reports the instance-level correlation for SummEval (top) and GUMSum (bottom), respectively, versus number of references. We show $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ ; corresponding figures using $\mathtt { R O U G E } _ { \mathrm { a v g } }$ are in Appendix G.
We notice weak-to-no correlation on both datasets. All correlations are generally higher on SummEval (where we consider outputs from the pre-LLM era) than on GUMSum (where we consider LLMs), in accordance to previous work showing that correlation with human judgments decreases as the quality of the outputs improves (Peyrard, 2019). For SummEval, increasing the number of references consistently leads to better correlation. This effect vanishes on GUMSum, where a larger reference set leads to no effect or slightly lower correlation. For SummEval, BERTScore shows the highest correlation on all dimensions but consistency, for which METEOR and $\mathtt { R O U G E } _ { \mathrm { a v g } }$ are a better proxy. Notice how the best metric in terms of correlation with human judgment depends on the considered criterion and the available number of references: BLEURT, for example, typically has low correlation when considering one reference only, performing worse than ROUGE. However, its performance improves when more references are considered, surpassing the scores of n-gram-based metrics.
System-level Correlation. System-level correlation is generally higher than instance-level correlation on SummEval; however, many criteria still show weak to moderate correlation when one or very few references are included. In most cases, such correlation tends to improve with the number of references. This is not the case for $\mathrm { R O U G E } _ { \operatorname* { m a x } }$ , especially when considering consistency. The full results are provided in Figure 13 in the Appendix G. GUMSum is excluded from this analysis due to the small number of systems available. | Human language production exhibits remarkable richness and variation,
reflecting diverse communication styles and intents. However, this variation is
often overlooked in summarization evaluation. While having multiple reference
summaries is known to improve correlation with human judgments, the impact of
using different reference sets on reference-based metrics has not been
systematically investigated. This work examines the sensitivity of widely used
reference-based metrics in relation to the choice of reference sets, analyzing
three diverse multi-reference summarization datasets: SummEval, GUMSum, and
DUC2004. We demonstrate that many popular metrics exhibit significant
instability. This instability is particularly concerning for n-gram-based
metrics like ROUGE, where model rankings vary depending on the reference sets,
undermining the reliability of model comparisons. We also collect human
judgments on LLM outputs for genre-diverse data and examine their correlation
with metrics to supplement existing findings beyond newswire summaries, finding
weak-to-no correlation. Taken together, we recommend incorporating reference
set variation into summarization evaluation to enhance consistency alongside
correlation with human judgments, especially when evaluating LLMs. | [
"cs.CL"
] |
# 1 Introduction
Artificial intelligence-assisted radiology informatics (AIRI) remains challenging to deploy due to the complexity of radiographs (e.g., Xrays with non-distinctive features among overlapping diseases) and the subjective nature of radiologists’ reports [12]. Some examples of AIRI include image segmentation [2], image classification [1], and report generation from radiographs [20]. To process radiographs, early work used convolutional neural network (CNN) variants, like CNN-RNN, to extract image features [19]. Vision transformers (ViTs), introduced by Dosovitskiy et al. [8], are more popular and can capture global features more effectively than CNNs. Meanwhile, radiology reports are processed by language models like BERT [21] to extract semantic text features and classify reports into disease categories. However, many studies have demonstrated that combining image and textual features in a contrastive alignment outperforms unimodal approaches [3, 4, 28]. Thus, vision-language models (VLMs) have been proposed. One limitation inherent to the contrastive nature of VLMs is that they require large amounts of training data to learn effective image–text pair representations [15]. Although VLMs excel at tasks involving distinct and well-separated multi-class classification, their performance can degrade when classifying limited data with complex or closely related classes (e.g., distinguishing ‘Pneumonia’ from ‘Consolidation’ in biomedical datasets) [3, 22]. Biomedical datasets exhibit complex relationships, multi-label dependencies, and extreme class imbalance. Moreover, rare diseases remain underrepresented, leading to low detection by automated systems, i.e., machine learning models [24]. To address this issue, researchers have proposed domain-specific pretraining of VLMs—training them on tailored, domain-specific datasets—to enhance performance on such tasks [6]. However, this specialized pretraining may compromise domain generalization, as models optimized for a particular domain might perform less effectively on out-of-distribution (OOD) data [26]. Therefore, we pose the following research question:
Can large, pretrained vision–language models accurately classify images in a multi-label, imbalanced, out-of-distribution biomedical dataset?
Based on this, we define two research objectives:
(1) To quantitatively analyze the inter- and intra-class distances in the learned embeddings of the vision–language model.
(2) To evaluate the model’s performance limitations on a highly imbalanced, out-of-distribution, multi-label medical dataset using a multi-faceted set of performance metrics.
We hypothesize that:
Although zero-shot inference with large pretrained vision–language models provides a strong baseline on multi-label, imbalanced, out-of-distribution biomedical datasets, moderate adaptation strategies (e.g. linear probing) will yield further performance gains at a reasonable computational cost—substantially lower than that required for full end-to-end fine-tuning
To justify our point, we experiment with BiomedCLIP [28], an open-source VLM trained on 15 million medical image–text pairs (the PMC-15M dataset). To date, BiomedCLIP outperforms the radiology-specific BioViL model [5] on the RSNA pneumonia detection benchmark and achieves a mean accuracy of $7 5 . 5 \%$ across five zero-shot classification datasets, a 12-point improvement over general-domain CLIP. It also achieves a top-1 recall of $56 \%$ in crossmodal retrieval over the PMC-15M test set. Finally, on medical VQA (VQA-RAD), it attains $7 5 . 2 \%$ accuracy—surpassing the former best of $6 8 . 5 \%$ —further confirming its broad, state-of-the-art performance across classification, retrieval, and reasoning tasks.
We demonstrate the overall workflow in Fig. 1. We evaluate BiomedCLIP on the IU-xray dataset [7], a 14-label multi-class benchmark that is highly imbalanced $( 2 , 4 0 0 ~ ^ { \circ } \mathrm { N o }$ Finding” vs. 20 “Pneumothorax” samples). Morever, BiomedCLIP is not pretrained on this dataset, which renders it as OOD. We assess its performance under three model adaptations: zero-shot inference, full fine-tuning, and linear probing. These three adaptation span a continuum of computational cost and performance trade-offs [9]. Zero-shot is the go-to method that does not require in-domain training due to its massive pretrained knowledge representations. It is computationally less expensive than full fine-tuning, which requires more computational resources (e.g. GPU memory), to retrain the weights of the entire network [16]. Conversely, linear probing freezes the encoder and trains only a lightweight classification head, offering a low-compute adaptation that often yields substantial accuracy gains while preserving the quality of the pretrained representations by the pretrained model.
For each settings (or adaptations), we compute per-class precision, recall, and F1, as well as overall multi-label metrics (macro-F1, exact-match accuracy, LRAP, coverage error, ranking loss). We also quantify embedding-space separability via the inter/intra-class Euclidean-distance ratio, and we visually inspect its explanations via Grad-CAM [18]. These evaluation metrics are standard for assessing both detection quality and ranking performance in multilabel classification [27]. Additionally, we obtained 15 radiologistannotated radiographs, enabling direct visual comparison with our Grad-CAM heatmaps. This represents a crucial step toward validating model interpretability in real-world clinical settings.
We observe two notable findings from our experiments:
(1) Full fine-tuning exhibits a higher inter-/intra-class distance ratio than zero-shot inference and linear probing, which is counterintuitive – one would normally expect end-to-end tuning to yield superior class separability. Interestingly, linear probing achieves a comparable ratio to zero-shot inference.
(2) Zero-shot BiomedCLIP produces significant false positives and low precision for rare diseases (i.e., rare disease. While full fine-tuning improves classification of well-represented diseases, linear probing enhances detection of rare-class features; notably, its overall performance is on par with that of full fine-tuning.
The paper is organized as follows: we present recent related literature in Section 2. Then, we describe the data properties of the dataset used in this research in Section 3, and elaborate our methodology to conduct the experiments in Section 4. We discuss the results and findings in Section 5. Finally, we share our thoughts and future directions in Section 6.
# 2 Related Work
Vision–Language Models (VLMs) in Biomedicine. Recent foundation models such as CLIP showed that aligning images and text in a shared embedding space can yield remarkable performance on vision tasks without task-specific training, enabling capabilities like zero-shot image classification and cross-modal retrieval. Researchers have extended VLMs to specialized domains such as biomedicine, where datasets are multimodal but often limited and lack ground-truth labels [13].
Efforts to adapt VLMs to biomedical data have focused on selfsupervised learning from medical images and associated text, such as radiology reports. For example, ConVIRT [30] trained dual image/text encoders on paired chest X-rays and radiology reports using bidirectional contrastive learning, achieving improved transfer learning for medical image classification. GLoRIA [11] introduced a global–local alignment mechanism: in addition to matching wholeimage and report embeddings, it uses cross-attention to link image sub-regions with semantic phrases in the report, thereby capturing pathology-specific visual details and improving interpretability. These domain-specific pretraining approaches demonstrated that medical image representations benefit from joint text supervision, yielding higher downstream accuracy than vision-only counterparts [23].
Large-scale biomedical VLMs such as BioViL [3] combined a radiology-tuned language encoder with a vision backbone in a contrastive framework, using millions of hospital image–report pairs. BioViL achieved state-of-the-art results on multiple chest X-ray benchmarks (e.g., abnormality classification and natural-language inference) by tailoring the text encoder to clinical language. Similarly, MedCLIP [23] leveraged unpaired medical images and text: it decoupled image–text corpora to create synthetic pairings and introduced a semantic matching loss to handle noisy correspondences. MedCLIP proved remarkably data-efficient—using only $1 0 \%$ of the usual pretraining samples, it surpassed prior radiology VLMs like GLoRIA on zero-shot label prediction and supervised classification.
These biomedical VLMs have been evaluated on a wide range of medical imaging tasks, often outperforming conventional methods. For image classification, BiomedCLIP set new state-of-the-art results on standard radiology tasks. While these models demonstrate the feasibility of multimodal diagnostic reasoning, a sizable gap remains between their performance and that of human radiologists [25], underscoring the need for domain-specialized VLMs and careful evaluation on clinically realistic tasks.
Adaptation Strategies: Zero-Shot, Fine-Tuning, and Linear Probing. A crucial question in applying VLMs to biomedical tasks is how best to adapt the pretrained model to target data. At one extreme, VLMs can be used in a zero-shot manner—treating each disease as a text prompt (e.g., “X-ray showing pneumonia”) and selecting the label whose embedding best matches the image embedding [23]. However, zero-shot accuracy often lags behind supervised methods, especially for subtle or rare findings, due to phrasing mismatches and limited exposure to specific visual manifestations.
At the other extreme, full fine-tuning on in-domain labels usually yields the highest accuracy, as shown by BiomedCLIP and other models fine-tuned for chest X-ray classification or segmentation [29]. Yet, full fine-tuning of a large multimodal model is computationally expensive and risks overfitting when datasets are small. As a middle ground, linear probing—training only a lightweight classifier on frozen image embeddings—has emerged as an efficient adaptation: it often recovers much of the performance gap to full fine-tuning at a fraction of the computational cost [17]. Overall, the consensus is that naive zero-shot inference is suboptimal in medicine, and minimal task-specific adaptation—via fine-tuning, linear probing, or learned prompts—is typically required to capture the fine-grained, domain-specific nuances of biomedical datasets.
Limitations of Existing Biomedical VLMs. Despite rapid progress, biomedical VLMs still face key challenges in clinical deployment. One major issue is domain shift: models trained and evaluated on similar datasets (e.g., MIMIC-CXR) can degrade markedly when confronted with out-of-distribution data from different hospitals, patient populations, or imaging modalities.
Another limitation is interpretability. In high-stakes medical contexts, clinicians must understand why a model made a given prediction. Most VLMs function as black boxes, with limited built-in explainability. Some methods, such as GLoRIA, provide attention maps linking words to image regions, and others employ GradCAM post-hoc to highlight salient areas. However, recent audits indicate that these saliency maps are often misaligned with true pathology locations [14], which can undermine clinician trust.
Our Contributions. In light of these gaps, our work pushes the boundary on evaluating and interpreting a biomedical VLM under challenging conditions. Whereas prior studies report only aggregate performance, we conduct a fine-grained analysis on a highly imbalanced, out-of-distribution radiography dataset (IU-Xray) to assess how BiomedCLIP handles both common and rare findings. We compare adaptation regimes—zero-shot, linear probing, and full finetuning—and reveal nuances such as linear classifiers outperforming end-to-end tuning on mid-frequency diseases. Unlike earlier work that treats VLMs as black boxes, we integrate embedding-space analysis and radiologist-validated Grad-CAM heatmaps to deliver a more transparent evaluation, helping bridge the gap between bench-top performance and trustworthy clinical deployment.
# 3 Dataset Description
We evaluate BiomedCLIP on the IU-xray dataset developed by Indiana University [7] in 2017. The dataset has 7,470 radiographs (frontal and lateral) of the chest and 3,955 reports of 3,851 patients from two large hospital systems within the Indiana Network for Patient Care database. All identifiable information of patients is anonymized. IU-xray dataset is one of the benchmarks used in several radiology report analysis and generation tasks. More importantly, it is one of the smaller datasets which is often not used for training VLMs.
The dataset is unlabeled. Therefore, we apply Standford’s pretrained BERT model, CheXbert [21], to label each report as one or more of the 14 chest disease categories. CheXbert outperforms previous labelers for chest radiographs with $8 0 \%$ validated accuracy. The categories extracted (refer to the labels in Fig. 2) from CheXbert are: Enlarged Cardiomediastinum, Cardiomegaly, Lung Opacity, Lung Lesion, Edema, Consolidation, Pneumonia, Atelectasis,
Figure 2: Data sample distribution of 14 disease classes of IU-xray dataset.
Pneumothorax, Pleural Effusion, Pleural Other, Fracture, Support Devices, and No Finding. Fig. 2 demonstrates the highly imbalanced class distribution in the dataset, with ‘No Finding’ being the majority class and ‘Pneumothorax’ being the minority class.
Additionally, to quantify the model’s classification on broader categories, we assign the following domain specific labels (verified correctness by a radiologist): i) Cardiovascular – Enlarged Cardiomediastinum, Cardiomegaly, ii) Skeletal – Fracture, iii) Device – Support Devices, and iii) Pulmonary – Pneumonia, Consolidation, Atelectasis, Pneumothorax, Pleural Other, Pleural Effusion, Edema, Lung Opacity, Lung Lesion.
# 4 Methodology
There are three components to implement and evaluate BiomedCLIP, as presented in Fig. 1. We denote radiograph $( r _ { g } )$ , CheXbert labeled reports $( r _ { r } )$ , and associated diseases $\left( r _ { t } \right)$ as triplets in the IU-Xray dataset as $( r _ { g 1 } , r _ { r 1 } , r _ { t 1 } ) , ( r _ { g 2 } , r _ { r 2 } , r _ { t 2 } ) , . . ( r _ { g n } , r _ { r n } , r _ { t n } ) \in D _ { : }$ , where $n$ is the total number of data points in the IU-Xray dataset, $D$ , and $r _ { t } \subseteq D i s e a s e s$ . We also denote the BiomedCLIP image encoder as $B _ { I }$ and the text encoder as $B _ { T }$ . Furthermore, for linear probing, we add one fully connected multiperceptron classification head, $H _ { T }$ , to the last layer of $B _ { I }$ .
# 4.1 Data Processing
Each radiograph, $r _ { g k } \in D _ { k }$ , is first resized to $2 2 4 \mathrm { x } 2 2 4$ pixels with center cropping, followed by mean-normalization of pixel values. Each label of the corresponding reports, $r _ { r k } \in D _ { k }$ , is tokenized using PubMedBERT tokenizer [10] and padded to 256 tokens. These preprocessing steps are aligned with BiomedCLIP as outlined by Zhang et al. [28].
# 4.2 Model Settings
The preprocessed radiographs, $r _ { g k }$ , and tokenized labels $r _ { r k }$ are used in three model settings: zero-shot, fine-tuning, and linear probing.
(1) Zero-shot: Each radiograph $r _ { g k }$ is processed by $B _ { I }$ , while its corresponding label $r _ { r k }$ is processed by $B _ { T }$ . This yields contextual image embeddings $E ( r _ { g k } )$ and text embeddings $E ( r _ { r k } )$ for each $k \in \{ 1 , 2 , . . . , n \}$ .
(2) Fine-tuning: We freeze $B _ { I }$ and train only the new head ${ \cal { H } } _ { T }$ for one warm-up epoch. We use the binary cross-entropy (BCE) loss and the AdamW optimizer with weight decay $1 0 ^ { - 2 }$ to stabilize the random head initialization. After the warmup, we unfreeze the entire visual encoder $B _ { I }$ and continue to jointly optimize all parameters of $B _ { I }$ and $H _ { T }$ . We employ a cosine-annealing learning rate (LR) schedule, dropping the base LR by a factor of 10 when unfreezing, and apply early stopping based on the validation BCE loss to prevent overfitting.
(3) Linear probing: We keep $B _ { I }$ fully frozen for the entire training run and train only $H _ { T }$ from scratch using BCE loss, AdamW, and a cosine LR schedule. Early stopping is again governed by the validation BCE loss.
# 4.3 Experiment Setup
To reiterate our objectives:
(1) Quantitatively and qualitatively evaluate BiomedCLIP’s classification performance on the imbalanced OOD dataset (i.e., Iu-xray).
(2) Validate linear probing – an alternative to fine-tuning – for its classification performance and explainability.
Therefore, we design three experiments:
(1) Embedding-space analysis. We compute inter- and intraclass distances (and their ratio) on the learned image embeddings to quantitatively assess how well the model separates different disease categories.
(2) Performance evaluation. We evaluate radiograph classification under zero-shot, full fine-tuning, and linear probing using the multi-label metrics (macro-F1, exact-match, LRAP, coverage error, label-ranking loss).
(3) Qualitative attention inspection. We generate Grad-CAM visualizations for a random subset of test images in each setting and compare those heatmaps against radiologist annotations to understand what the model is focusing on and its consistency. We also inspect how these visualizations change if we extract the representations from earlier layers.
Implementation details: All models use the same train/val/test split (70/10/20), a batch size of 24, AdamW optimizer (weight decay 1e-2), and early stopping on validation BCE loss. Image inputs are resized to $2 2 4 \times 2 2 4$ and normalized with BiomedCLIP’s mean/std, and text labels are tokenized and padded to 256 tokens with PubMedBERT.
# 4.4 Evaluation Metrics
Across all three settings – zero-shot, full fine-tuning, and linear probing – we compute the following quantitative metrics:
Embedding-space separability: We report inter-class vs. intraclass mean Euclidean distance of the disease classes, and their ratio in Table 1.
Multi-label classification metrics: We report per-class F1 scores in Table 2. We report exact-match accuracy (fraction of samples where the entire predicted label set exactly matches the ground truth), Label Ranking Average Precision (LRAP), Coverage error (number of top-predicted labels needed to cover all true labels), and Macro-F1 scores in Table 1.
Domain-level metrics: We report Inter-class vs. intra-class mean Euclidean distance of the disease classes, and their ratio in Table 1.
# 5 Results and Discussion
In this section, we present a detailed analysis of our three experiments: (1) embedding-space analysis, (2) classification performance evaluation, and (3) We additionally comment on qualitative GradCAM visualizations.
# 5.1 Experiment 1: Embedding space analysis
# Key takeaways:
Full fine-tuning is the preferred strategy to achieve maximum class discriminability. Linear probing achieves similar or better results than zeroshot with much less computational resources than full finetuning. These demonstrate that even without altering the core visual representations, linear probing recovers most of the performance achieved by full fine-tuning.
Cluster separation. Fine-tuning shrinks (reduces) intra-class variance $2 1 . 1 6 { } 1 2 . 7 6 _ { , } ^ { \prime }$ more than it shrinks inter-class distances, enhancing the separation ratio from 1.51 to 1.78. This indicates that training the entire visual encoder encourages more dense and distinct class clusters. In contrast, linear probing leaves the backbone unchanged (inter ${ \approx } 3 1 . 8 9 \$ , intra ${ \approx } 2 1 . 1 7$ ).
Global classification metrics. By reshaping the embedding space, full fine-tuning doubles the macro- $\cdot \mathrm { F } _ { 1 }$ $( 0 . 1 0 5 { } 0 . 2 3 5 )$ ), achieves nontrivial exact-match accuracy $( 1 3 . 4 \ \% )$ , and dramatically increases Label Ranking AP $( 0 . 2 5 0 { } 0 . 7 7 9 )$ ) while reducing coverage error $( 7 . 7 0 \mathrm { } 2 . 7 5 ) \$ ). Importantly, linear probing, which simply trains a lightweight classification head on top of the frozen BiomedCLIP vision encoder, captures the majority of these gains at a fraction of the compute cost. Its inter/intra ratio (1.51) and coverage error (3.08) remain nearly identical to zero-shot, yet it still increases macro-F1 to 0.183, exact-match to $4 . 3 \%$ , and LRAP to 0.741. Additionally, we record the training time for fine-tuning and linear probing, where the latter takes less than half the time of the former $( 1 5 . 4 7 \mathrm { m i n s } { } 6 . 1 0 \mathrm { m i n s } )$ ).
# 5.2 Experiment 2: Classification performance
# Key takeaways:
• Full fine-tuning yields the highest overall F1 scores on abundant diseases. Linear probing substantially outperforms zero-shot inference, closing most of the gap to full fine-tuning Both adaptation strategies struggle on extremely scarce diseases (e.g., Pneumothorax, Consolidation, Edema), but linear probing tends to generalize better on mid-scarce diseases.
Tables 2 and 3 report F1 scores for each disease and for each disease domain, respectively. Full fine-tuning achieves the highest absolute performance across almost all pathologies and domains;
however, linear probing consistently outperforms zero-shot inference and yields results comparable to full fine-tuning. We categorize our observations according to disease prevalence:
Abundant classes: Classes with relatively more data samples (e.g., ‘No Finding‘ with 2400 samples, ‘Lung Opacity‘ with 516, and ‘Cardiomegaly‘ with 415). Fine-tuning achieves the highest F1 (No Finding 0.803, Lung Opacity 0.354, Cardiomegaly 0.490), while linear probing comes close (No Finding 0.788, Lung Opacity 0.177, Cardiomegaly 0.421). The gap narrows for ‘No Finding’, suggesting that even a frozen encoder with a retrained classification head can achieve similar performance as fine-tuning on very frequent labels. Rare classes: Classes with relatively scarce data samples (e.g., ‘Pneumothorax‘ with 20, ‘Consolidation‘ with 31, and ‘Edema’ with 49). Both fine-tuning and linear probing struggle when only a small set of samples exists (all $\mathrm { F } 1 { \approx } 0$ for Pneumothorax, Consolidation, Edema). However, linear probing slightly outperforms fine-tuning on some mid-frequency pathologies such as Pneumonia (F1 of 0.286 compared to fine-tuning F1 of 0.267), Atelectasis (F1 of 0.410 compared to fine-tuning F1 of 0.091), and Fracture (F1 of 0.087 compared to fine-tuning F1 of 0.000). This suggests that linear probing can generalize better on classes with moderate but not extreme scarcity, perhaps by avoiding overfitting the small fine-tuning set.
At the domain level (Table 3), fine-tuning leads overall, but linear probing improves substantially over zero-shot for Cardiovascular (0.381 vs. 0.238) and Skeletal (0.087 vs. 0.076). Thus, while full fine-tuning delivers the best absolute performance, especially on common labels, linear probing offers a highly efficient alternative, which is computationally more feasible.
# 5.3 Experiment 3: Qualitative Grad-CAM inspection
# Key takeaways:
• Zero-shot BiomedCLIP generates Grad-CAM heatmaps that align very closely with radiologist-annotated regions, demonstrating that its visual encoder already encodes rich ‘where’ information for most diseases without any indomain tuning.
• Fine-tuning produces abstract, non-specific heatmaps that frequently cover irrelevant lung areas.
• Linear probing retains nearly all of zero-shot’s spatial fidelity while delivering measurable accuracy gains and generates Grad-CAM heatmap delineates both regions almost as precisely as zero-shot.
Shallower blocks of the model recover compact, ROI-aligned activations; intermediate blocks overgeneralize across the lungs; and the deepest block produces only sparse representations, often missing large lesion areas altogether.
In Figure 3, we present two of the fifteen samples annotated by a radiologist. We present the Grad-CAM visualizations generated from zero-shot, fine-tuning, and linear probing of the same radiographs. Additionally, we investigate the information representation across the last three odd layers.
Figure 3: Grad-CAM visualizations of BiomedCLIP under zero-shot, full fine-tuning, and linear probing adaptations compared with radiologist-annotated ground-truth regions (blue). We also compare fine-tuned BiomedCLIP’s Grad-CAM outputs when using three different visual encoder depths.
Table 1: Overall evaluation metrics for BiomedCLIP under three settings on test set. ‘zs’, ‘ft’, and ‘lp’ represent zero-shot, fine-tuning, and linear probing, respectively. We highlight the best performance in blue and the worst in red.
Comparing the explainability BiomedCLIP. The Grad-CAM analyses reveal that, in zero-shot, BiomedCLIP exhibits robust spatial priors for thoracic pathology: confidence values in the range of approximately $\approx 0 . 6 5 – 0 . 7 0$ and heatmaps that are tightly colocalized with radiologist-annotated regions, whether for focal lung lesions or combined atelectasis and pleural effusion. These results indicate that the pretrained model’s visual encoder inherently encodes “where” information for a variety of chest abnormalities, without any in-domain parameter updates. Heatmaps from the finetuning often span irrelevant lung fields, and confidence values drop to approximately $0 . 4 7 \mathrm { - } 0 . 5 0$ . By contrast, linear probing yields intermediate accuracy improvements over zero-shot while preserving nearly all of the pretrained spatial fidelity. In the second radiologistannotated sample, for instance, the linear probe heatmaps at fifth last layer delineate both the collapsed lower-lobe region and the effusion interface with comparable precision to zero-shot, whereas fine-tuning produces a broad, indistinct activation pattern.
Table 2: Per-label F1 scores for BiomedCLIP under three settings. ‘zs’, ‘ft’, and ‘lp’ represent zero-shot, fine-tuning, and linear probing, respectively. We highlight the best performance in blue and the worst in red.
Table 3: Per-domain F1 scores for BiomedCLIP under three settings. ‘zs’, ‘ft’, and ‘lp’ represent zero-shot, fine-tuning, and linear probing, respectively. We highlight the best performance in blue and the worst in red.
Examining the block depths of BiomedCLIP. A deeper examination of block depth underscores that earlier convolutional stages retain the most interpretable “where” information after transfer. In the fine-tuned model, activations from the final block (layer -1) are restricted to sparse “pin-pricks,” often omitting large lesion areas; intermediate blocks (layer -3) generate overly uniform saliency across the lungs; but activations from an earlier block (layer -5) recover compact clusters that align closely with ground-truth ROIs. This hierarchy suggests that shallow filters capture spatial localization more robustly, whereas deeper filters become overly specialized to classification when trained on limited data. | In this paper, we construct two research objectives: i) explore the learned
embedding space of BiomedCLIP, an open-source large vision language model, to
analyse meaningful class separations, and ii) quantify the limitations of
BiomedCLIP when applied to a highly imbalanced, out-of-distribution multi-label
medical dataset. We experiment on IU-xray dataset, which exhibits the
aforementioned criteria, and evaluate BiomedCLIP in classifying images
(radiographs) in three contexts: zero-shot inference, full finetuning, and
linear probing. The results show that the model under zero-shot settings
over-predicts all labels, leading to poor precision and inter-class
separability. Full fine-tuning improves classification of distinct diseases,
while linear probing detects overlapping features. We demonstrate visual
understanding of the model using Grad-CAM heatmaps and compare with 15
annotations by a radiologist. We highlight the need for careful adaptations of
the models to foster reliability and applicability in a real-world setting. The
code for the experiments in this work is available and maintained on GitHub. | [
"cs.CV"
] |
# 1 Introduction
In recent years, there has been a significant surge in the capabilities of large language models (LLMs) in generating human-like text and performing a wide range of natural language processing tasks. State-of-the-art models like GPT-4o (Hurst et al., 2024), OpenAI o1/o3 (Contributors et al., 2024), and Google’s Gemini (Team et al., 2023) have achieved superior performance in knowledge QA (Hendrycks et al., 2020; Wang et al., 2024), instruction-following (Chiang et al., 2024; Zhou et al., 2023), and code generation (Zhuo et al., 2024; Jain et al., 2024).
Figure 1: STRUCTEVAL evaluates the LLM’s capability to generate structured outputs, including text-only tasks like JSON, TOML, etc, and visual rendering tasks like HTML, React, Latex, etc.
Despite recent advances, many real-world applications require not only fluency in the content of the output but also precise control over its structure. This includes tasks where the expected output must follow specific formats such as JSON, XML, LaTeX, HTML, or code in frameworks like React or Vue. Additionally, in these tasks, in these tasks, we also want the code to render a page that correctly places elements according to the requirements. These types of structured output are essential in domains like software development, data pipelines, user interface generation, and scientific publishing, where incorrect formatting can lead to disrupted pipelines or non-functional outputs.
However, most existing benchmarks focus on the semantic quality (Wang et al., 2024) or reasoning ability of LLMs (Hendrycks et al., 2021; He
$\textcircled{1}$ Task Prompt Query & Metric Expert Review System: Task P r ompt
You are a prompt‑design Query & Metric assistant… GPT-V4.a1l tiGQoeunenerryat&eMetric
IYnosutraurcetidoens:igning a new LLM Query 2 Rounds query based on the following Please output HTML code Manual input/output types Label Studio Review Metric
Example:
You must generate a new query VQA - Keywords
tehxatmisplsetrbuecltourwe…d like the - AQ: 1F6opnxt size? - Syntax Rules Query & Metric
et al., 2024), with limited emphasis on their ability to produce format-conforming structured outputs. Some recently proposed benchmarks aim to evaluate the quality of structured outputs tend to target specific modalities, such as code generation (Zhuo et al., 2024) or text-only structures (Gu et al., 2024; Tang et al., 2023), rather than offering comprehensive evaluations across diverse structured formats. As existing benchmarks gradually become more saturated, it is still unknown how the current stateof-the-art models perform in structured generation tasks. We argue that effectively evaluating the models’ performance on such tasks is inherently challenging due to the following issues:
(1) Data Collection Challenges: Gathering diverse structured tasks and corresponding examples requires domain expertise across multiple formats, with high-quality annotations demanding significant effort and specialized knowledge.
(2) Evaluation Metric Complexity: Designing reasonable metrics in a unified form for both textonly structures (JSON, YAML) and visual outputs (HTML, SVG) is difficult, as they require different assessment approaches for structural correctness and visual fidelity.
(3) Technical Implementation Barriers: Building a framework that supports execution and evaluation across numerous rendering environments requires complex integration of multiple language interpreters and visualization tools.
To address these challenges, we introduce STRUCTEVAL, a comprehensive benchmark that systematically evaluates LLMs’ abilities to produce highly structured output. Our benchmark encompasses 21 distinct formats and 44 task types organized into two complementary subsets: StructEval$T$ , which assesses the generation of text-only structures such as JSON and TOML, and StructEval-V, which evaluates the quality of visually rendered outputs from code such as HTML and SVG. Both subsets include generation tasks (converting natural language to structured outputs) and conversion tasks (transforming between two structured formats). To ensure robust evaluation across these diverse formats, we have developed a novel assessment framework that integrates syntactic validity checking, keyword matching, and visual question answering, providing a holistic measure of both structural correctness and output fidelity.
Our comprehensive evaluation reveals significant performance gaps across models and tasks. Even state-of-the-art commercial models like o1- mini achieve only an average score of 75.58, while the best open-source model, such as Llama-3-8BInstruct, lags 10 points behind, underscoring the performance gap between commercial and opensource LLMs. We observe that generation tasks generally pose greater challenges than conversion tasks, and producing code capable of rendering correct visual content proves more difficult than generating text-only structured outputs. Task difficulty varies considerably across formats: while some tasks are effectively solved by all LLMs with scores exceeding 0.95 (such as Text $$ Markdown and T $\mathbf { \sigma } _ { \mathrm { { X t } \mathrm { { H T M L } } } }$ ), others remain particularly challenging with all models scoring below 0.5 (including Text $$ Mermaid and Matplotlib $$ TikZ). Through this systematic analysis, we aim to drive progress in structured output generation capabilities that are increasingly crucial for the real-world applications of language models.
# 2 StructEval Dataset
In this section, we first present an overview of our STRUCTEVAL dataset and statistical analysis in subsection 2.1. Next, we elaborate on how we design the whole pipeline for annotation and quality review in subsection 2.2. We will introduce how we design the evaluation metrics for each task in our dataset in section 3.
Table 1: The overall statistics of the STRUCTEVAL dataset. Here "SE" denotes StructEval. "T" and "V" represents the StructEval- ${ \bf \nabla } \cdot { \cal T } $ and StructEval-V subsets respectively. "gen" and "conv" represent the "generation" and "conversion" task types respectively.
# 2.1 Overview
As shown in Table 1, our STRUCTEVAL dataset comprises a total of 2,035 examples, covering 44 unique structure generation tasks across 18 structured output formats. The dataset is organized into two main subsets: StructEval- $T$ and StructEval-V.
• StructEval- $T$ is designed to evaluate an LLM’s ability to generate structured outputs directly from natural language prompts without rendering. Supported formats include JSON, XML, YAML, Markdown, CSV, TOML, among others. These are highly useful formats in many downstream applications.
• StructEval-V assesses an LLM’s ability to generate executable code for visual rendering that fulfills a specified visual requirement. This subset includes formats such as HTML, React, Matplotlib, Canvas, LaTeX, SVG, Mermaid, and more. These are widely adopted formats for various applications.
Each example in the dataset is categorized as either generation or conversion. In generation tasks, the model is required to produce structured output based on a natural language description with detailed specifications. In conversion tasks, the model must translate structured content from one format to another (e.g., JSON to YAML, HTML to React).
Formally, each example is represented as a triplet $( q , \mathbf { K } , \mathbf { Q } ^ { \mathbf { v } } )$ , where $q$ denotes the structure generation question, $\mathbf { K } ~ = ~ \{ k _ { 1 } , \ldots , k _ { | \mathbf { K } | } \}$ is a set of keywords expected to appear in the output, and
# StructEval-T Question, KeyWords
Please output JSON code.
# Task:
Summarize metadata about a fictional scientific article. Feature Requirements:
1. Top-level field "title" is a string containing the article title.
2. Field "authors" is a list of exactly two items.
3. Each element of "authors" contains "name" (string) and "affiliation" (string).
4. Field "publication.year" is an integer.
5. Field "keywords" is a list of strings.
# Keywords:
• title
• authors[0].name
• authors[1].affiliation
• publication.year
• keywords[2]
# Keywords:
• Trip Summary • highlight • <h1> • Export PDF
# VQA Pairs:
• Q: What text is displayed in the <h1> header? A: Trip Summary
• Q: How many rows are in the table? A: 3
• Q: What class is applied to the second table row? A: highlight
• Q: What text is on the button at the bottom? A: Export PDF
Table 2: Supported rule types in our path-based evaluation.
$\mathbf { Q } ^ { \mathbf { v } } = \{ ( q _ { 1 } ^ { v } , a _ { 1 } ^ { v } ) , \dots , ( q _ { | \mathbf { Q } ^ { \mathbf { v } } | } ^ { v } , a _ { | \mathbf { Q } ^ { \mathbf { v } } | } ^ { v } ) \}$ is a set of visual question-answer (VQA) pairs used for evaluating examples in the StructEval- $V$ subset. In contrast, for StructEval- $T _ { \mathbf { \delta } }$ $\mathbf { Q } ^ { \mathbf { v } }$ is empty and not used during evaluation. To ensure comprehensive evaluation, each example in the dataset contains on average 14.7 keywords and $8 . 5 \mathrm { V Q A }$ pairs, as detailed in Table 1.
The dataset encompasses a wide spectrum of structured output formats, ranging from widelyused data serialization types like JSON and YAML to visually-renderable formats such as SVG, Mermaid, and TikZ. This diverse format coverage enables a more holistic evaluation of LLMs’ capabilities in both structured data modeling and visual code generation. Notably, the inclusion of niche yet expressive formats—such as Typst for typesetting, Mermaid for diagram specification, and TikZ for LaTeX-based graphics—broadens the evaluative scope beyond conventional tasks. These formats collectively span domains including web front-end development, data exchange, scientific visualization, and technical documentation. The distribution of tasks across these formats is shown in Table 6, highlighting the balanced composition of generation and conversion tasks across both textual and visual modalities.
# 2.2 Annotation Pipeline
To construct a high-quality and diverse benchmark, we design a multi-stage annotation pipeline consisting of three key components: 1) task curation, 2) LLM-based synthesis, and 3) expert review. This pipeline ensures both the scalability and accuracy of the STRUCTEVAL dataset.
Task Prompt We begin by identifying a broad spectrum of structure generation and conversion tasks that span both text-based and executable visual formats. These tasks are selected to reflect practical use cases and diverse real-world scenarios, covering 18 target formats and 44 distinct task types (also shown in Table 6. Each task specification includes format constraints, input-output expectations, and, where applicable, conversion rules. Please refer to subsection A.4 for a sample task prompt.
Query/Metric Generation Given the high cost of fully manual annotation, we leverage a large language model to synthesize an initial pool of candidate examples. Each example consists of a task query and a set of associated evaluation metrics, including keywords for text outputs and visual question-answer (VQA) pairs for visual outputs. This step allows us to rapidly generate a large and varied collection of plausible instances that serve as drafts for human refinement.
Expert Review To ensure quality and correctness, we employ a two-pass human review process. Annotators first validate and refine the generated task queries and associated metrics. They are allowed to freely modify, add, or remove any part of the synthesized content to ensure task clarity, completeness, and evaluability. In the second pass, a separate reviewer verifies the consistency and correctness of each example. All annotation is conducted using LabelStudio (Tkachenko et al., 2020-2025), an open-source collaborative annotation tool designed for structured data. The final dataset contains 2035 curated examples, carefully reviewed to support robust evaluation across both StructEval-T and StructEval- $V$ settings.
# 3 StructEval Evaluation
Before the evaluation, we feed the LLM with the questions $q$ in the datasets with the corresponding prompt template defined in Table 3. We require the LLM to output the desired structured outputs between "<|BEGIN_CODE ${ \big | } > "$ and "<|END_CODE|>" so we can correctly parse the structured outputs for evaluation. For the StructEval-V, parsed outputs will be additionally sent to our rendering engines to acquire the rendered visual outputs (see examples in subsection A.3). We then evaluate model outputs using an automatic evaluation pipeline that captures both structural correctness and semantic fidelity. Specifically, we have designed core metrics depending on the task format: 1) Syntax Score, 2) Keyword Matching Score, and 3) Visual Question Answering (VQA) Score.
# {StructEval Question}
IMPORTANT: Only output the required output format. You must start the format/code with <|BEGIN_CODE $| >$ and end the format/code with <|END_CODE $| >$ . No other text output (explanation, comments, etc.) are allowed. Do not use markdown code fences.
Syntax Score. The Syntax Score verifies the structural correctness of the generated output. For text-based formats such as JSON, YAML, and CSV, this involves parsing the output using a formatspecific Python parser. For executable visual formats like HTML, LaTeX, or SVG, the code is rendered using a headless renderer to determine whether it executes successfully. A score of 1 is assigned if the output is syntactically valid or successfully rendered; otherwise, the score is 0. See the subsection A.3 for some correctly rendered images, code produced by the tested LLMs.
Keyword Matching Score This metric evaluates whether the generated output contains the required structural elements. Given the reference set of expected keywords $\mathbf { K } = \{ k _ { 1 } , \ldots , k _ { | \mathbf { K } | } \}$ for a given task, we assess their presence using exact matching or regular expression rules.
For the tasks of StructEval- $T$ such as JSON or XML, keyword matching is performed over field names and values using dot-path references to account for nested hierarchies. The score is computed as the proportion of expected keywords correctly matched in the model’s output. Our evaluation supports a variety of path formats as shown in Table 2. The way dot-path rules are created differs depending on the task type.
For generation tasks, each task prompt includes feature requirements stated in natural language. These requirements define target keys and their relationships to one another (e.g., nesting depth, list membership). Annotators translate each requirement into a concrete dot-path rule using the syn
# VQA Prompt Template
You are given an image and a list of question-answer pairs.
• For each pair, verify if the image content supports the expected answer based on the corresponding question. • Base your judgment solely on the visual content of the provided image, and the question. • Do not use any external information or common-sense reasoning beyond what is visible. • Respond with a JSON object mapping each question number to true or false (e.g., {"1": true, "2": false}). • If the image is unclear or does not contain enough information to answer, use null for that question.
Here are the question-answer pairs: {qa_list} tax rules shown in Table 2. For conversion tasks, the input is itself a structured format (e.g., YAML or XML). We use an LLM to parse the structural schema of the input—identifying key names, nesting levels, and list structures—and convert them into target dot-path rules that the generated output must preserve.
This approach ensures that models are not only producing syntactically valid outputs, but also preserving the expected structural relationships.
For the tasks of StructEval-V such as HTML, and Matplotlib, we simply detect whether the annotated keyword is in the structured outputs and give scores accordingly.
VQA Score This score is used exclusively for tasks in the StructEval- $V$ subset, where the output is expected to be visually rendered. After rendering the output, GPT-4.1-mini (Hurst et al., 2024), a vision-language model (VLM), is employed to answer a set of visual questions $\mathbf { Q } ^ { \mathbf { v } } =$ $\{ ( q _ { 1 } ^ { v } , a _ { 1 } ^ { v } ) , \ldots , ( q _ { | \mathbf { Q } ^ { \mathbf { v } } | } ^ { v } , a _ { | \mathbf { Q } ^ { \mathbf { v } } | } ^ { v } ) \}$ . The VLM will be given both the questions and answers and required to decide whether the VQA pair matches this rendered image. The VQA score is computed as the proportion of correctly answered questions.
Final task scores are calculated as weighted combinations of these metrics, with weights adjusted based on whether the task is renderable. Let $s _ { s } , s _ { k } , s _ { v } ~ \in ~ [ 0 , 1 ]$ denotes the syntax, keyword matching, and VQA score respectively. The for
StructEval- $T$ task, the final score $s$ is computed as:
$$
s = 0 . 2 \cdot s _ { s } + 0 . 8 \cdot s _ { k }
$$
For StructEval- $V$ , the final score $s$ in computed as:
$$
s = 0 . 2 \cdot s _ { s } + 0 . 1 \cdot s _ { k } + 0 . 7 \cdot s _ { v }
$$
This evaluation framework provides a unified, finegrained view of model performance across both structured data generation and visual code synthesis tasks, supporting deeper insights into LLM capabilities across modalities.
# 4 Experiments
# 4.1 Experimental Setup
Evaluation Models. We evaluate a range of open-source and commercial large language models (LLMs) using our benchmark. For open-source models, we use Meta-Llama-3-8BInstruct (Grattafiori et al., 2024), Phi-3-mini128k-instruct (Abdin et al., 2024a), Phi-4-miniinstruct (Abdin et al., 2024b), Qwen2.5-7BInstruct (Yang et al., 2024), and Qwen3-4B (Yang et al., 2025). For commercial models, we use Gemini-1.5-pro and Gemini-2.0-flash (Team et al., 2023), GPT-4.1-mini and GPT-4o (Hurst et al., 2024), GPT-4o-mini, and o1-mini (Contributors et al., 2024). All tasks are evaluated in a zero-shot setting using consistent prompts and parameters.
Inference Setup. All model generations are performed using LLM-Engine (Jiang, 2024), a unified inference framework that supports both opensource backends (e.g., VLLM, SGLang, Together), and commercial APIs (e.g., OpenAI, Claude, Gemini). For open-source models, we specifically utilize the vLLM engine for efficiency (Kwon et al., 2023). For close-source models, we simply call the APIs. As shown in Table 4, we use greedy decoding by default. All tasks are evaluated zeroshot using uniform task prompts defined in Table 3. When performing the VQA evaluation, we select GPT-4.1-mini as the VLM due to its superior multimodal abilities (OpenAI, 2025). We apply the VQA prompt template defined in Figure 5 and ask the VLM to decide whether each VQA pair matches the rendered visual image at once.
Evaluation. Output generations are automatically scored using the evaluation pipeline described in section 3, including syntactic validity checking, keyword matching, and VQA accuracy. GPT-4.1- mini (Hurst et al., 2024) is used as the visionlanguage model for all VQA-based evaluations.
Table 4: Inference configuration
# 4.2 Main Results
Overall Performance Table 5 summarizes the performance of all evaluated models across the two main task groups: StructEval-T and StructEval- $V$ , each further divided into generation and conversion subtasks. Overall, GPT-4o achieves the highest average score of $7 6 . 0 2 \%$ among all 12 models. The best-performing open-source model is Qwen3-4B, with a score of $6 7 . 0 4 \%$ , trailing GPT-4o by approximately 10 percentage points. While GPT-4o excels particularly in the generation tasks within the StructEval-V category, Qwen3-4B demonstrates consistently strong performance across all task types among open-source models. This likely reflects Qwen3-4B’s robust reasoning capabilities relative to other open-source alternatives.
In contrast, the lowest-performing model is phi-3-mini- $1 2 8 \mathsf { k }$ -instruct, with an average score of only $4 0 . 7 9 \%$ . Although one might attribute this to its relatively small size of 3.8 billion parameters, model size alone does not fully explain the poor results. For example, phi-3-mini underperforms even compared to similarly sized models such as phi-4-mini-instruct. Notably, it achieves the lowest score in StructEval- $T$ conversion tasks, a category where models with strong reasoning abilities—such as o1-mini $( 8 1 . 8 2 \% )$ and Qwen3-4B $( 8 1 . 1 3 \%$ —tend to perform well.
Error analysis reveals two key failure modes for phi-3-mini- $1 2 8 \mathsf { k }$ -instruct. First, in the TOML-to-YAML conversion task, the model frequently produces malformed closing tags, outputting $| < | \mathsf { E N D \_ C O D E } | >$ instead of the correct $< | \mathsf { E N D \_ C O D E } | >$ , which significantly penalizes its score. Second, in the CSV-to-JSON conversion task, the model fails to capture hierarchical relationships (e.g., parent-child) specified in the CSV headers, leading to structurally incorrect JSON outputs. These recurring structural errors in StructEval-T conversion tasks substantially contribute to the model’s overall low performance.
Table 5: Main evaluation results of STRUCTEVAL
Figure 6: Average score over all models based on the most challenging subtasks
Open-Source vs. Closed-Source Models When comparing open-source models and commercial models, we can see that by ∆ (closeavg - openavg) value, which is the difference between the average score of commercial source model and open model, that commercial model’s score is consistently higher than open-source models, this makes sense given the much larger parameters of commercial models by scaling law. We can see that commercial models exceed open-source models on average the most on generation tasks in StructEvalT setting, and the performance gap is smallest on generation tasks in StructEval-V setting.
Generation vs. Conversion As shown in Figure 7, a comparison between generation and conversion tasks in both StructEval-T and StructEval$V$ settings reveals that, in general, models perform better on conversion tasks than on generation tasks. An exception to this trend occurs in the StructEval$T$ setting, where commercial models tend to outperform on generation tasks, while open-source models show the opposite behavior—achieving higher scores on conversion tasks.
Figure 7: Average score over all models based on the four task types
Under a temperature setting of 1, commercial models attain an average score of $7 5 . 7 8 \%$ on StructEval- $T$ generation tasks. In contrast, open-source models average only $8 . 5 8 \%$ on the same tasks for the TOML format. This considerable disparity in TOML generation performance partly explains why commercial models perform better on StructEval- $T$ generation tasks overall. However, the performance gap is not confined to TOML—commercial models also lead in the other four generation formats within StructEval-T.
In the StructEval-V setting, commercial models significantly outperform open-source counterparts on generation tasks involving complex visual formats such as Mermaid and TikZ. These tasks require advanced visual reasoning capabilities, which are more prevalent in multimodal commercial LLMs like GPT-4o and GPT-4o-mini.
Subtasks Analysis Meanwhile, several tasks in both in generation and conversion types appear to be saturated, with most models achieving scores exceeding $9 0 \%$ . These include generation tasks for common formats such as JSON, HTML, CSV,
Markdown, and YAML, as well as conversion tasks like YAML-to-JSON, React-to-HTML, TOML-toJSON, and Markdown-to-HTML. Such results indicate that LLMs have already mastered many structurally straightforward format transformations.
There remain several challenging tasks where all models struggle significantly (shown in Figure 6), including generation tasks like Text $$ TOML, Text $_ { \mathrm { } \mathrm { S V G } }$ , Text $$ Mermaid, and Text $$ Vega, as well as conversion tasks like YAML ${ \bf { \Lambda } } \mathrm { . { } X M L }$ , ${ \mathrm { C S V } } { } \mathbf { Y } \mathbf { A } \mathbf { M } \mathbf { L }$ , Matplotlib ${ } \mathrm { T i k } Z$ , and Markdown $$ Angular(see scores in subsection A.2). Both closed-source and open-source models achieve low scores on these tasks, which typically require complex structural or visual reasoning. Notably, the performance gap between closed-source and open-source models is even wider on these challenging subtasks, suggesting that proprietary models may have advantages in handling more complex structural representations and transformation logic.
# 5 Related Work
# 5.1 Large Language Models
Large Language Models (LLMs) have demonstrated remarkable capabilities and gained surging popularity in recent years, ever since the release of ChatGPT (OpenAI, 2023). Over the years, open-source models like Llama (Grattafiori et al., 2024), Phi (Abdin et al., 2024b,a), and Qwen (Yang et al., 2024, 2025) developed by companies like Meta, Microsoft, and Alibaba further facilitated a widespread integration of AI into diverse workflows and everyday applications. Leveraging their large parameter sizes and extensive post-training, LLMs are capable of performing a diverse array of Natural Language Processing (NLP) tasks (Wan et al., 2023). One of the key aspects of the generative capabilities of these models is their ability to generate structured data and transform data from one type to another while maintaining strict adherence to specified formats (Guo et al., 2024). In this paper, we design a new and comprehensive benchmark that evaluates the capability of LLMs to understand, generate, and manipulate structured data across a range of complex, real-world tasks.
# 5.2 Evaluation of LLMs
Evaluating structured output has become a focal point for understanding LLM’s limitations (Ning et al., 2025). SoEval (Liu et al., 2024) offers a fast, rule-based check for JSON and XML, but its flat schemas fail to reveal errors in deeper hierarchies. StrucText-Eval (Gu et al., 2024) shifts the task to reasoning over structure-rich text (JSON, YAML, LaTeX) rather than generating the structures themselves, while FOFO (Xia et al., 2024) extends to domains such as law and finance yet covers only a few formats and still relies on human verification. Developer-focused suites like StackEval (Shah et al., 2024) for HTML, CSS, and plotting libraries, and CodeXGLUE (Lu et al., 2021) for multilingual code tasks remain limited to programming artifacts, and Struc-Bench (Tang et al., 2023) concentrates on tabular generation with bespoke metrics. Each benchmark highlights a part of the challenge—be it format adherence, domain coverage, or table fidelity. However, none simultaneously demands broad format coverage, automated grading, and robust transformation capabilities. StructEval addresses these gaps by spanning 18 code and non-code formats, unifying generation, completion, and conversion tasks, and scoring outputs with fully automated structural and visionbased metrics, offering a comprehensive lens on how well LLMs respect and manipulate complex schemas.
# 5.3 Structured Output Generation
The ability to generate structured outputs is central to many real-world applications of LLMs (Gu et al., 2024; Tang et al., 2023). These outputs are not only expected to be semantically coherent but must also adhere strictly to syntactic and structural constraints—violations of which can lead to parsing failures, rendering errors, or broken downstream applications. Common tasks include generating JSON for API responses (Geng et al., 2025), YAML or TOML for configuration files (Peddireddy, 2024), HTML or React for UI components (Si et al., 2024), and LaTeX or Markdown for technical writing (Wen et al., 2024). Moreover, in data science, models are used to transform unstructured descriptions into structured formats like CSV or tables for integration into analysis pipelines (Li et al., 2023; Su et al., 2024). In publishing and education, tools that convert textual prompts into diagrams (e.g., using TikZ, SVG, or Mermaid) help automate visualization generation (Lee et al., 2025; Rodriguez et al., 2025; Ku et al., 2025). Despite its significance, structured output generation remains challenging due to the need for models to internalize both syntax rules and hierarchical schema relationships across a wide variety of formats. Our STRUCTEVAL first conducts a comprehensive evaluation of existing LLMs on both renderable and non-renderable tasks, showing that they still struggle to correctly generate some data formats including TOML, SVG, and Mermaid. | As Large Language Models (LLMs) become integral to software development
workflows, their ability to generate structured outputs has become critically
important. We introduce StructEval, a comprehensive benchmark for evaluating
LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and
renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks,
StructEval systematically evaluates structural fidelity across diverse formats
through two paradigms: 1) generation tasks, producing structured output from
natural language prompts, and 2) conversion tasks, translating between
structured formats. Our benchmark encompasses 18 formats and 44 types of task,
with novel metrics for format adherence and structural correctness. Results
reveal significant performance gaps, even state-of-the-art models like o1-mini
achieve only 75.58 average score, with open-source alternatives lagging
approximately 10 points behind. We find generation tasks more challenging than
conversion tasks, and producing correct visual content more difficult than
generating text-only structures. | [
"cs.SE",
"cs.AI",
"cs.CL"
] |
# I. INTRODUCTION
Large Language Models (LLMs) have achieved exceptional capabilities in various Natural Language Processing (NLP) tasks [1]–[3], demonstrating their ability to absorb and retain vast amounts of knowledge. When responding to specific queries, LLMs often provide informative answers, leveraging the extensive range of information they acquired during their training. However, while their capabilities are impressive, LLMs still have several limitations that hinder their overall applications.
A major limitation of LLMs is the rapid growth in the number of parameters required to achieve extensive capabilities. As the training dataset expands, the model needs to capture increasingly complex patterns, which in turn demands a substantial increase in parameters. This exponential growth not only adds complexity to the model but also creates significant deployment challenges, making it difficult to implement the model in real-world applications.
Another limitation of LLMs is their inability to incorporate time-sensitive or non-public information. This limitation arises from the fact that LLMs are trained on static datasets that represent a snapshot of the internet at a particular point in time. As a result, these models often lack access to the recently developed or updated information. This can lead to a critical issue: LLMs may generate ”hallucinations,” where they produce responses that are not grounded in actual or current information. This problem is particularly alarming in applications where accuracy and reliability are crucial, as it can erode trust in the model’s outputs.
A novel approach has recently emerged to tackle these limitations: Retrieval-Augmented Generation (RAG) [4], [5]. The RAG enhances its capabilities by integrating LLMs with external knowledge retrieval. This integration allows the RAG to access and incorporate not only publicly available information but also time-sensitive data or information that is not publicly accessible, thereby expanding its knowledge base.
When a query is given, RAG system uses a retriever to search an external knowledge database and retrieve the most relevant documents related to the query. Next, these documents are combined with the original query to create a prompt for a language model. The language model then generates its output based on the information from the retrieved documents, resulting in a comprehensive response to the query. The work flow of the RAG system is illustrated in Fig. 1.
Fig. 1: RAG system
The RAG system differs from generative-only models in its ability to utilize time-sensitive information or non-public documents, such as internal company documents, to reduce the risk of hallucinations. A key component of RAG is its document retrieval mechanism, which involves comparing a query vector to document vectors in a database based on cosine similarity. The documents are ranked based on its relevance and the top matches are then selected, but this process may still yield some irrelevant documents. To refine the results, RAG employs a re-ranking process, where a secondary model acts as a relevant grader. This model assesses the retrieved documents to determine their suitability for answering the user’s question, ensuring that the final response is relevant and accurate. The work flow of the RAG system with a relevance grader is illustrated in Fig. 2.
Integrating an additional LLM into the RAG pipeline poses significant memory and computational challenges. To reduce these burdens, we propose using a fine-tuned, small language model as a relevant grader. The challenge is to achieve sufficient capability with a relevant grader with a relatively small number of parameters since a language model’s capability is often tied to its number of parameters [19]. Since our baseline model has only 1 billion parameters, which is significantly smaller than that of widely-used LLMs, we anticipated potential performance issues. To mitigate this, we added a binary classification head to the model’s final layer, which is suitable for the binary output of a relevant grader. We then fine-tuned the model under various hyper-parameter configurations to further optimize its capabilities.
Fig. 2: RAG system with a relevant grader
The primary contributions of this paper are as follows:
• Model Enhancement: A fine-tuned small language model (lama-3.2-1b) is used for relevance grading in RAG systems, improving precision from 0.1301 to 0.7756. • Efficiency and Speed: The lightweight model expects to minimize memory and computational requirements, enabling deployment in resource-constrained environments and accelerating the retrieval process. • Dataset and Evaluation: The dataset of 45,000 querydocument pairs was generated for fine-tuning the relevance grading process, supporting the development of more accurate RAG systems.
Overall, this work contributes to the advancement of RAG systems by offering a practical and efficient solution for relevance grading, which enhances the accuracy and performance of information retrieval in the presence of limited computational resources.
# II. RELATED WORK
To identify relevant documents from a knowledge database, searching algorithms are employed in RAG systems. Traditional search algorithms rank documents by the frequency of query terms within them. Among the widely used algorithms are Term Frequency-Inverse Document Frequency (TF-IDF) and Best Matching 25 (BM25) [6]. However, these approaches primarily depend on lexical matching, which can limit their ability to effectively grasp the context of documents.
Unlike traditional search algorithms that rely on exact keyword matches, vector search utilizes vector embeddings to capture the semantics of data, enabling a meaning-based search approach. In this method, both the query and the document are independently transformed into embedding vectors using a semantic encoder. Vector search then assesses the similarity between the query vector and document vectors. This technique allows unstructured data such as images, text, and audio to be represented as vectors in a high-dimensional space, facilitating the efficient identification and retrieval of vectors that closely align with the query vector.
Distance metrics, like Euclidean distance and cosine similarity, are frequently employed to evaluate the similarity between vectors. The Euclidean distance between two embedding vectors, $\mathbf { v } ( \mathbf { s _ { 1 } } )$ and $\bf v ( s _ { 2 } )$ , each with n dimensions representing sentence 1 and sentence 2, is defined as follows:
$$
\begin{array} { r } { d ( s _ { 1 } , s _ { 2 } ) = \lVert \mathbf { v ( s _ { 1 } ) } - \mathbf { v ( s _ { 2 } ) } \rVert _ { 2 } } \\ { = \sqrt { \displaystyle \sum _ { i = 0 } ^ { n - 1 } ( v _ { i } ^ { 1 } - v _ { i } ^ { 2 } ) ^ { 2 } } } \end{array}
$$
Cosine similarity between two vectors $\bf v ( s _ { 1 } )$ and $\bf v ( s _ { 2 } )$ is also defined as follows.
$$
\begin{array} { r l r } & { } & { s i m ( s _ { 1 } , s _ { 2 } ) = \frac { \mathbf { v } \left( \mathbf { s _ { 1 } } \right) \cdot \mathbf { v } \left( \mathbf { s _ { 2 } } \right) } { \left\| \mathbf { v } \left( \mathbf { s _ { 1 } } \right) \right\| _ { 2 } \left\| \mathbf { v } \left( \mathbf { s _ { 2 } } \right) \right\| _ { 2 } } } \\ & { } & { \qquad = \frac { \sum _ { i = 0 } ^ { n - 1 } \left( v _ { i } ^ { 1 } v _ { i } ^ { 2 } \right) } { \sqrt { \sum v _ { i } ^ { 1 } } ^ { 2 } \sqrt { \sum v _ { i } ^ { 2 } } ^ { 2 } } } \end{array}
$$
Hybrid search, a recently introduced method, integrates keyword-based search with vector-based search to capitalize on the strengths of both techniques. This combination has the potential to yield more precise and relevant search results. In a hybrid search system, keyword-based and vector-based searches are performed separately, then their results are subsequently merged. Despite its promise, one of the challenges lies in ranking these results and assigning appropriate weights to effectively combine them.
The ever-increasing volume of accessible information resources has created a significant demand for effective methods of similarity searching. Vector search algorithms are specifically developed to efficiently identify the vectors most similar to a given query vector. Among the widely used vector searching algorithms are K-Nearest Neighbors (KNN) and Approximate Nearest Neighbor (ANN).
KNN [7], often referred to as the Brute Force algorithm, identifies the K nearest vectors to a query vector by measuring the distance—typically the Euclidean distance—between the query and every other vector in the dataset. While it ensures the precise identification of the nearest neighbors, it can be computationally demanding for large datasets.
ANN [8] algorithms permit a slight error margin, providing points that are nearly the closest rather than exactly the nearest. While this approach sacrifices some precision, it offers a substantial increase in speed over exact nearest neighbor search methods. Among the various ANN algorithms, Hierarchical Navigable Small World (HNSW) algorithm is the most widely used.
HNSW [9] creates a hierarchical graph structure, where each node corresponds to a data point and edges link nearby points in the dataset. This graph is composed of multiple layers, each representing a different level of detail or resolution. These layers are arranged hierarchically, with broader, coarser layers at the top and more detailed, finer layers at the bottom. The algorithm’s main advantage lies in its ability to efficiently narrow down the search space by navigating through these layers to find the most likely candidates for nearest neighbors. This process begins at the top layer and progressively moves down to the lower layers, using the edges to steer the search towards the most similar data points. As a result, HNSW effectively balances the trade-off between search speed and accuracy.
Within RAG pipelines, re-ranking [20]–[22] plays a crucial role in refining initial search results to better align with user intent and context. By doing so, it enhances user satisfaction by delivering more precise and contextually relevant outcomes. This, in turn, leads to increased conversion rates and improved engagement metrics. Ultimately, re-ranking enables LLMs to leverage the most relevant and high-quality information available, resulting in more accurate and effective results.
Cross-encoders play a crucial role in re-ranking processes within RAG pipelines. The re-ranking process is illustrated in Fig. 3. Their functionality involves taking a concatenated query and document as input, generating a relevance score as output. Although cross-encoders excel at capturing the nuanced interactions between queries and documents, their computational requirements are substantial. This is largely due to the fact that LLMs are often utilized as cross-encoders, which demands significant memory and computational resources.
Construction. These queries were carefully crafted to cover various categories, including R&D, Technology, Regulations, Market, Manufacturing, Hiring, Sustainability, Business-toBusiness (B2B), Security, Industry, Leadership, Economy, and Finance. For example, one such query was ”How will the expanding specialized drug market impact pharmaceutical R&D strategy and manufacturing capabilities?”. In total, we had 160 unique queries.
Fig. 3: Re-ranking process
To identify relevant news articles for each query, we embedded the query sentences using the same bge-small-en-v1.5 model and calculated the cosine similarity between the query vector and the news article vectors. We utilized the HNSW algorithm for efficient vector searching, which enabled us to find the top five most similar vectors for each query. This process was repeated daily over a 90-day period for all 160 queries, resulting in the collection of 45,000 query-article pairs.
# B. Relevant Grading
The development of lightweight language models as crossencoders seeks to strike a balance between accuracy and efficiency. With their faster processing speeds and smaller memory requirements, these models are well-suited for realtime applications. However, they often struggle to match the accuracy and contextual relevance of their larger counterparts. To address this limitation, our research focuses on developing a fine-tuned, lightweight language model that functions as a relevant grader. The goal of this model is to provide search results that are comparable in accuracy and relevance to those produced by larger, more complex language models.
To guarantee that user intent and context are aligned, we combined the query with the document as input and assessed their relevance using Llama-3.1-405B-Instruct [11]. At the time of the writing of this paper, this model is the largest and most advanced openly accessible foundation model [23]. We utilized the following system prompt, incorporating chainof-thought phrases: ”Please analyze the contents of DOCUMENTS and determine whether it is relevant in answering the QUESTION”
# III. DATASET
# A. Data Preparation
Fig. 4: Distribution of cosine similarity with relevant grading
To evaluate the accuracy of search results, we used 45,000 pairs of user queries and corresponding recent news articles. Our approach involved two main steps. First, we used a vector database of news articles which are collecting articles daily from multiple news sources [24] and embedding them using the bge-small-en-v1.5 semantic encoding model [10]. The embedding vectors have 384-dimension. In the second step, we developed a set of 20 query questions across eight distinct fields: Pharmacy, Venture Capital, Information Technology (IT), Legal, Banking, Healthcare, Automotive, and Residential
Fig. 4 illustrates the distribution of cosine similarity along with the evaluation outcomes for relevant grading. It reveals that merely $1 2 . 3 \%$ of the cases are approved by the relevant grader, highlighting the essential function of the relevant grader within the RAG pipeline. Additionally, the distribution displays a bimodal pattern. The second peak in this distribution corresponds to relevance, while the first peak appears to be misaligned with the relevant search. This misalignment could be attributed to the HNSW’s nature as an approximate search method, which may compromise accuracy, or to an imprecise embedding model.
We evaluated its relevance outcomes against other LLMs, including GPT4o-mini [12], Llama-3.1-70B-Instruct [13], Llama-3.1-8B-Instruct [14], Llama-3.2-3B-Instruct [15], and Llama-3.2-1B-Instruct [16]. The relevance grading results from Llama-3.1-405B-Instruct were used as the Ground-True, and we calculated Accuracy, Precision, Recall, and F1-score based on the confusion matrix according to Table I, Eq 3 - Eq 6. The results are presented in Table II.
TABLE I: Confusion matrix
$$
A c c u r a c y = { \frac { T P + T N } { T P + F N + F P + F N } }
$$
$$
P r e c i s i o n = \frac { T P } { T P + F P }
$$
$$
R e c a l l = { \frac { T P } { T P + F N } }
$$
$$
F _ { 1 } = \frac { 2 } { \frac { 1 } { P r e c i s i o n } + \frac { 1 } { R e c a l l } }
$$
TABLE II: Model comparison for relevance grading
The dataset is imbalanced, comprising a majority of negative labeled data. This imbalance can result in a high false positive rate in a model’s predictions. To effectively evaluate the model’s performance in this context, Precision is a particularly useful metric, as it helps assess the accuracy of the model’s positive predictions. As anticipated, a model with a large number of parameters, like Llama-3.1-70B, achieves the highest Precision score of 0.8341. In contrast, a model with fewer parameters, such as Llama-3.2-1B, has the lowest Precision score of 0.1312, in line with scale’s law [19]. Despite Llama3.2-1B having the poorest Precision score among the other models, it is suitable for efficient deployment in RAG systems due to its lightweight design, which requires less memory and computing operations. Our objective in this work is to fine-tune Llama-3.2-1B to enhance its Precision, enabling it to function effectively as a relevant grader.
# IV. TASK-SPECIFIC FINE-TUNING
Fine-tuning a language model on specialized data allows it to leverage its extensive pre-learned knowledge and adapt to a specific task. By modifying its parameters through finetuning, the model can better align with the demands of the task, resulting in improved performance and applicability within that domain. This approach is particularly effective when we want to optimize the model’s performance for a single, welldefined task, ensuring that the model excels in generating taskspecific content with precision and accuracy.
Fig. 5: Model Configuration for Fine-tuning
Our work began with the Llama-3.2-1B model as our foundation. We aimed to fine-tune this baseline model to perform as a relevant grader, a task that requires assessing the relevance between a user’s query and a set of documents. Specifically, the model would take a user’s query and a related document as input and output a determination of whether the query and the document are relevant. To avoid overfitting, we divided the dataset of 45,000 user query and document pairs into $80 \%$ for training and $20 \%$ for testing. The training and testing datasets preserve the same proportion of positive to negative labels.
The fine-tuning process involves adjusting the parameters of a model to better suit a specific task. The degree of modification can vary greatly depending on the task’s requirements. Model configurations for fine-tuning are illustrated in Fig.5.
# A. Full fine-tuning
Full fine-tuning involves a comprehensive adjustment of a model, where all parameters of its layers are modified using data that is specifically tailored to a particular task. In our case, we fine-tuned every layer of Llama-3.2-1B-Instruct using a training dataset consisting of 36,000 pairs of user query and document.
TABLE III: Model comparison on test dataset
We utilized the AdamW optimizer [17] with a cosine learning rate schedule. The schedule started with an initial learning rate of 2e-5 and gradually decreased to a final learning rate that was $10 \%$ of the peak rate. Cross-entropy was used as the loss function. Since the training dataset was skewed, with a predominance of negative labels, we implemented both oversampling and under-sampling techniques to achieve a more balanced distribution of positive and negative labels, thereby mitigating the impact of class imbalance on our model’s performance.
# B. Transfer learning with Classification head
Our study employed transfer learning, a technique that harnesses knowledge gained from one task or pre-existing knowledge obtained through pre-training on a large dataset to enhance performance on a specific task. To implement this approach, we leveraged a pre-trained Llama model and attached a classification head, a specialized layer designed for classification tasks, to its end. The classification head plays a crucial role in predicting the final label by processing the model’s output. Specifically, it takes the hidden state with a dimension of 2048 and converts it into a logit with a dimension of 2, corresponding to the number of labels. The logit then undergoes softmax and argmax processing to yield the final label. A significant benefit of this transfer learning approach is the substantial reduction in computational operations required during training. By utilizing a pre-trained model, we avoided the need to train a large model with 1.236 billion parameters, instead training only a single classification layer with 4096 parameters, resulting in considerable computational savings.
# C. Full fine-tuning with Classification head
In the previous section IV-B, we explored a method where a pre-trained LLM was used as a fixed feature extractor, with a classification head appended to its end for a specific classification task. The pre-trained LLM provided comprehensive representations, which were then tailored to the task at hand by training the final layers on a relevance grading dataset, while keeping the rest of the model unchanged. This approach allowed for efficient fine-tuning of the LLM. However, despite observing an improvement in precision, the results did not fully meet our expectations. To further improve performance, we also experimented with fully fine-tuning the model, including the addition of a classification head, which involved training the entire model parameters on task-specific data. Unlike the previous approach, which only modified the final layers, full fine-tuning adjusted all model layers during training.
After fine-tuning the model, we evaluated its performance on the test dataset by measuring Accuracy, Precision, Recall, and F1-score, and compared these metrics with other language models, as shown in Table III. Fully fine-tuned Llama-3.2-1B (Configuration A) demonstrated an improvement in Precision, increasing from 0.1331 to 0.1655, although it still lags behind the Precision of Llama-3.2-70B. Fully fine-tuned llama-3.2- 1b with a classification head (Configuration C) achieved a Precision of 0.7750, which is significantly higher than that of llama3.1-8b and GPT4o-mini, but slightly below that of llama3.1-70b.
Fig. 6: Precision of relevance grading on test dataset
The relationship between model complexity and precision is illustrated in Fig. 6, which shows that models with a larger number of parameters generally tend to achieve higher precision on the test dataset. Our full fine-tuned llama-3.2-1b model with a classification head, demonstrated particularly impressive results. Notably, it exceeded the typical performance expectations outlined by the scale’s law, suggesting that our approach can lead to exceptional outcomes. | Retrieval-Augmented Generation (RAG) addresses limitations of large language
models (LLMs) by leveraging a vector database to provide more accurate and
up-to-date information. When a user submits a query, RAG executes a vector
search to find relevant documents, which are then used to generate a response.
However, ensuring the relevance of retrieved documents with a query would be a
big challenge. To address this, a secondary model, known as a relevant grader,
can be served to verify its relevance. To reduce computational requirements of
a relevant grader, a lightweight small language model is preferred. In this
work, we finetuned llama-3.2-1b as a relevant grader and achieved a significant
increase in precision from 0.1301 to 0.7750. Its precision is comparable to
that of llama-3.1-70b. Our code is available at
https://github.com/taeheej/Lightweight-Relevance-Grader-in-RAG. | [
"cs.AI"
] |
# 1 Introduction
Security vulnerabilities are a major concern for the safety and robustness of software systems. Much of the technological infrastructure in today’s world heavily relies on $\mathrm { C / C } { + + }$ projects, and consequently, these projects are critical targets for security vulnerabilities. Vulnerabilities in these projects can have a widespread impact on many downstream systems, making their reliability and robust maintenance of paramount importance [17, 8]. However, existing tools and techniques for detecting security vulnerabilities in $\mathrm { C / C } { + + }$ often fail to address real-world complexity, diverse codebases, and evolving security threats [21]. The rapid adoption of Large Language Models (LLMs) in software engineering has opened new avenues for automating many critical tasks [24, 10]. While LLMs have demonstrated impressive potential in code-related tasks, their effectiveness in tackling real-world $\mathrm { C / C } { + + }$ security vulnerabilities remains underexplored.
As more and more LLMs emerge, a reliable benchmark is crucial to evaluating LLMs’ capability to detect security vulnerabilities in $\mathrm { C / C } { + + }$ projects. Recently, many benchmarks have been proposed for $\mathrm { C / C } { + + }$ , such as $\mathrm { B i g V u l }$ [6], CVEFixes [1], DiverseVul [3], MegaVul [18], PrimeVul [5], etc. Although promising, the existing benchmarks suffer from a few major limitations. First, they lack essential features such as statement-level vulnerability localization, which poses a significant challenge for tasks that require fine-grained analysis, training, or evaluation. Second, some datasets omit crucial details, like bug-fix code pairs, vulnerability types (CWE), and precise CVE metadata. The absence of this information limits researchers’ and developers’ ability to conduct in-depth investigations or build effective repair tools, ultimately hindering advancements in the field. Third, existing datasets frequently include only the vulnerable functions, omitting the broader program context that is essential for accurately identifying and understanding security flaws. This missing context encompasses critical aspects such as data and control dependencies, interprocedural interactions, and environment constraints, all of which play a key role in determining whether a piece of code is truly vulnerable and how the vulnerability manifests. Finally, although some datasets offer line-level labels (e.g., $\mathbf { B i g V u l }$ [6]), simply having added-deleted lines from a commit may not be very useful. Specially, for $\mathrm { C / C + + }$ , some statements can be very long, and it is common practice to break them down into several lines1. Therefore, it is hard to make a meaningful understanding of only the line fragments without seeing the entire statements.
Most of the current vulnerability detection techniques (e.g., [7], [12], [20]) conduct vulnerability detection at a local scope, often focusing on a given function in isolation. These approaches frequently overlook critical contextual information from related codebases, such as variable state returned from an external function, function arguments, execution environment, etc. A recent study [19] has shown through empirical evaluation that most vulnerabilities in $\mathrm { C / C } { + } { + }$ require some level of external context to be correctly identified, such as variables, functions, type definitions, and environmental constraints that affect the function. As a result, neglecting the contextual information of a code snippet hinders these techniques from accurately assessing the presence of vulnerabilities within the code. Their study further reveals that many of the machine learning (ML) techniques that report high scores in vulnerability detection may be learning spurious features instead of the true vulnerability. This underscores the need for a more granular identification of vulnerabilities, along with correct reasoning to determine whether the models can truly spot vulnerabilities.
In this work, we address these limitations by introducing a comprehensive $\mathrm { C / C } { + + }$ vulnerability dataset that provides granular information up to the statement level. We focus on vulnerabilities that have been patched, ensuring that the dataset reflects real-world fixes. For each vulnerability, we gather detailed metadata, including CWE types, corresponding CVE (Common Vulnerabilities and Exposures) IDs and descriptions, commit IDs, commit descriptions, changed files, changed functions, and the modified statements that were deleted or added before and after the patch. We also extracted the contexts related to the vulnerable functions using GPT-4.1 and added them to the dataset.
We adopt the five levels of context defined by Risse et al. [19] to represent the essential context for a vulnerability: Function Arguments, External Functions (functions called from within the target function), Type Execution Declarations (e.g., struct, enum, and other type definitions), Globals (such as global variables and macros), and Execution Environments (e.g., the presence of a specific file in a given path). Our manual analysis of a subset of 100 samples shows that GPT-4.1 can identify the contexts necessary for a given vulnerability in a function with $8 2 . 9 8 \%$ accuracy. The statement-level granularity and contextual information, along with other metadata, enable deeper analysis and a more accurate evaluation of vulnerability detection.
We further evaluate five LLMs, including both open-source models such as Qwen2.5-Coder-32B, Deepseek-Coder-33B, Codestral- $2 2 B$ and proprietary models like GPT-4.1 and Claude-3.7-Sonnet on our dataset to show their ability to detect ${ \mathrm { C } } / { \mathrm { C } } { + } { + }$ vulnerabilities at statement level. Note that, in our experiments, we employ a multi-agent pipeline in which each agent is powered by an LLM. This design is motivated by prior work showing that decomposing complex tasks into smaller, actionable components can enhance LLM performance [16, 22, 23], thereby justifying our choice of a multiagent architecture. Our initial experiments show that state-of-the-art LLMs are still far from being applicable as vulnerability detection tools for $\mathrm { C / C } { + + }$ . The top-performing Claude-3.7-Sonnet model attains only a $2 3 . 8 3 \%$ F1-score, with GPT-4.1 trailing closely.
Table 1: Comparison of SECVULEVAL to widely used $\mathrm { C / C } { + + }$ vulnerability datasets from key aspects, i.e., the number of vulnerable functions, the availability of metadata, the duplication rate, the availability of context information, and the detection level. Duplicate rates marked with \* are reported from [5].
Artifacts. We release the dataset (https://huggingface.co/datasets/arag0rn/SecVulEval) and code (https://github.com/basimbd/SecVulEval) to help other researchers replicate and extend our study.
# 2 Related Work on Vulnerability Datasets for $\mathbf { C / C } \mathbf { + + }$
Zhou et al. [25] provides the Devign dataset, collected to evaluate their Devign detection model. This dataset includes over $1 2 { , } 4 5 7 \mathrm { C / C } { + } +$ vulnerabilities. However, it does not include other metadata such as CWE, CVE, etc. Also, they collect the vulnerable functions with a simple commit search with string matching, which resulted in the inclusion of many inaccurate functions, i.e., up to $20 \%$ according to a manual analysis done by [19] on a random subset. The ReVeal dataset proposed by [2] includes $1 , 6 5 8 \mathrm { C } / \mathrm { C } + +$ vulnerable functions, but only from the Chromium and Debian projects. Chen et al. [3] proposed DiverseVul, a $\mathrm { C / C } { + + }$ vulnerability detection benchmark with 18,945 vulnerabilities (from 797 projects and covering 150 CWEs). They showed that code-specialized models, e.g., CodeT5 and NatGen, surpass graph-based methods but face persistent issues such as high false positives, poor generalization, and limited data scalability, highlighting the need for improved deep learning approaches. Ding et al. [5] introduced PrimeVul, another $\mathrm { C / C } { + + }$ vulnerability benchmark with rigorous de-duplication, chronological splits, and VD-S metrics, exposing code models’ near-random failure despite prior overestimation and underscoring the urgency for innovative detection paradigms. However, all these datasets only include vulnerability annotations at the function level, i.e., whether the function is vulnerable or not. They lack vulnerability information at a more granular level, like the statement level. Statement-level labels are necessary to understand how the vulnerability is caused, which can then be utilized for better training and evaluation of vulnerability detection models.
Fan et al. [6] proposed BigVul, a $\mathrm { C / C } { + + }$ vulnerability dataset derived from open-source GitHub projects containing 3,754 vulnerabilities from 348 projects to support vulnerability detection. This dataset is closer to our work, as it includes line-level labels for vulnerable functions. Bhandari et al. [1] collected a large collection of $5 { , } 3 6 5 \mathrm { C } / \mathrm { C } { + } +$ vulnerabilities from NVD. It includes vulnerabilities at five levels of abstraction, including line-level vulnerability labels and other metadata. However, both datasets heavily suffer from high duplication rates, which risks data leakage in the testing or evaluation of detection models. The SVEN dataset proposed by He et al. [11] also includes line-level vulnerability information, and has accurate labeling as the entire dataset is manually annotated. But, due to this manual process, the dataset only includes 417 vulnerable $\scriptstyle \mathbf { C } / \mathbf { C } + +$ functions, limiting its use cases. Moreover, these three datasets include line-level labels, which may not be useful in many cases. $\mathrm { C / C } { + + }$ is a verbose language, and it is common for a statement to span multiple lines. Therefore, a single line might only be a fragment of a statement, and therefore, by itself, does not carry meaningful information.
Vulnerability Collection Commit Data Collection Filtering CVEs With Commit Id Commit Id git repository non-C files removed × ZVD CVE-1999-0199 multi-commit fix removed C/C++ CVEs CVE-2007-6761 ... Commit Id 肉 Commit files w/o function
No AvPaialtacbhl e? CVE-2024-49859 Commit Description Irrelevant changes Changed Lines ? X and noise Scrape CVE Yes & Commit id Changed Functions Changed Files Data Extraction
No Commit id Yes Extract Changed Statements Available? Vulnerable Fixed 1 Version Version Extract Required Context
In our work, we address these shortcomings and challenges by including vulnerable and nonvulnerable functions along with statement-level vulnerability labels, along with contextual information for the vulnerability. These are accompanied by other metadata for varied analysis, along with rigorous de-duplication and filtering to maintain data quality. Table 1 provides a detailed comparison between our benchmark and previous works.
# 3 Benchmark Construction
In this section, we provide a detailed overview of the different steps and stages in building our benchmark data, i.e., vulnerability collection, commit data collection, noisy data filtering, and contextual information collection. The workflow is illustrated in Figure 1.
# 3.1 Vulnerability Collection
We start by collecting CVEs recorded in the National Vulnerability Database $( \mathrm { N V D } ) ^ { 2 }$ . NVD has a rich collection of vulnerabilities and is regularly updated, making it a standard vulnerability repository. NVD provides detailed metadata for each CVE entry, including descriptions, severity scores (e.g., CVSS), affected products, and references to patches or advisories. However, one limitation of the NVD is that it does not explicitly categorize vulnerabilities by programming language. To focus specifically on $\mathrm { C / C } { + + }$ vulnerabilities, we leverage the project names identified as $\mathrm { C / C } { + + }$ projects in prior vulnerability datasets, such as BigVul [6], CVEFixes [1], and PrimeVul [5], as listed in Table 1. By reusing these curated project names, we ensure that the collected CVEs are associated with $\scriptstyle \mathbf { C } / \mathbf { C } + +$ codebases. We further retrieve CVE records for these projects (called ‘product’ in NVD) where the CVE status is not REJECTED. We also utilize the keyword search feature of the NVD API to search with related keywords $\mathbf { \bar { \rho } } ^ { \circ } ( \mathbf { C } \mathbf { + } \mathbf { + } ^ { \prime }$ , ‘C language’, ‘.cpp’, etc.). These results are then filtered by the file types, and only $\mathrm { C / C } { + + }$ vulnerabilities are kept. To enable the study of actual vulnerabilities, only CVEs with patch-related information are retained. Using the “Patch” tag from the NVD Developers API, CVEs without patch references are discarded. Additionally, to avoid duplication, we only kept the CVEs that had at least one link to a patch commit in their references. However, some of the commit links point to many forked repositories. We discarded such forked commits and only kept commits to the original repo. We ended up with a collection of CVEs, each with a description, associated CWE, fixing commit ID, and other metadata.
# 3.2 Commit Data Collection
Given the commit IDs collected for each CVE, the next step is to collect the commit-related information. We utilize the GitHub REST API to fetch commit details. For each commit, we collect its commit ID, commit message, and the files changed in the commit. We also sanitize the commit descriptions by removing accreditation lines (such as reporter emails, cc emails, etc.) since they do not contain any information related to vulnerable code changes. For each changed file, we extract the changed lines (i.e., added lines and deleted lines) and changed functions, i.e., functions containing the changed lines. For all changed files, lines, and functions, we include two copies: one before the fixing commit (i.e., vulnerable version) and one after the fixing commit (i.e., fixed version).
Table 2: Top 10 projects in SECVULEVAL.
Figure 2: Distribution of five context categories in SECVULEVAL.
# 3.3 Filtering & De-noising
After collecting the commit artifacts, we apply a series of filtering and denoising steps to enhance the quality and reliability of our dataset. The filtering criteria are $\textcircled{1}$ removing non-C files, $\textcircled{2}$ removing multi-commit fix, $\textcircled{3}$ removing commits with no functions changed, and $\textcircled{4}$ using heuristics to remove refactoring, reformatting, etc. Below we describe them in detail.
$\textcircled{1}$ We exclude non- $\mathrm { { C / C + + } }$ files (e.g., .S, .rst, config files) as their changes are often in documentation, macros, or signatures, and are side effects unrelated to the vulnerability. $\textcircled{2}$ We retain only CVEs with single-commit fixes for simplicity, as multi-commit data can be challenging to present effectively to the model. Our manual investigation revealed that, in many cases, multi-commit fixes primarily involve similar changes across multiple files or refactoring of related functions in other files. To maintain clarity and consistency, we discard only 43 CVEs with multiple commits. $\textcircled{3}$ We exclude commit files that do not involve any changes to functions. Some commits may only modify function prototypes, add comments, or update enum values or struct fields3. While these changes may be related to the codebase, they provide minimal insight into the actual vulnerability and instead introduce unnecessary noise. $\textcircled{4}$ Finally, we use heuristics to filter out tangled files in changes and improve the labeling accuracy. Specifically, when a commit changes several functions in a file, all the changes are not necessarily related to the vulnerable code. In fact, in many cases, variable/function renaming or function signature updating is carried out across the whole file, resulting in multiple functions being updated. We filter out tangled changes by retaining only functions (i) solely modified in the file, or (ii) explicitly referenced in the CVE or commit message, avoiding unrelated edits like renaming or formatting.
The final step in the filtering process eliminates any duplicate functions. Duplicates can arise for several reasons in the initial collection process, such as multiple copies of the same function in different files, or a CVE being assigned to multiple CWE types, etc. Duplicate entries are a big problem in vulnerability benchmarks as they can leak data to training and also make the benchmark biased towards excessive duplicates. Previous benchmarks such as DiverseVul [3], BigVul [6], and CVEFixes [1] suffer from $3 . 3 \% - 1 8 . 9 \%$ function duplication problem as shown in [5]. Our filtering process eliminates duplicate functions by mapping each function to an md5 hash as done by [5]. We normalize each function by stripping away all leading/trailing whitespaces, $\mathrm { \Delta ^ { \circ } u } ^ { \mathrm { , } }$ , and $\mathbf { \chi } ^ { \star } \mathbf { \backslash t ^ { \star } }$ . Then we convert the function string to an md5 hash and keep only one copy of a function in the case of a collision. In this way, we ensure that all functions in our dataset are unique, eliminating the data leakage problem.
After filtering, we end up with a collection of 5,867 unique $\mathrm { C / C } { + + }$ vulnerabilities (CVEs) from 707 different projects, distributed over 145 CWE types. Figure 3 shows the number of vulnerable functions from the top-20 CWE types. The benchmark consists of 25,440 functions, including 10,998 vulnerable and 14,442 non-vulnerable functions. The functions range in various sizes from 4 lines to 541 lines (from 2.5 to 97.5 percentile), with a median size of 44 lines per function. The average number of statements changed in each function is around 4 (deleted) and 6 (added). Table 2 shows the overall statistics of the dataset along with the top 10 most vulnerable projects.
# 3.4 Contextual Information Collection
Real-world vulnerabilities are intricate and often result from the interaction of multiple entities. However, previous works do not incorporate this important feature into their datasets. Indeed, it is extremely arduous to manually check each function in the dataset and identify the required contexts. To this end, we harness the code-analyzing ability of LLMs to automatically extract required contexts for vulnerable functions.
We prompt GPT-4.1 with all the available information to identify the context required to understand the vulnerability in this function, and categorize
Figure 3: Number of vulnerable functions in Top-20 CWE types in SECVULEVAL. The $\mathbf { \boldsymbol { x } }$ -axis represents the list of CWE IDs, while the y-axis indicates the number of corresponding samples.
them according to the five definitions. We provide the following information to the model: the vulnerability type (CWE-ID and description), the full function body, the patch (deleted-added lines), the commit message, and the CVE description. Using this detailed information, the model decides which symbols (variables, functions, etc.) are required or help to identify the vulnerability in the given function.
To measure how accurately GPT-4.1 can identify related contexts in a given function, we manually validated 100 randomly selected samples. Ground truth contexts were determined by tracing variables used in vulnerable statements back to their external symbols or verifying their origin within the function, with external symbols being defined as the relevant contextual elements. Heuristics were also used to capture additional potentially influential symbols (e.g., those that affect branching within if or switch-case statements). A prediction was deemed correct if GPT-4.1 identified the ground truth contexts, permitting the inclusion of up to one superfluous symbols within any single category. Samples where no relevant context could be identified from the function (e.g., only hard-coded strings changed in the function) were marked as ‘N/A’ and excluded from the evaluation, resulting in four such cases out of the initial 100 samples. Among the remaining samples with identifiable context, GPT-4.1 achieved an accuracy of $8 2 . 9 8 \pm 9 . 6 8 \%$ (with $9 9 \%$ Confidence Interval). Incorrect predictions resulted primarily from missing one or two required symbols or incorrectly including unrelated ones, particularly when analyzing larger code modifications. Figure 2 shows the distribution of the five context categories in the dataset.
# 4 Experiments
Comprehensive and diverse datasets are vital for vulnerability research. SECVULEVAL offers detailed, statement-level vulnerability annotations, enriched with contextual data and metadata like CWE IDs and CVE descriptions, enabling fine-grained, context-aware analysis. With 707 projects and 145 CWE types, it provides a diverse, fully de-duplicated benchmark ideal for evaluating detection techniques. This section uses SECVULEVAL to evaluate LLMs’ effectiveness in detecting vulnerabilities (Section 4.1) and identifying essential contextual information (Section 4.2).
- ...all previous inputs - ...all previous inputs ...all previous inputs - normalized Func - func summary more - extracted context - if vulnerable or not - final verdict
- Func - func AST - pitfall checklist context? - vulnerable statements - vuln statements 7 Yes No Agreed ↓ ↓
Normalization Agent Planning Agent Context Agent Detection Agent Validation Agent </> 淘 I
- generate AST of the - generate func summary - get what context needed
function - list common pitfalls -coenxtreaxct ts definition for all - dgetevcut inf vstualtne rmaebnltes if yes -Daetg.reAege/ndti.sagree with → Disagreed
# 4.1 Experiment 1: Vulnerability Detection
We investigate the effectiveness of LLMs in detecting security vulnerabilities in $\scriptstyle \mathbf { C } / \mathbf { C } + +$ code. Previous studies have shown that single LLMs perform very poorly on the $\mathrm { C / C } { + + }$ vulnerability detection task, even when doing a function-level binary classification (vulnerable or non-vulnerable) [5]. Therefore, we adopt a multi-agent pipeline for vulnerability detection as illustrated in Figure 4. These LLM-based agents have separate responsibilities and complete the entire task through collaboration. This type of approach has been shown to be more effective than a single LLM by multiple studies [16, 23, 22]. To the best of our knowledge, this is the first time that an LLM-based multi-agent pipeline is applied for the vulnerability detection task.
The pipeline consists of five agents, four powered by LLMs. It starts with the Normalization Agent, which parses the input function into AST form using tree-sitter. This output, along with the normalized function, is passed to the Planning Agent, where an LLM summarizes the function and generates a checklist of potential vulnerabilities. The Context Agent then iteratively queries an LLM to identify required external symbols for vulnerability detection, stopping once context is deemed sufficient or after three attempts. Symbol definitions are extracted via tree-sitter also. The Detection Agent uses all prior inputs to determine if the function is vulnerable, pinpoint vulnerable statements, and provide a rationale. Finally, the Validation Agent evaluates the Detection Agent’s output. If disagreement arises, the Detection Agent reruns (up to three iterations) until both agents agree.
# 4.2 Experiment 2: Context Identification
The second experiment demonstrates the use of our dataset for context identification, i.e., evaluating how effectively LLMs extract the contextual elements required for vulnerability detection. Within the Context Agent, an LLM is prompted to identify relevant symbols, such as function arguments, external calls, and type definitions, that are needed to analyze a target function and determine the presence of vulnerabilities. These identified definitions are then forwarded to the Detection Agent. To assess the accuracy of context extraction, we compare the LLM-generated symbols against the ground-truth dependencies annotated in our dataset.
# 4.3 Evaluation Metrics
In this work, we use Precision, Recall, and $F l$ -Score to measure the vulnerability detection performance of the models. Specifically, vulnerable instances are regarded as positive, and non-vulnerable instances as negative. For statement-level vulnerability detection, we measure a prediction as True Positive if it correctly predicts vulnerable statements along with accurate reasoning. If the model correctly identifies a function as vulnerable, but the reasoning is incorrect, we consider this a True Positive for function-level detection but a False Negative for statement-level detection since it misses the correct vulnerability. Precision measures the likelihood that a prediction is correct when it identifies an instance as positive. Recall, on the other hand, measures the ability to correctly identify all the positive instances, even if the model makes some False Positives. Finally, F1-Score is the harmonic mean of both Precision and Recall.
Table 3: Vulnerability detection performance of LLM-driven agents on Func-level (whether the function is vulnerable or not) and Stat-level (if the model identified vulnerable statements in the function with correct reasoning)
# 4.4 Model Setup
We selected five state-of-the-art LLMs for our evaluation tasks, including open-source and proprietary models. Specifically, we select Deepseek-Coder-33B-Instruct [9], Codestral-22B-v0.1 [15], Qwen2.5- Coder-32B-Instruct [13], GPT-4.1, and Claude-3.7-Sonnet because these models are widely used in the community and have demonstrated high performance in various software engineering tasks. For the open-source models, we have used the weights from HuggingFace.
We report all LLM scores with pass $\ @ 1$ and use temperatur $e = 0 . 1$ for stable outputs as commonly used in the literature [4, 14]. We use pass $@ 1$ as it is more representative of the real scenario where a developer does not have the output reference to validate the attempts. All our experiments were carried out in a system with Intel(R) Xeon(R) Gold 6442Y CPU and a GPU cluster of 4 NVIDIA L40S.
# 5 Result Analysis
# 5.1 Experiment 1: Benchmarking LLMs in Vulnerability Detection
Approach: To find the effectiveness of LLMs in detecting statement-level vulnerabilities, we adopted a multi-agent based approach as described in Section 4.1. We run our evaluation on the output of the Validation Agent. If the model finds any vulnerability, then it shall output each vulnerable statement and its reason as a pair. Otherwise, the model should output an empty list, and ‘is_vulnerable’ field in the output will be ‘false’. Since the outputs include explanations for the vulnerability, it is not possible to automatically evaluate the results. In addition, the statements returned by the model may not always be the exact same statements as the changed statements. For example, sometimes the models output the ‘sink’ statement as vulnerable (where the vulnerability causes crash or other symptoms) instead of the ‘source’ (where the vulnerability is introduced), or sometimes return both statements. Moreover, sometimes the model may return the correct statements, but with incorrect reasoning. Therefore, we randomly selected 300 samples from the Top-25 CWE types for the experiment and manually validated the outputs. We identify an output to be True Positive for statement-level detection if: $\textcircled{1}$ the model outputs the exact vulnerable statements and the correct reasoning, $\textcircled{2}$ if same vulnerability is fixed at multiple places in the function, the model at least returns one such statement with correct reasoning, $\textcircled{3}$ the model returns the vulnerable statements with correct reasoning and at most two unrelated statements, $\textcircled{4}$ the model outputs only the sink statements or both vulnerable and sink statements with correct reasoning. For function-level detection, we automatically match the output of the ‘is_vulnerable’ field with the ground truth from the dataset.
Results: Table 3 shows that statement-level detection performance remains low. The best model, Claude-3.7-Sonnet, achieves only a $2 3 . 8 3 \%$ F1-score and $1 5 . 3 5 \%$ precision, with GPT-4.1 close behind. Open-source models perform worse overall. Closed-source models like Claude and GPT-4.1 adopt a more aggressive detection strategy, reflected in their higher recall (e.g., Claude’s $5 3 . 2 3 \%$ vs. Codestral’s $1 8 . 7 5 \%$ ) but lower precision due to more false positives. This trend is even more pronounced at the function level, where Claude and GPT-4.1 show very high recall $( 7 5 . 6 3 \%$ , $7 3 . 1 1 \%$ ) but still lower precision than open-source models, suggesting a tendency to over-flag functions as vulnerable.
Note that when comparing the performance of these models on function-level and statement-level vulnerability detection in Table 3, we can observe that identifying whether a function is vulnerable and pinpointing the exact vulnerable statements are distinct tasks—the latter being significantly more challenging with lower precision and recall values. Both open-source and proprietary LLMs show significant drops in precision and recall when required to locate vulnerable statements with correct root cause explanations. Unlike function-level detection, statement-level analysis demands fine-grained reasoning about issues like pointer arithmetic and memory bounds, requiring deeper contextual and interprocedural understanding. This raises concerns about the reliability of functionlevel predictions. Risse et al. [19] have shown that models may rely on spurious patterns rather than true vulnerabilities. Our results echo this, as models often fail to find the real vulnerability even when correctly flagging a function. Thus, advancing statement-level detection is essential, as without it, developers risk being misled by false positives or missing subtle but critical flaws.
The models are not effective at detecting vulnerable statements in $\mathrm { C / C } { + + }$ functions, as the best performing Claude-3.7-Sonnet model achieves only $2 3 . 8 3 \%$ F1-score. SECVULEVAL’s diverse set of vulnerabilities uncovers LLMs’ severe lack of ability to find vulnerable statements and their root cause in complex real-world code.
# 5.2 Experiment 2: Benchmarking LLMs in Context Identification
Approach: This experiment evaluates how well LLMs identify the contextual elements needed for vulnerability detection. Within the Context Agent, an LLM is prompted to extract relevant context, such as function arguments, external calls, and type definitions, for analyzing a target function. To evaluate the
Table 4: Accuracy of essential context identification. No models identified any Environment-level context.
performance of LLMs in this task, we focus on vulnerabilities for which the Context Agent determined that additional contextual information was needed to facilitate detection. For these cases, we compare the context extracted by the LLMs against the ground-truth dependencies provided in our dataset.
Results: Results show that LLMs struggle to identify key contextual information needed to understand function-level vulnerabilities. Most models, except Claude-3.7-Sonnet, primarily focus on external functions, which are useful but insufficient. Globals and type definitions, which include macros, constants, and structs, are also critical in $\mathrm { C / C } { + + }$ but often overlooked, limiting the models’ contextual understanding. Claude-3.7-Sonnet performs better in identifying these elements, likely contributing to its higher detection score. All models perform poorly on identifying function arguments, possibly due to focusing on their struct types rather than the variables themselves. None of the models identified any environment-level context. This is understandable as it is very rare ( $1 . 5 \%$ of all contexts), and models are unlikely to catch environmental contexts from the function.
The results reveal the limited capability of current LLMs in identifying relevant contextual information for vulnerability detection. While Claude-3.7-Sonnet demonstrates relatively higher coverage, other models frequently overlook critical context types. | Large Language Models (LLMs) have shown promise in software engineering
tasks, but evaluating their effectiveness in vulnerability detection is
challenging due to the lack of high-quality datasets. Most existing datasets
are limited to function-level labels, ignoring finer-grained vulnerability
patterns and crucial contextual information. Also, poor data quality such as
mislabeling, inconsistent annotations, and duplicates can lead to inflated
performance and weak generalization. Moreover, by including only the functions,
these datasets miss broader program context, like data/control dependencies and
interprocedural interactions, that are essential for accurately understanding
real-world security flaws. Without this context, detection models are evaluated
under unrealistic assumptions.
To address these limitations, this paper introduces SecVulEval, a benchmark
designed to support fine-grained evaluation of LLMs and other detection methods
with rich contextual information. SecVulEval focuses on real-world C/C++
vulnerabilities at the statement level. This granularity enables more precise
evaluation of a model's ability to localize vulnerabilities, beyond simple
binary classification at the function level. By incorporating rich contextual
information, SecVulEval sets a new standard for vulnerability detection
benchmarks in realistic scenarios. This benchmark includes 25,440 function
samples covering 5,867 unique CVEs in C/C++ projects from 1999 to 2024. We
evaluated the SOTA LLMs with a multi-agent-based approach. The evaluation on
our dataset shows that the models are still far from accurately predicting
vulnerable statements in a given function. The best-performing
Claude-3.7-Sonnet model achieves 23.83% F1-score for detecting vulnerable
statements with correct reasoning. Finally, we analyze the LLM outputs and
provide insights into their behavior in vulnerability detection for C/C++. | [
"cs.SE"
] |
# 1 INTRODUCTION
Foundation models have become highly accessible to users thanks to the availability of model hosting platforms such as HuggingFace [59], Ollama [12], and ModelScope [55]. Developers download the pre-trained models hosted on these platforms (e.g., from cloud storage), and then apply them to various tasks such as finetuning [25, 53, 57], distillation [64, 67] and inference [44, 68]. Commonly, different tasks demand different model precisions; for example, fine-tuning is often performed using higher precisions such as FP16 [42], then, the fine-tuned model would be quantized to a lower precision format such as INT8 [13, 39] for faster inference. Hence, many workflows require access to the same model under different precisions (e.g. FP16 and INT8): in addition to fine-tuningthen-inference, other tasks with this requirement include Model Cascade [26, 66] and Model Chaining [22, 58, 61]. Moreover, data scientists and researchers also iterate between different-precision models for testing, experimentation and benchmarking [10, 11].
High-precision model weights (FP16)
27.25 0100111011010000
27.30 0100111011010011 Quantization (rounding, scaling, etc.) Conditional 01 102 01100110 information Common Information (INT8) 10 (FP16|INT8)
Storing Multiple Models is Costly. Currently, a common approach to maintaining multiple models of varying precisions while doing the aforementioned tasks is to store them as is (i.e., separately storing the multiple precision versions) [10, 11]. However, as newer, more complex tasks demand ever-increasing model sizes (e.g., Mistral-7B [38] being sufficient for simple math tasks, while more complex, multi-modal tasks [60] require larger models such as Qwen2.5-VL 32B [21]), the storage cost incurred by storing multiple versions of a model can quickly become prohibitive — for example, 91.8 GB of space is required to store just the BF16 [40] and INT8 (quantized) versions of the Deepseek-Coder [69] 33B parameter model. While this is a significant issue for developers using these models, it also increases the incurred cloud storage cost for model hubs like HuggingFace, Ollama, and ModelScope, since model providers and users end up storing multiple precisions of these models separately on these platforms to account for user accesses to models in different precisions.
One potential approach to reduce storage cost is to only store the highest-precision model (e.g., FP16 or BF16), then quantize inmemory if lower-precision versions (e.g., INT8) are needed [30]. However, retrieving a low-precision model with this approach is inefficient as it requires (i) loading more data than necessary (i.e., the high-precision model) and (ii) a computationally expensive quantization process (e.g., up to 21 GPU minutes for a 13B model [31]). Alternatively, stored models can be compressed with an algorithm such as LZ4 [3], ZSTD [4], or ZipNN [34]. However, these algorithms either utilize generic techniques that underperform on ML model weights (e.g., LZ4 and ZSTD), or are tailored to one specific precision (e.g., ZipNN for FP16/BF16 weights).
Our Intuition. We propose QStore, a data format for efficiently storing varying precision versions of a model. We observe that despite being quantized, a lower-precision (e.g., INT8) version of a model contains significant information that is also present in a higher-precision (e.g., FP16, BF16) version. Hence, compared to separately compressing and storing a pair of higher and lowerprecision models, it is possible to use less space to simultaneously represent both models in a unified format. Fig 1 illustrates this idea: much of the information present in the weights of a highprecision FP16 model is already contained in the low-precision (i.e., quantized) INT8 version. Hence, given an already efficiently stored low-precision model, we can also store the high-precision model using only a few additional bits per weight representing ‘extra information’ not present in the low-precision model (i.e., the ‘FP16 | INT8’ conditional model). Such a unified data format would (1) save storage space versus storing both models separately (regardless of compression), (2) enable faster loading of the lower-precision model versus loading a high-precision model and quantizing it, while (3) still enabling fast loading of the high-precision model.
Challenges. Designing a unified data format for simultaneously and efficiently storing a pair of high and low-precision models is challenging. First, we need to carefully define the ‘extra information’ not present in the lower-precision model required to reconstruct the higher-precision model. Significant information is lost while quantizing a higher-precision model to a lower-precision one (e.g., from operations like rounding), hence, our definition should effectively encapsulate this information gap for lossless reconstruction. Identifying this information gap is nontrivial, as a quantized weight may be significantly different from the original weight in both bit representation and numerical magnitude (Fig 1). Second, our representations of the lower-precision model’s information and ‘extra information’ should strike an acceptable storage/processing speed trade-off: for example, naïvely defining and storing information at a bit-level granularity would enable the most efficient model storage, but can result in unacceptable model loading and saving times.
Our Approach. Our key idea for QStore is to design a generalized compressed representation for conditional information that can work well despite the differences between floats and integers; such a format would allow us to load low and high-precision models, regardless of their data type, with perfect accuracy.
First, for storage, given a high and low-precision model pair’s weights, we separately encode the low-precision model weights and conditional weights (i.e., the ’extra information’) with novel entropy coding and intelligent grouping strategies to enable significantly better compression ratios versus separately compressing the two models using off-the-shelf compression algorithms.
Then, for model loading from QStore, we process the encoded low-precision model’s weights, or additionally the conditional weights, to retrieve the low-precision or high-precision model, respectively. We perform decoding at a byte-level granularity to ensure high decoding speeds on common computing architectures [48]. Our decoding is notably lossless (e.g., versus dequantization [47]).
Contributions. Our contributions are as follows:
(1) Format. We describe how QStore, a data format to efficiently store a high and low-precision model pair. (§3)
(2) Usage. We describe efficient encoding and decoding schemes for storing/loading models to/from QStore. (§4)
(3) Evaluation. We verify on 6 popular foundation models of varying sizes that QStore reduces storage costs of a pair of
high and low-precision models by up to $5 5 \%$ while enabling up to $1 . 6 \times$ and $2 . 2 \times$ faster loading and saving of the model pair, respectively, versus alternative approaches. (§6)
# 2 BACKGROUND
Efficiently storing and deploying large foundation models is challenging. Our work addresses this challenge through proposing a compressed format capable of concurrently storing multiple model representations of different precisions. This section overviews related work on quantization (§2.1) and compression (§2.2).
# 2.1 Quantization
Quantization is commonly applied to models to achieve desired quality-resource consumption tradeoffs. In this section, we overview the pros and cons of common quantization techniques, and key differences between QStore and quantization.
Common Quantization Targets. While 32-bit floating-point (FP32) precision was once standard [46], the recent increases in model sizes and corresponding increases in computational and memory requirements have driven the adoption of lower-precision, quantized model formats. For example, 16-bit precision (FP16 [35], BF16 [7, 40]) formats have become a de-facto standard for training and fine-tuning to balance between accuracy and resource consumption. For more resource-constrained scenarios or latency-sensitive applications (e.g., on-device processing [63]), further quantization is common— typically to 8-bit (INT8) [28, 36], but sometimes more aggressively to 4-bit (INT4, NF4) [29, 30, 43] or even lower [56]. Recently, FP8 quantization has also been used during inference [45].
Quantization Methods. There exists several notable classes of quantization methods commonly applied to foundation models. (1) RTN (round to nearest) rounds weights to the nearest representable value in low-precision format (e.g., $4 2 . 2 5 4 2$ ), which is fast, but can significantly degrade model accuracy (e.g., with outlier weights). (2) Channel-wise quantization such as LLM.int8() [28] and SmoothQuant [62] apply per-channel scaling and quantization to model weights to better preserve outliers. (3) Reconstructionbased approaches such as AWQ [43] and GPTQ [30] are also applied on a per-channel or per-block level, but they aim to quantize in a fashion such that the original high-precision weights can be reconstructed with minimal error. While these methods are capable of quantizing to very low precisions such as INT4 and INT3, they incur higher computational overhead versus alternatives.
Quantization methods operate at a per-block level, since it allows them to be efficient, permitting parallelization over multiple threads (including GPUs), and requiring less metadata compared to quantizing every element separately. We will later show how this nature allows our approach to be generally extendable (§4.2).
QStore vs Lossy Quantization. Quantization is inherently a lossy transformation aimed at reducing model complexity. In comparison, our approach for model storage via QStore is orthogonal, since it takes the quantized and unquantized models as input, and subsequently performs lossless compression to store them efficiently into a unified format. While we focus on storing a pair of models at two specific precision levels (16-bit FP16 and BF16, 8-bit INT8) in this paper, our approach does not assume any specific closed form for the quantization method that is used; hence, our techniques can be generalized to other quantization levels (e.g., INT4, or other custom levels). We briefly describe how this can be done in $\ S 7$ .
# 2.2 Data compression
Model hosting platforms (e.g., HuggingFace [59]) store foundation models in wrapper formats such as Safetensors [9, 24], ONNX [6], TensorFlow, and SavedModel[8] that allow transparent storage of additional information such as tensor names and quantization information along with the model weights. However, these formats store weights in an uncompressed fashion. Another approach orthogonal to quantization that has been explored to reduce model sizes (for storage) is compression. We discuss the pros and cons of various compression techniques applicable to foundation models.
Generic Compression Algorithms. Standard compressors such as GZip [1], ZSTD [4], LZ4 [3] can be applied to model weights. These approaches treat (the sequence of) weights as a generic byte stream and are agnostic to specific structural and numerical properties of the model weights. ALP [20] targets general floating point numbers, but only supports 32-bit and 64-bit floats, so their method cannot be directly applied to 16-bit models. Generic methods do not achieve optimal compression ratios on model weights due to their high entropy (e.g. the mantissa bits of floats [34]) rendering common techniques such as dictionary coding [52] ineffective.
Compression for ML Models. Recently, some approaches have been proposed for specifically compressing ML models: ZipNN [34] compresses BF16 weights by reordering the 16-bit float into 2 byte streams, and compressing each stream separately with Huffman coding. Additionally, they propose numerical delta storage to store multiple perturbed versions (e.g., after fine-tuning) of the same base model at the same precision. NeuZip [33] uses lossy compression to speed up inference by quantizing mantissa bits, and applying lossless compression to exponent bits with an entropy coder to speed up training. Huf-LLM [65] uses hardware-aware huffman compression, breaking the 16-bit value into non-standard bit-level patterns and compressing these streams separately for fast inference.
QStore (ours): Joint Compression. Unlike existing compression methods, QStore targets the joint compression of a quantized and unquantized pair of models, and achieves higher compression ratios versus compressing them separately (empirically verified in $\ S 6$ ). Additionally, QStore runs purely on CPU, and does not depend on the availability of specific architectures (e.g., systolic arrays, TPUs/NPUs) required by some of the aforementioned methods.
# 3 QSTORE OVERVIEW
This section presents the QStore pipeline. QStore is a format that efficiently stores a pair of high and low-precision models: first, the model pair is compressed using an encoder into the unified QStore format. Then, a decoder is applied onto the QStore files to losslessly retrieve the high or low-precision model (or both).
QStore Input. QStore’s encoding takes in the weights of the high and low-precision model versions ( $\dot { \boldsymbol { w } }$ and $Q ( w )$ , respectively) as input. QStore does not impose restrictions on the input format; our approach can work within any format implementation as long
Input Our unified Outputs compression High- format (QStore) High
precision precision Model Encoder low-precision Decoder Model (§4.2) model weights (§4.4) Low- Low
precision conditional precision Model weights Model
as it stores tensors separately (e.g., safetensors [9], PyTorch pickle objects [2], TensorFlow SavedModel [8], etc. are acceptable).
Encoding. QStore’s compression process utilizes an encoder to encode the weights of the models: the encoder first compresses the weights of the low-precision model, then compresses the conditional information present in the high-precision model but not in the low-precision model (i.e., ‘extra information’, $\ S 1$ ). We describe QStore’s encoding in detail in $\ S 4 . 2$ .
Format. The unified QStore format, generated by encoding the input model pair, consists of two files: the compressed low-precision weights and the compressed conditional information (§4.3).
Decoding. QStore’s decompression process utilizes a decoder to act on the two files contained within QStore to reconstruct either the low or high-precision model (or both): If the user requests the low-precision model, the decoder is invoked on the compressed quantized model weights to reconstruct it. If (additionally) the highprecision model is requested, the decoder is invoked on the newly decompressed low-precision model weights and the compressed conditional information to reconstruct the high-precision model. We describe QStore’s decoding in $\ S 4 . 4$ .
# 4 QSTORE: UNIFIED FORMAT
This section details the QStore format and its encoding and decoding algorithms. We describe our intuition to encode conditional information in $\ S 4 . 1$ , the encoding of a model pair into the QStore format in $\ S 4 . 2$ , the QStore format itself in $\ S 4 . 3$ , and decoding to obtain the original high or low-precision weights (or both) in $\ S 4 . 4$ .
# 4.1 Key Intuition
This section describes our intuition for compressing conditional information present in the high-precision model but not in the lowprecision model. Without loss of generality, we will be describing QStore’s operations with a FP16/BF16 and INT8 model pair.
Conditional Information. Given a high and low-precision model pair, it is possible to derive the low-precision model from the highprecision model (e.g., via quantization). Hence, all information present in the low-precision model is contained within the high precision model. Given the weights of the high-precision model 𝑊 and a quantization function $\boldsymbol { Q }$ that maps it to the corresponding quantized weights, we can model the information in the model pair:
$$
H ( W ) = H ( Q ( W ) ) + H ( W | Q ( W ) )
$$
Figure 3: Weighted entropy of different grouping strategies on the Llama 3.1 8B Instruct model’s 16-bit weights. QStore’s combined grouping achieves high entropy reduction (hence compression ratio) versus alternative grouping strategies.
QStore aims to find an efficient bit-level representation corresponding to $H ( Q ( W ) ) + H ( W | Q ( W ) )$ in Eq. (1). Notably, the representation of the conditional data $W \vert Q ( W )$ must be lossless regardless of the quantization function $\boldsymbol { Q }$ used, which QStore will not know in advance (i.e., prior to compression). In particular, given floating point $W$ and quantized $Q ( W )$ , the key challenge is in finding overlapping bit-level patterns in dynamic-precision floating point data that is informed by the corresponding quantized data, which the remainder of this section will aim to address.
Grouping by Quantized Weight. Most common recent quantization schemes use a combination of scaling (e.g., normalizing weights into a range) and rounding (§2.1). Given such quantization schemes, we observe that two floats that quantize to the same value (with the same quantization function, described shortly) can be expected to have more overlapping bits compared to two randomly selected floats, such as those that quantize to different values (Fig 3). Higher bit-level overlap between floats is directly correlated with compressibility (e.g., via entropy coding schemes); hence, QStore groups the high-precision (floats) weights by quantized value during encoding.
Grouping by Quantization Function. Recent popular quantization schemes apply multiple independent quantization functions to a single tensor and perform block-wise quantization (§2.1). For example, LLM.int8() [28] uses a different scaling factor to quantize each tensor row (e.g., $\begin{array} { r } { Q _ { r o w = i } ( w _ { i } ) = r o u n d ( \frac { 1 2 8 w _ { i } } { s _ { i } } ) } \end{array}$ , where $s _ { i }$ is the scaling factor for row 𝑖). The quantization function is often chosen w.r.t. the 16-bit weights; a common choice is $s _ { i } = a b s ( m a x ( w _ { i } ) )$ , the magnitude of the largest/smallest weight in group $i$ [28, 43]. Hence, the conditional information of a group of floating point weights w.r.t. their quantized integer weights $H ( W | Q ( W ) )$ will change as $Q ( W )$ changes. While grouping floats by the quantization function applied alone achieves negligible entropy reduction (due to the intra-group float distributions still being largely random), we observe that a combined grouping of the quantization function applied, and the quantized weight value achieves significant compression benefits (e.g., versus grouping only by one of the two criteria, or randomly grouping with the same number of groups, Fig 3).
# 4.2 Encoding to QStore
This section describes how a high and low-precision model pair is encoded into the QStore format. As described in $\ S 3$ , QStore’s encoder compresses the low-precision model and the high-precision model’s conditional information w.r.t. low-precision model (§4.1).
Encoding Quantized Weights. QStore’s encoder utilizes an entropy coding scheme to compress the (quantized) weights of the low-precision model $Q ( w )$ . It follows zstd’s approach [4] to divide $Q ( w )$ into sequential, fixed-size chunks, on which per-chunk Huffman compression is applied for up to $12 \%$ size reduction (§6.2).
Encoding Conditional Information. QStore’s encoder computes the conditional information using weights of both the high and low-precision model (𝑤 and $Q ( w )$ , respectively) as input. Following intuition described in $\ S 4 . 1$ , the weights of the high-precision model $w$ are first grouped according to the applied quantization function (e.g., for LLM.int8() [28] each group will consist of all tensors with the same applied scale value). Then, weights in each group are further divided into subgroups of weights quantizing to the same value. Figure 4 depicts an example: rows $w _ { 1 } , w _ { 3 }$ , and $w _ { 2 }$ are quantized with distinct scale values (32 and 16, respectively), hence their weights are placed into group 1 $\begin{array} { r } { { \bf \nabla } ^ { \prime } s _ { 1 } = s _ { 3 } = 3 2 , } \end{array}$ ) and group 2 $\begin{array} { r } { { \bf \sigma } _ { S 2 } = 1 6 \mathbf { \sigma } , } \end{array}$ . In group 1, $w _ { 1 1 } , w _ { 1 3 } , w _ { 3 2 }$ , and $\boldsymbol { w _ { 3 3 } }$ quantize to the same value (yellow) and are placed in one subgroup; $w _ { 1 2 }$ and $\displaystyle w _ { 3 2 }$ quantize to another value (blue) and are placed in another subgroup.
Per-subgroup compression. Similar to how we compress the lowprecision quantized weights, QStore’s conditional encoder then compresses conditional information using Huffman compression on a per-subgroup basis. If a chunk is not compressible enough (e.g., due to high entropy, or very few unique values in a subgroup), QStore skips encoding and stores that chunk uncompressed.
Remark. The combined size of QStore’s compressed quantized weights and conditional information is much lower than the original uncompressed size of both models; in fact, QStore’s size is close to only compressing the high-precision model (e.g., via ZipNN, $\ S 6 . 2 \AA$ ; however, QStore additionally allows the low-precision model to be directly retrieved without requiring in-memory quantization.
# 4.3 QStore Format
This section describes how QStore stores an encoded high and lowprecision model pair. Each compressed QStore model pair consists of two files—the compressed quantized weights and conditional information, both stored in a columnar format.
Compressed Quantized Weights. QStore stores the compressed quantized weights of the low-precision model alongside a header storing relevant metadata—number of chunks, tensor dimensions, and per-chunk metadata of (1) whether compression was applied and (2) compressed and uncompressed chunk sizes.
Compressed Conditional Information. QStore stores the conditional information following group (i.e., applied quantization function), then subgroup (i.e., post-quantization value) order. It maintains a header, which stores (1) the mapping from groups to their positions in the original model (e.g., row number), and within each group, (2) per-subgroup data (i.e., whether compression was applied, and chunk sizes, similar to the quantized weights). Notably, despite QStore also reordering the weights in each group based on subgroups, it does not store the mapping of weight positions within each sub-group (row): this is because the information is already present in the quantized weights, e.g. $\boldsymbol { w } _ { 1 3 }$ assigned to group
High-precision Model Quant. Scale Grouping Quant. Value Subgrouping QStore Row Quant. scale Weights Group 1 (𝑠 = 32.0) Group 1 (𝑠 = 32.0) 𝑤1 𝑠1 = 32.0 0.26 1.3 0.29 0.26 1.3 0.29 Subgroup 1 (𝑄 𝑤 = 1) Compress 1.2 0.27 0.28 0.26 0.29 0.27 0.28 Compressed 𝑤2 𝑠2 = 16.0 5.55 5.45 0.14 Subgroup 2 (𝑄 (𝑤) = 5) conditional 𝑤3 𝑠3 = 32.0 1.2 0.27 0.28 Group 2 (𝑠 = 16.0) 1.3 1.2 information 5.55 5.45 0.14 Group 2 (𝑠 = 16.0)
Quantize Low-precision Model Subgroup 1 (𝑄 (𝑤) = 44)
𝑄1(2𝑤8 𝑖𝑤) Row Weights 5.55 5.45 Compressed 𝑠𝑖 𝑄 (𝑤1) 1 5 1 Subgroup 2 (𝑄 (𝑤) = 1) low0.14 precision 𝑄 (𝑤2) 44 44 1 model Compress weights 𝑄 (𝑤3) 5 1
1, subgroup 1 in Fig 4 can be inferred to be the third element in row $w _ { 1 }$ based on the corresponding quantized weights in $Q _ { 1 } ( w _ { 1 } )$ .
# 4.4 Decoding from QStore
This section covers how a model pair stored with QStore can be losslessly decoded to retrieve the high and/or low-precision models.
Retrieving the Low-Precision Model. The model’s quantized weights are encoded to QStore with per-chunk Huffman compression into a file (§4.2). Hence, directly loading the compressed quantized weights from QStore, and applying per-chunk huffman decompression allows the low-precision model to be retrieved losslessly.
Retrieving the High-Precision Model. As QStore stores the encoded conditional information for the high-precision model w.r.t. the low-precision model, it requires the low-precision model to be retrieved first following the procedure described above. Then, QStore’s decoder first decompresses the conditional information, then applies the decompressed information onto the low-precision model weights to retrieve correct per-group weight ordering $( \ S 4 . 3$ . Finally, QStore uses the stored group-to-row mappings to losslessly reconstruct the high-precision model’s weight tensor.
Remark. QStore’s decoding process for retrieving the high or low-precision model is faster than loading the respective model uncompressed, and comparable to loading the respective model (separately) compressed using an off-the-shelf algorithm (e.g. LZ4). However, as QStore jointly stores the model pair, QStore’s approach achieves significant time savings for loading the low-precision model versus the common practice of loading the unquantized model, then quantizing it in memory (§6.4).
# 5 IMPLEMENTATION
Choice of Encoding Scheme. Our implementation of QStore uses the FiniteStateEntropy library’s near-state-of-the-art Huffman encoding Huff0 [5]. However, other entropy-based encoding schemes can be used instead, such as the FiniteStateEntropy coder from the same library or non-Huffman methods. (e.g., arithmetic coding [15])
Efficient Decode Pipelining. For efficiency, we implement QStore’s per-tensor decoding for model loading (§4.4) in a pipelined manner, where one tensor’s decompression overlaps with the next tensor’s read. However, other parallelization strategies can be used in its place [49, 51], such as completely parallelizing both the reading and decompression of tensors, which may bring larger benefits on specific hardware (e.g., local SSD [23, 50]).
Lazy Model Loading. As QStore’s encoding and decoding of model pairs operate independently on each tensor, it can be naturally extended to support lazy loading (e.g., similar to Safetensors [9]). In this situation we would not apply decode pipelining, and only read and decompress tensors when required; we defer detailed performance optimization and engineering to future work.
# 6 EVALUATION
In this section, we empirically study the effectiveness of QStore’s quantization-aware model storage. We make the following claims:
(1) Effective Compression: QStore achieves up to $2 . 2 \times$ compression ratio $45 \%$ of the original size) for storing a high and low-precision model pair—up to $1 . 6 \times$ better than the next best method. (§6.2)
(2) Fast Storage: A model pair can be stored with QStore up to $2 . 8 \times$ faster than uncompressed storage, and $1 . 7 \times$ faster versus alternative storage and/or compression methods applied separately on the two models (§6.3).
(3) Fast Retrieval: A model pair stored in the QStore format can be loaded up to $1 . 8 \times$ faster versus alternative formats. Specifically, the low-precision model can be loaded from QStore up to $2 . 5 \times$ faster versus loading and quantizing the high-precision model in-memory (§6.4).
# Deeper Performance analysis of QStore (Ours)
(1) Effectiveness Under Constrained Bandwidth: QStore’s effective model compression and storage enables up to $2 . 2 \times$ faster model loading times versus loading uncompressed models under I/O-constrained scenarios (§6.5).
Table 1: Summary of models used for evaluation.
(2) Effective Encoding of Conditional Information: QStore efficiently compresses conditional information—despite being necessary for reconstructing the high-precision model from the low-precision model, its comprises only up to $3 6 . 2 \%$ of the total QStore file size (§6.6).
# 6.1 Experimental Setup
Dataset. We select 6 popular foundation models across various modalities, domains, and languages for comprehensive evaluation, which we further divide into 3 ‘small’ $\mathrm { \hbar } ^ { \prime } { < } 2 0 \mathrm { B }$ parameters) and 3 ‘large’ $( \geq 2 0 \mathrm { B }$ parameters) models. For each model, we create a high and low-precision model pair consisting of the (1) original BF16 model and (2) quantized INT8 model (via LLM.int8() [28]) weights. We summarize models and their characteristics in Table 1.
Methods. We evaluate QStore against existing tools and methods capable of storing the high and low-precision model pairs:
Safetensors [9]: The default uncompressed model storage format of HuggingFace’s transformers library [59]. We use its Python API [16, 18]. lz4 [3]: We use the default compression level of 1. Zstd [4]: We use a compression level of 2. • ZipNN [34]: A Huffman-based compression algorithm that targets compression of 16-bit model weights. Since it cannot compress 8-bit weights, in order to compare the storage cost of both precisions, we use ZipNN for high precision and the best alternative baseline (Zstd) for low precision.
We implement all the methods to sequentially process each tensor to and from a single file for both model saving and loading. Tensor read/write and decompression/compression is pipelined (where applicable) to overlap I/O and compute (§5).
Environment. We use an Azure Standard E80is (Intel(R) Xeon Platinum 8272CL, 64-bit, little-endian) VM instance with 504GB RAM. We read and write (compressed) model data to and from local SSD for all methods. The disk read and write speeds are $1 . 5 \ : \mathrm { G B } / s$ and $2 5 6 . 2 \mathrm { M B } / s$ , respectively,1 with read latency of $7 . 4 9 \mathrm { m s }$ .2
Time Measurements. We measure (1) save time as the time taken to compress and store a model onto storage, and (2) load time as the time taken to read and decompress the selected model (high or low-precision) from storage into memory. We force full data writing (via sync [14]) and reading during model saving and loading. We perform data reading and writing with a single thread and compression/decompression with 48 threads for all methods. The OS cache is cleared between consecutive experiment runs.
Table 2: Average bits per weight to store each model pair.
Reproducibility. Our implementation of QStore and experiment scripts can be found in our Github repository.
# 6.2 QStore Saves Model Storage Cost
This section studies QStore’s model storage cost savings. We store model pairs to disk with each method, and compare the resulting on-disk file sizes of QStore versus alternative methods in Fig 5.
QStore’s file size is consistently the smallest, and is up to $2 . 2 \times$ and $1 . 6 \times$ smaller versus Safetensors (uncompressed) and next best compression method, respectively. As hypothesized in $\ S 2 . 2$ , Zstd and $\mathsf { I } z 4$ achieve suboptimal compression ratios due to the traditional compression techniques they utilize being ineffective on model tensor data—When Zstd is used along with ZipNN (Fig 5, the size decreases slightly, but is still $1 . 6 \times$ bigger than our model pair. $\mathsf { I } z 4$ achieves no benefits compared to the uncompressed storage. QStore’s high compression ratio translates to significant $( 5 2 \% - 5 5 \% )$ 1 space savings across model sizes (Fig 5b): storing the Deepseek Coder’s model pair with QStore takes only 42GB versus the 92GB of storing the models as is without compression.
Savings Versus Storing Only High-Precision Model. We additionally compare QStore’s storage cost versus storing only the high precision model (BF16) with baselines in Fig 6. Notably, QStore’s storage cost for the entire model pair is still up to $3 3 \%$ smaller than storing only the high precision model without compression, up to $1 3 \%$ smaller versus general compression algorithms (Zstd), and is comparable to (only up to $7 \%$ greater) the specialized ZipNN method designed for 16-bit models.
# 6.3 QStore Enables Faster Model Storage
This section investigates QStore’s time for storing model pairs. We measure the time taken for storing a model pair from memory into storage with the QStore format versus alternative methods.
We report results in Fig 7. QStore’s model pair storing time is up to $1 . 7 \times$ and $2 . 8 \times$ faster compared to the next best compression scheme and non-compression method, respectively. Notably, given each model pair, uncompressed methods need to write 24 $( 1 6 + 8 )$ bits per model weight to disk, whereas QStore significantly reduces this number to 10.7-11.5 (Table 2), which is also smaller than the 19.1-19.6 bits incurred by separately compressing both models with Zstd. Expectedly, QStore’s number of incurred bits is in alignment with QStore’s high compression ratio (Fig 5).
Figure 5: QStore’s storage cost for storing a high and low precision model pair versus baselines. QStore achieves up to $2 . 2 \times$ spac savings versus storing the models uncompressed with file sizes up to $1 . 6 \times$ smaller than the next best alternative.
Figure 6: QStore’s model pair storage cost versus only storing the high-precision model with baselines. QStore’s size is up to $\mathbf { 1 . 5 \times }$ smaller versus no compression and is comparable to storing with ZipNN (only up to $5 \%$ larger).
# 6.4 QStore Saves Model Load Time
We investigate QStore’s time savings for loading a model pair. We store the model pair using each method, then measure the time taken for loading one or both models from storage into memory.
We report results for loading a high-precision model, a lowprecision model, and both models in Fig 8, Fig 10, and Fig 9, respectively. QStore loads the high-precision model up to $1 . 4 \times$ faster versus loading it without compression (Safetensors), and exhibits comparable loading times versus loading it with a specialized compression algorithm $( \pm 5 \% ,$ ZipNN). QStore loads the low-precision model with comparable time $( \pm 5 \% )$ versus loading it with (Zstd) or without compression (Safetensors).
Time savings for Simulataneous Model Access. Notably, QStore saves significant time in cases where simultaneous access to both models (e.g., model cascade and chaining $\ S 1$ or interactive computing [41]) is required; it loads the model pair up to $2 . 2 \times$ and $1 . 8 \times$ faster versus separately loading the two models stored without compression (Safetensors) or with an applicable compression algorithm (Zstd), respectively; this is because the size of QStore’s model pair being significantly smaller than that incurred by separately storing the two models with alternative approaches (§6.2).
# 6.5 High Savings on Constrained Bandwidths
This section studies the effect of I/O bandwidth on QStore’s time savings. We perform a parameter sweep on bandwidth from SSD by throttling with systemd-run [19] (verified using iostat [17]) and measure the time to load a model pair stored with QStore vs uncompressed storage (Safetensors) at various bandwidths (Fig 11).
While QStore is faster than uncompressed loading at all bandwidths, the speedup increases from $1 . 7 { \times } \left( 5 0 0 \mathrm { M B } / s \right)$ to $2 . 1 \times$ and $2 . 2 \times$ in the lowest bandwidth settings $( 2 0 \mathrm { M B } / s )$ for the small Llama 3.1 model and large Qwen $2 . 5 \mathrm { V L }$ model, respectively. Notably, the absolute time saving of QStore versus uncompressed is 2483 seconds for loading the Qwen $2 . 5 \mathrm { V L }$ model at $2 0 M \mathrm { B } / s$ ; this significantly improves user experience with models in the common scenario where models are downloaded from cloud storage with limited network bandwidth (typical speeds of $3 0 \mathrm { M B / s }$ [34], grey vertical lines in Fig 11).
# 6.6 Effective Conditional Information Storage
This section studies the effectiveness of QStore’s compression of conditional information. We store the model pair using QStore, and measure the space taken by the low-precision weights and conditional information, respectively (results in Fig 12). QStore’s compressed conditional information only takes up to $3 9 \%$ of the total size, and accordingly contributes only up to $4 0 \%$ of the model pair loading time across all 6 models. This shows the effectiveness of QStore’s conditional encoding in reducing storage and load time redundancies incurred by the typical approach of users storing and using both the high and low-precision models as is (§1).
# 7 DISCUSSION
Compatibility with other Quantization Methods and Datatypes. While we present our entropy analysis (Fig 3) and experiments (§6) for one of the default quantization methods on HuggingFace, LLM.int8() [28], (i.e. a FP16/BF16-INT8 model pair), QStore is compatible with other quantization schemes and datatypes (e.g., integertyped low-precision models). This is because QStore does not use specific values of the high or low-precision models and directly applies byte-level entropy coding for storage (§4.2); only the ordering of weights in each group (present in the low-precision model), along with the stored conditional information are required to losslessly reconstruct the high-precision model (§4.4), and both are datatypeagnostic. Hence, QStore can be trivially extended to support other datatypes (e.g., FP16-FP8 or FP32-BF16 model pairs).
Data Compressibility. QStore’s compression ratios may differ based on the datatype of the high-precision model. For example, given a low-precision INT8 model, and a choice of either BF16 or FP16 for the high-precision model, the conditional information of BF16|INT8 compresses slightly better $( \sim 2 \% )$ compared to FP16|INT8. This is because two floats in the same group quantizing to the same value are likely to overlap in their significant (exponent) bits. The first byte of BF16 has 7 exponent bits, vs 5 exponent and 2 mantissa bits of FP16; hence, two BF16 floats quantizing to the same value enables more effective compression versus 2 FP16 values.
Storing more than Two Models. Fundamentally, QStore relies on using conditional information to simultaneously store model pairs (§4.1). Hence, QStore’s approach can be extended to store more than two precisions, for instance, a three-level FP32-BF16-INT8
Figure 7: QStore’s encoding time for saving a model pair versus baselines. QStore enables up to $2 . 8 \times$ faster model saving versu storing the models uncompressed, and is up to $1 . 7 \times$ faster than storing the models with an applicable compression algorithm
Figure 8: QStore’s decoding time for loading the high precision model versus baselines. QStore allows up to $1 . 4 \times$ faster loading compared to safetensors, and is comparable $( \pm 5 \% )$ to loading the models with a specialized compression algorithm (ZipNN).
Figure 9: QStore’s decoding time (secs) to load both the high and low precision model pair versus baselines. QStore is up to $2 . 2 \times$ faster compared to loading uncompressed models, and up to $1 . 8 \times$ faster than applicable compression baselines.
Figure 10: QStore’s decoding time (secs) to load only the lowprecision model is comparable $( \pm 5 \% )$ to other baselines.
Figure 11: QStore’s decoding time (secs) versus read bandwidth for two selected models. QStore’s smaller incurred storage size saves loading time by $2 . 2 \times$ at lower bandwidths.
Safetensors QStore (Ours) 1000 5000
? 800 Cloud storage 4000 Cloud storage
download speed download speed 600
s 2400 12000 0 0 0 100 200 300 400 500 0 100 200 300 400 500 600 Read bandwidth (MB/s) I/O bandwidth (MB/s) (a) Llama 3.1 8B (b) Qwen 2.5 VL 32B
Figure 12: QStore’s storage cost and loading time breakdown for the INT8 and conditional (BF16 | INT8) encodings. Less than $4 0 \%$ of QStore’s size is from the conditional encoding.
model chain: First, QStore would store the largest FP32 model as a BF16 model and a FP32 | BF16 conditional encoding $E _ { 1 }$ , then decompose the BF16 model into the INT8 model and BF16 | INT8 conditional encoding $E _ { 2 }$ . Hence, the final compressed QStore would be $\{ w _ { I N T 8 } , E _ { 2 } ( w _ { B F 1 6 } | w _ { I N T 8 } ) , E _ { 1 } ( w _ { F P 3 2 } | w _ { B F 1 6 } ) \}$ . As mentioned in $\ S 1$ , this extension would especially benefit model storage hubs like HuggingFace [59] which can store multiple quantized representations of the same model for anticipated user access with significantly lower storage cost versus separately storing the precisions. | Modern applications commonly leverage large, multi-modal foundation models.
These applications often feature complex workflows that demand the storage and
usage of similar models in multiple precisions. A straightforward approach is
to maintain a separate file for each model precision (e.g., INT8, BF16), which
is indeed the approach taken by many model providers such as HuggingFace and
Ollama. However, this approach incurs excessive storage costs since a higher
precision model (e.g., BF16) is a strict superset of a lower precision model
(e.g., INT8) in terms of information. Unfortunately, simply maintaining only
the higher-precision model and requiring every user to dynamically convert the
model precision is not desirable because every user of lower precision models
must pay the cost for model download and precision conversion.
In this paper, we present QStore, a unified, lossless compression format for
simultaneously storing a model in two (high and low) precisions efficiently.
Instead of storing low-precision and high-precision models separately, QStore
stores low-precision model and only the residual information needed to
reconstruct high-precision models. The size of residual information is
significantly smaller than the original high-precision models, thus achieving
high savings in storage cost. Moreover, QStore does not compromise the speed of
model loading. The low-precision models can be loaded quickly just like before.
The high-precision models can also be reconstructed efficiently in memory by
merging low-precision data and the residual with QStore's lightweight decoding
logic. We evaluate QStore for compressing multiple precisions of popular
foundation models, and show that QStore reduces overall storage footprint by up
to 2.2x (45% of the original size) while enabling up to 1.7x and 1.8x faster
model saving and loading versus existing approaches. | [
"cs.DB"
] |
# I. INTRODUCTION
H sOigLnOifGicRanAtPpHotIeCntdiiaslpilnaythsearaeugwmideenltyedreagnadrdveird uas hreaavliintyg (AR/VR) field due to the rich depth cues they can provide [1], [2]. Computer-generated holograms (CGH) is a method of generating holograms by simulating diffraction models in computer [3] rather than through real optical recordings and reconstruction. Spatial light modulator (SLM) is a type of device to load these holograms for reconstruction in real life. SLMs are primarily categorized into two types: amplitude-only and phase-only. Phase-only Hologram (POH) is the dominant encoding method due to its high diffraction efficiency [4]. To obtain a POH, methods are generally divided into noniterative and iterative approaches. Non-iterative methods, such as double phase-amplitude coding (DPAC) [5], process the data once to generate a POH. In contrast, iterative methods, including Gerchberg-Saxton (GS) [6], Wirtinger Holography (WH) [7], and stochastic gradient descent (SGD) [8], [9], can yield higher-quality reconstructed images but require extensive computation, often iterating hundreds or thousands of times.
Recently, learning-based methods [10], [11] in CGH have garnered considerable attention for their speed and high reconstruction quality. These methods can integrate the physical wave propagation model in free space, such as the angular spectrum method (ASM) [12], into the neural network framework, making image reconstruction both efficient and accurate. Notable frameworks like HoloNet [8] and CCNNCGH [13] demonstrate the capability to generate high-quality holograms in real time. Both frameworks utilize two networks: the first network, the phase predictor, takes the target amplitude as input to predict the phase on the target plane. This predicted phase, combined with the target amplitude, forms a complex amplitude, which is then processed using forward ASM to obtain the SLM field distribution, serving as input for the second network. This second network, the hologram encoder, generates the hologram and utilizes a backward ASM to reconstruct field, then computed the loss between the reconstructed and target amplitudes, facilitating backpropagation to update network parameters. However, this framework often need a pair or more networks to generate holograms, requires more memory to store. An alternative framework, Holo-encoder [14], directly generates holograms by inputting target amplitudes into a single network. While this approach simplifies and accelerates the generation process, it typically results in poorer image quality due to its reliance solely on amplitude information. Furthermore, several studies have modified these networks [15] to improve outcomes by incorporating Fourier transforms [16], wavelet transforms [17], and compensation networks [18]. However, such modifications often complicate the model and demand greater computational resources and inference time.
The diffraction process is inherently global, meaning that each pixel on the hologram can affect the image on the reconstruction plane. For neural networks, this necessitates a larger effective receptive field (ERF) to achieve better global information extraction capabilities. In traditional convolutional neural networks (CNNs), utilizing larger convolutional kernels and increasing network depth are two feasible approaches to enhance the receptive field. However, these methods significantly increase the number of network parameters and substantially prolong inference time, making it challenging to develop a real-time, lightweight hologram generation network.
In this paper, we propose a straightforward yet effective framework for generating POH using a deformable convolutional neural network (DeNet) to increase the flexibility of ERF that achieves superior reconstruction quality and fast inference speed compared to almost existing open-source networks. Our approach employs the complex amplitude obtained after ASM of the target amplitude as the input for our CNN, which is a complex-valued CNN based on the U-Net architecture. Although we are not the first to utilize this convolutional structure, our method for hologram generation distinguishes itself from prior complex-valued approaches. To capitalize on the benefits of deformable convolution, we designed complexvalued deformable convolution in the form of complex amplitudes, allowing the model to more effectively capture both local details and global phase interactions, thereby enhancing performance in hologram reconstruction. Our simulation and optical experiment results indicate that our model achieves a peak signal-to-noise ratio (PSNR) that is 2.04 dB, 5.31 dB, and 9.71 dB higher than those of CCNN-CGH, HoloNet, and Holo-encoder, respectively, at a resolution of $1 9 2 0 \times 1 0 7 2$ . Additionally, our model demonstrates a comparably fast inference speed and has a parameter count approximately oneeighth that of CCNN-CGH, effectively minimizes storage and computational requirements.
# II. RELATED WORK
Holography was first proposed by Dennis Gabor in 1948 [19]. Research on holographic displays has been going on for decades, and we review the works of CGH in this section.
# A. Holographic display
Holographic displays are able to reproduce the entire continuous light field of a given scene through SLM regulation of incident light. This capability enables them to provide all depth cues, making them highly promising for future applications in AR [20], [21], VR [22], [23] and head-up display [24] applications. Typically, dynamic holographic displays [3], [25] generally employ SLMs, like phase-only liquid crystal on silicon (LCoS) devices [26], in conjunction with CGH algorithms.
# B. Computer-generated hologram
The concept of CGH was first proposed by Lohmann et al [27]. Creating an optical hologram necessitates that the object be real, allowing the object light wave and the reference light wave to coherently superimpose on the holographic plane. This requirement makes traditional holography unsuitable for virtual objects. In contrast, CGH only requires the object light wave distribution function to generate the hologram. Additionally, CGH is less susceptible to external influences and allows for easier and more precise reproduction.
Numerous generated-CGH methods have emerged in recent years. In 2015, Zhao et al [28]. introduced a CGH algorithm based on the angular spectrum method, which effectively reduces computational load while maintaining image quality. Additionally, models such as Kirchhoff and Fresnel diffraction [29] are widely used for numerically propagating wave fields. In the optimization of 3D holograms, while point-cloud [30] and polygon-based [31] sampling strategies exist, the processes predominantly segment the object wave into layers [32]. A traditional approach to optimizing 3D holography relies on wavefront superposition. All these methods aim to facilitate the rapid generation of 3D holograms. Moreover, there are iterative techniques focused on quality enhancement, such as the improved GS method proposed by Liu et al [33], and the multi-depth SGD method introduced by Chen et al [34].
# C. CGH based on deep learning
CNNs have been widely employed in the real-time generation of holograms due to their ability to efficiently handle complex computations. Peng et al. introduced a method called HoloNet [8], which incorporates aberrations and light source intensity into the network’s learning process. This approach aims to mitigate the impact of optical equipment mismatches on experimental results, although it does not fully account for all errors. In contrast, Choi et al. [35] proposed CNNpropCNN, which uses captured images to train the neural network to simulate physical errors, thereby addressing a broader range of mismatches during hologram generation. For 3D hologram generation, Liang [11] used RGB-D data as input and developed a network capable of photorealistic reconstruction, effectively simulating defocus effects. Yan et al. [36] utilized a fully convolutional neural network to generating multi-depth 3D holograms, which can generate multi-depth 3D holograms with a resolution of $2 1 6 0 \times 3 8 4 0$ . Additionally, Choi et al. [37] employed time-multiplexing techniques to achieve impressive defocus effects with various input data types, such as focal stacks and light fields.
Regarding real-time capabilities, Zhong et al. [13] utilized complex-valued convolutions to achieve fast and high-quality holograms, which utilizes a complex-valued convolutional neural network. This model significantly reduces the number of parameters while achieving the fastest generation speed. Meanwhile, Wei et al. [38] introduced self-attention mechanism into model, achieve a high perceptive index. Qin et al. [39] employed a complex-valued generative adversarial network to generate holograms. Although the quality of these holograms surpasses that of CCNN-CGH, both the number of parameter and processing time remain substantial.
Unlike previous methods, our approach does not rely on a phase prediction network based on complex-valued networks. Instead, we utilize the complex-valued field propagated by ASM as input. Within our network, we incorporate deformable convolution, which addresses the limitations of traditional convolutional receptive fields found in earlier networks, thereby enhancing feature extraction capabilities.
# III. MODEL FRAMEWORK
# A. Deformable Convolution
The traditional convolution operation involves dividing the feature map into segments that match the size of the convolution kernel, and then performing the convolution on each segment, with each segment occupying a fixed position on the feature map. However, for objects with more complex deformations, this approach may not yield optimal results. We can define the relationship between input feature $x$ and output feature $y$ with equation below [40],
$$
y ( p _ { 0 } ) = \sum _ { p _ { n } \in R } w ( p _ { n } ) \cdot x ( p _ { 0 } + p _ { n } ) ,
$$
here, $R$ is a regular grid used to sample, the total of sampled values each multiplied by the weight $w$ , and $p _ { n }$ enumerates the locations in $R$ .
In deformable convolution, shown in Fig. 1, offsets are introduced into the receptive field, and these offsets are learnable. This allows the receptive field to adapt to the actual shape of objects rather than being constrained to a rigid square. Consequently, the convolutional region consistently covers the area around the object’s shape, enabling effective feature extraction regardless of the object’s deformation.
$$
y ( p _ { 0 } ) = \sum _ { p _ { n } \in { \cal R } } w ( p _ { n } ) \cdot x ( p _ { 0 } + p _ { n } + \Delta p _ { n } ) ,
$$
here, $\Delta p _ { n }$ is the offsets at the $n$ -th position. To enhance the capability of deformable convolution in controlling spatial support regions, which is introduced a modulation mechanism [41].
$$
y ( p _ { 0 } ) = \sum _ { p _ { n } \in { \cal R } } w ( p _ { n } ) \cdot x ( p _ { 0 } + p _ { n } + \Delta p _ { n } ) \cdot \Delta m _ { n } ,
$$
$\Delta m _ { n }$ is the modulation scalar at the $n$ -th position, which is also learnable parameter. The range of modulation scalar is from 0 to 1, which is limited by sigmoid function.
# B. Architecture
The framework of our model generated POH is shown in Fig. 2. At first, the amplitude of the target image is propagated forward using the ASM to obtain the complex amplitude in the SLM plane. Complex amplitude serves as the input for our network, allowing it to capture both amplitude and phase information. Subsequently, the POH generated by the model is propagated backward through backward ASM to reconstruct the amplitude. ASM can be expressed in equation below:
$$
\begin{array} { r } { u ( \phi ) = \mathcal { F } ^ { - 1 } \{ \mathcal { F } \{ u ^ { i \phi } \} H ( f _ { x } , f _ { y } ) \} } \\ { H ( f _ { x } , f _ { y } ) = \left\{ \begin{array} { l l } { \mathrm { e } ^ { i \frac { 2 \pi } { \lambda } z \sqrt { 1 - ( \lambda f _ { x } ) ^ { 2 } - ( \lambda f _ { y } ) ^ { 2 } } } , } & { \mathrm { i f ~ } \sqrt { f _ { x } ^ { 2 } + f _ { y } ^ { 2 } } < \frac { 1 } { \lambda } , } \\ { 0 , } & { \mathrm { o t h e r w i s e } } \end{array} \right. } \end{array}
$$
here, $u ^ { i \phi }$ is the optical field distribution, $\lambda$ is the wavelength, $z$ is the distance between SLM plane and target plane, $f _ { x }$ and $f _ { y }$ is the spatial frequencies, $\mathcal { F }$ means the Fourier transform.
The Mean Squared Error loss function $( \mathcal { L } _ { M S E } )$ is employed to evaluate the discrepancy between the reconstructed amplitude and its original counterpart, facilitating updates to the model parameters accordingly. Incorporating Total Variation loss $( \mathcal { L } _ { T V } )$ can lead to smoother phase in the hologram.
$$
\mathcal { L } = \mathcal { L } _ { M S E } ( | u ( \phi ) | , a _ { \mathrm { t a r g e t } } ) + \alpha \mathcal { L } _ { T V } ( \phi )
$$
Fig. 1. The illustration of $3 \times 3$ deformable convolution.
Fig. 2. The framework of proposed model generated POH.
Fig. 3. The architecture of proposed network.
Input Output ↓ ↑ → Conv ReLU atan2 SC ReLU ConvT ConvT
Down Up1 DeConv ↑ ↑ ↑ ReLU SC: Skip Connection
Down Up 1 CDoenCvoTn:v:CDonefvoTrrmaanbslpeosCeonvolution
$$
\mathcal { L } _ { T V } ( \phi ) = \frac { \displaystyle \sum _ { i , j } ( ( \phi _ { i , j - 1 } - \phi _ { i , j } ) ^ { 2 } + ( \phi _ { i + 1 , j } - \phi _ { i , j } ) ^ { 2 } ) } { ( M - 1 ) ( N - 1 ) }
$$
here, $| u ( \phi ) |$ means the reconstructed amplitude and atarget denotes the target amplitude. $\alpha$ is a weighting coefficient. During the optimization process, if $\alpha$ is too large, it can result in poor quality of the reconstructed image, while if it is too small, it may not significantly affect the smoothness of the phase. Therefore, $\alpha$ is set to $0 . 1 \times 0 . 1 ^ { e p o c h }$ in this paper. $M$ $N$ is the resolution of input field.
Fig. 3 shows the detailed network architecture, which is based on U-Net. Our network utilizes complex-valued fields as inputs, rather than concatenating them together, thereby enabling feature extraction and processing to be conducted in a complex-valued format. The downsampling layer comprises a standard convolution that reduces the size of the feature map by half, along with a deformable convolution that permits the network to dynamically adjust the position of the convolution kernel during the operation. This approach significantly enhances feature representation. Both activation functions after convolution are ReLU. Upsampling layer merely employs a single deconvolutional layer to restore the feature map to the original size of the input amplitude, the activation function of the first upsampling layer is also ReLU, but that of the second layer is arctangent function, which limits the range of generated POH. And SC represents a simple addition operation.
Fig. 4. The loss curves of train (top) and validation (bottom).
# IV. MODEL TRAIN AND VALIDATION
In order to validate the effectiveness of proposed model, all algorithms were implemented in Python 3.9 using the PyTorch 2.1.1 framework on a Linux workstation equipped with an AMD EPYC 7543 CPU and an NVIDIA GeForce RTX 3090 GPU. Models were trained for 20 epochs with a batch size of 1 and a learning rate of 0.001 on the DIV2K [42] training set, and performance was assessed on both the DIV2K and the Flickr2K validation dataset. Holograms were generated at a resolution of $1 9 2 0 \times 1 0 7 2$ pixels; SLM had an $8 ~ \mu \mathrm { { m } }$ pixel pitch. Optical parameters were fixed at a laser wavelength of $6 7 1 ~ \mathrm { { n m } }$ and a propagation distance of $2 0 0 ~ \mathrm { { m m } }$ .
Fig. 4 illustrates the loss curves for both the training and validation datasets, representing the average loss values. The results indicate that our model achieves better convergence in fewer training epochs compared to the others. Specifically, our model, along with the holo-encoder and CCNN-CGH, requires approximately 35 minutes to train for 20 epochs, whereas the HoloNet takes about 50 minutes for the same epochs.
Fig. 5 illustrates the differences in holograms generated using different loss functions. Under certain initial conditions, using only MSE can result in numerous phase discontinuities. While these discontinuities may not significantly affect simulation results, they can greatly impact the quality of reconstructed images in optical experiments. By introducing TV loss, the phase continuity of the hologram is significantly improved, effectively reducing the impact of these discontinuities on optical experiments.
As illustrated in Fig. 6, we have visualized the ERF of three models. Fig. 6 (a) represents the CCNN-CGH, which exhibits a smaller ERF compared to our model. To eliminate the impact of network depth and to validate the efficacy of deformable convolution, we replaced the deformable convolutions with a four-layer complex-valued convolution, thereby forming a fivelayer complex-valued convolution as the downsampling layer. The resulting ERF is depicted in Fig. 6 (b), it remains smaller than that of our model, which is shown in Fig. 6 (c).
Since the validation set of DIV2K contains only 100 images, we conducted inference on the Flickr2K dataset to evaluate the generalization performance of our model. Table I present the results of numerical simulations of various models when the resolution is $1 9 2 0 \times 1 0 7 2$ . To quantitatively assess reconstruction quality, we employ PSNR, Structural Similarity Index (SSIM), and floating-point operations (FLOPs) as evaluation metrics. All reported values represent the averages across the dataset and we calculate parameters and FLOPs using the thop Python package. Our model achieves superior results, with a PSNR of 33.50 dB and $3 3 . 5 3 ~ \mathrm { d B }$ , an SSIM of 0.921 and 0.928 on DIV2K/Flickr2K, separately. In comparison, the metrics for CCNN-CGH are 2.04 dB, 1.81 dB and 0.077, 0.065 lower than those of our model, while the performance of HoloNet and holo-encoder is comparatively weaker on both datasets. The inference speed of our model has a comparable performance compared with that of CCNN-CGH, but the reconstruction quality we achieve is significantly higher. Additionally, our model FLOPs is the lowest among all models. Fig. 7 presents the simulated reconstructed images. The Holo-encoder performs poorly in reconstructing complex images, resulting in significant blurring. In contrast, both HoloNet and CCNNCGH are capable of reconstructing images with greater clarity, although some noise is still present. Our model, however, achieves the best quality by reconstructing clear images with minimal noise.
Commercially available SLMs come in various pixel pitches, including $3 . 7 4 \ \mu \mathrm { m }$ and $6 . 4 ~ \mu \mathrm { m }$ . To validate the effectiveness of our model, Table II presents the results of numerical simulations of various pixel pitches at a resolution of $1 9 2 0 \times 1 0 7 2$ . Holo-encoder and HoloNet remain the two lowest-performing models. Our model still achieves the best simulation results, with a PSNR of 33.15 dB, 33.45 dB and an SSIM of 0.899, 0.910. This represents a significant improvement compared to the CCNN-CGH model, which shows a lower PSNR of 1 dB, 1.78 dB and an SSIM of 0.036,
Fig. 5. The holograms visualization using different loss functions.
Fig. 6. The visualization of effective receptive field. (a) CCNN-CGH; (b) Four layers complex-valued convolution; (c) Our deformable convolution.
TABLE IV PERFORMANCE IN DIFFERENT CHANNEL NUMBERS AND KERNEL SIZES
TABLE I PERFORMANCE IN $1 9 2 0 \times 1 0 7 2$ RESOLUTION CGH GENERATION ON DIV2K/FLICKR2K
TABLE II PERFORMANCE IN $1 9 2 0 \times 1 0 7 2$ RESOLUTION CGH GENERATION IN DIFFERENT PIXEL PITCHES
TABLE III SIMULATION PERFORMANCE OF ABLATION STUDY
# V. ABLATION STUDY
We conducted an ablation study using different models to evaluate performance. Our model demonstrated perfect overall performance across almost assessed metrics, thus validating its effectiveness. As shown in Table III, the evaluated models include those where the second model of HoloNet (RC) or CCNN-CGH (CC) is replaced with our proposed model. Additionally, to maintain a comparable number of parameters, we substituted the deformable convolution with four layers of complex-valued convolution (ND). We also move the first forward ASM (NA) to valid the effectiveness of complex amplitude as input rather than only amplitude.
After integrating our model into existing networks, all models exhibited improved performance, indicating that the use of deformable convolution enhances the quality of the reconstructed images. Specifically, the PSNR of RC reached 34.52 dB, which is 3.06 dB higher than that of CCNN-CGH. However, it has a significantly larger number of parameters and the longest inference time. Furthermore, NA shows the lowest reconstructive quality, which validates the effectiveness of using complex amplitude as input. Overall, our model strikes an optimal balance between quality and computational efficiency.
Table IV highlights how performance varies with different initial channel numbers and kernel sizes. The model achieves its highest PSNR of 33.71 dB and SSIM of 0.925 when configured with 10 channels and a kernel size of 3. However, this setup also leads to an increase in the number of parameters and longer inference times. Furthermore, when varying the kernel sizes for deformable convolution while keeping the number of channels fixed at 8, the best results are obtained with a kernel size of 3, whereas the poorest performance is observed with a kernel size of 5.
Fig. 7. Numerical simulation results of all evaluated methods in $1 9 2 0 \times 1 0 7 2$ resolution.
Fig. 8. The setup of holographic display. OA: Optical Attenuator, BE: Beam Expander, P: Polarizer, BS: Beam Splitter.
# VI. OPTICAL EXPERIMENT
Our holographic display setup is shown in Fig. 8. Coherent light is generated by a laser, passed a optical attenuator (OA) and beam expander (BE), then collimated using a lens. A beam splitter (BS) is employed to modify the optical path. The POH is uploaded to the SLM, which reflects and modulates the incoming light. To filter out higher diffraction orders from the holographic reconstruction, a 4f system is used, consisting of two lenses and a filter. The resolution of the phase-type SLM (FSLM-2K70-P03) used is $1 9 2 0 \times 1 0 8 0$ , and the pixel pitch of it is $8 ~ \mu \mathrm { m }$ . Other parameters is the same as those of the numerical simulation.
The results of the optical experiment are presented in Fig. 9. It is clear that the Holo-encoder performs significantly worse than the other models, as it fails to reconstruct detailed information effectively. While HoloNet offers more details compared to the Holo-encoder, it introduces blurring, leading to less clear images. Among the three comparison models, CCNN-CGH shows the highest quality, but suffers from stray light and noise issues. In contrast, our model delivers more consistent reconstruction quality than CCNN-CGH, especially in terms of preserving details. | Holographic displays have significant potential in virtual reality and
augmented reality owing to their ability to provide all the depth cues. Deep
learning-based methods play an important role in computer-generated holograms
(CGH). During the diffraction process, each pixel exerts an influence on the
reconstructed image. However, previous works face challenges in capturing
sufficient information to accurately model this process, primarily due to the
inadequacy of their effective receptive field (ERF). Here, we designed
complex-valued deformable convolution for integration into network, enabling
dynamic adjustment of the convolution kernel's shape to increase flexibility of
ERF for better feature extraction. This approach allows us to utilize a single
model while achieving state-of-the-art performance in both simulated and
optical experiment reconstructions, surpassing existing open-source models.
Specifically, our method has a peak signal-to-noise ratio that is 2.04 dB, 5.31
dB, and 9.71 dB higher than that of CCNN-CGH, HoloNet, and Holo-encoder,
respectively, when the resolution is 1920$\times$1072. The number of parameters
of our model is only about one-eighth of that of CCNN-CGH. | [
"physics.optics",
"cs.CV"
] |
# 1 Introduction
One of the most frequently applied interventions in the intensive care unit (ICU) is invasive mechanical ventilation (MV) [3], and its critical role became even more evident during the COVID-19 pandemic, which saw a surge ICU admissions, prolonged mechanical ventilation needs, and early intubation.
However, MV is also associated with an increased risk of organ damage, particularly ventilator-induced lung injury (VILI) [35]. To prevent VILI, clinical guidelines recommend the limitation of tidal volumes, respiratory rate and inspiratory pressures. However, these protocols only provide a general guidance, leaving the actual choice of ventilator setting to the clinical judgment and expertise of healthcare providers. Furthermore, it was shown that protocols of protective MV are poorly followed worldwide [3]. MV also demands a high nurse-to-patient ratio, leading to suboptimal recovery and prolonged ICU stays in times of high workload [31].
AI-based decision support systems (AI-DSS) can address these challenges by providing personalized MV treatment recommendations that reduce the risk of VILI while enhancing accessibility. Offline Reinforcement Learning (RL) algorithms can leverage ICU datasets to learn interventions that optimize MV settings, ensuring both immediate patient stability and improved long-term outcomes. Previous retrospective studies (e.g., [17, 25]) have demonstrated the potential of applying offline RL to develop AI-DSS for MV.
In this project, we focus on developing IntelliLung using Offline RL, an AI-DSS for MV in the ICU. This initiative involves 15 medical and technical partners from Europe. This includes close collaboration with domain experts and clinicians in identifying relevant cohorts, problem formulation, selection of state, actions and rewards, and evaluation to make it practically applicable. This paper addresses several critical technical challenges from previous studies and highlights important concerns. Our contributions are as follows:
C1. Previous methodologies focused on optimizing MV using sparse rewards based on mortality. However, previous medical studies [40, 34] indicate that just mortality can be a poor endpoint for evaluating MV interventions. We introduce a reward based on ventilator-free days (VFD) and physiological parameter ranges. The results show that this approach better aligns with the medical objective of reducing VILI while balancing the contributions of both factors.
C2. Previous studies often restrict the number of discrete actions because the action space grows exponentially with the number of dimensions. We show a simple approach to reduce the action space and combine it with optimization from prior research [38]. It enables an increase in the number of actions while enhancing safety.
C3. MV has both continuous and discrete settings (actions). To avoid the pitfalls of discretizing continuous actions, we demonstrate how to adapt SOTA offline RL algorithms namely IQL and EDAC, while enabling them to operate directly on hybrid action spaces.
C4. Previous methods simplify continuous actions by discretizing them. However, during inference, these discrete outputs are converted back into continuous values using fixed rules or by clinicians selecting a value from the predicted bin based on their expertise. Our experiments show that this reconstruction introduces a distribution shift, potentially causing the learned policy to operate in regions where predictions are highly uncertain.
# 2 Related Work
In previous studies involving MV and RL, some have focused on binary decisions, such as whether to initiate MV [22], while others have addressed complex tasks including sedation and weaning strategies [29, 42] and focused on optimizing MV for ICU patients [25, 17, 23, 32, 43].
Existing approaches for optimizing MV parameters using offline RL either discretize the actions or purely use continuous actions. Studies that consider discretization [25, 17, 23, 32] restrict the range of interventions due to the exponential growth of the action space, while studies based on continuous actions [43] omit categorical actions. For discrete action spaces, we address the limitations by constraining the action space to the dataset distribution and employing a factored action space. Although this method supports higherdimensional actions, our experiments reveal that reconstructing continuous actions from discrete representations introduces distribution shifts and potentially unsafe policies, an issue that previous studies have not addressed. We introduce hybrid actions for two of the SOTA offline RL algorithms (IQL and EDAC), enabling them to address these issues while capturing the full range of MV settings. [5] also uses a hybrid action space for optimizing MV settings. However, they adapt an off-policy RL algorithm that lacks the safety regularization of offline methods, potentially leading to unsafe policies due to overestimation. Additionally, they modify Soft Actor-Critic (SAC) for hybrid actions using the Gumbel-Softmax reparameterization trick to allow gradient flow. However, since the exact distribution is available for the discrete component, computing its expectation directly considerably reduces variance in policy updates [6].
Most prior studies mentioned above that focus on optimizing MV have primarily relied on mortality-based rewards, either sparse or shaped. However, medical studies (see Section 4.1) indicate that mortality is not a reliable indicator of MV treatment quality [2]. Instead, we adopt VFDs as a reward, reflecting that patients who spend less time on MV and avoid mortality received better care. Additionally, we add rewards to maintain MV-related physiological vitals within safe ranges and prevent complications.
A similar setup is used by [17], where the Apache-II score function is employed as an intermediate (range) reward, and augmented by a terminal reward based on mortality. However, combining an intermediate reward with a terminal reward is non-trivial. Depending on the terminal reward’s scale, the contributions at different time-steps in an episode can become skewed, either with an overwhelming influence from the terminal reward or an excessive emphasis on the range reward. We demonstrate that combining VFDs with range-based rewards avoids this while accounting for mortality in a better way.
# 3 Theoretical Background
Offline RL. The offline RL problem is formulated as an MDP where $\{ \mathcal { S } , \mathcal { A } , P , R , \gamma \}$ represents the state space, action space, transition distribution, reward distribution, and discount factor, respectively. The initial state is sampled as $s _ { 0 } ~ \sim ~ d _ { 0 }$ . RL training then optimizes a policy $\pi ( a | s ) : { \mathcal { S } } \to \Delta ( { \mathcal { A } } )$ , guided by $\mathrm { Q }$ -values defined as $\begin{array} { r } { Q ^ { \pi } ( s , a ) = \operatorname { \mathbb { E } } \left[ \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } R ( \cdot | s _ { t } , a _ { t } ) \right] } \end{array}$ . RL uses the Bellman error [37] to update the Q-function. In contrast to online RL, offline RL learns from a fixed dataset $\mathcal { D }$ and can suffer from overestimation due to out-of-distribution (OOD) actions. Therefore, a regularization term is often added to the standard Bellman error to mitigate this overestimation, e.g.:
Conservative Q-Learning (CQL). The CQL loss function minimizes Q-values alongside the standard Bellman objective [20], effectively lower bounding Q-values for unseen state-action pairs to prevent overestimation.
Implicit Q-Learning (IQL). The IQL objective [18] learns highperforming actions by computing action advantages via the expectile of the state value function, thereby updating the policy without querying $\mathrm { \Delta Q }$ -values for unseen actions.
Ensemble Diversified Actor Critic (EDAC). The EDAC [1] use an ensemble of critics $( Q ^ { \pi } )$ to estimate the uncertainty of a given state-action pair, effectively lower bounding Q-values for uncertain pairs to prevent overestimation. Additionally, it incorporates a diversity loss among ensemble members to promote varied Q-value estimates, improving uncertainty estimation.
# 4 Clinically-Guided Reward Design (C1)
This section explains the reward design process, including the medical aspects and studies that form its foundation.
# 4.1 Medical Context
Mortality is influenced by various factors, including the underlying disease and comorbidities. It was therefore proposed that the number of ventilator-free days within the first month after the start of mechanical ventilation (MV) be used to assess the quality of MV. This measure combines mortality within the first month with the duration of mechanical ventilation, and is directly linked to the quality of ventilator settings [33]. Ventilator-free days were used as the main outcome measure in several large clinical trials investigating the effects of MV. The MV strategies that shorten the ventilation time of patients who ultimately survive not only increase the number of survivors but may also reduce the cost of medical care. Accordingly, in collaboration with clinicians, we defined two main objectives to guide the experiments:
Primary Objective. The primary objective is to reduce the duration of mechanical ventilation (MV). Prolonged MV increases the risk of complications such as ventilator-induced lung injury, infection [41, 36], hypotension, and diaphragm dysfunction due to disuse atrophy [27]. These complications can hinder successful weaning and are associated with increased mortality. Moreover, effective MV strategies may provide significant clinical benefit even in the absence of mortality reduction, if they facilitate earlier liberation from the ventilator [33].
Secondary Objective. The secondary objective is to limit physiological impairments due to MV. Oxygenation levels (e.g., SpO2, PaO2) and vital signs (e.g., blood pH, mean arterial pressure (MAP), heart rate) must remain within safe ranges to prevent adverse outcomes. For example $\mathrm { P a O } 2$ demonstrated a U-shaped association with mortality [4], while dangerously in- or decreased blood pH-values are closely linked to organ failure and increased mortality [19]. In collaboration with experienced physicians, we have identified key physiological parameters and their safe ranges to guide decisionmaking.
# 4.2 Reward Design
The total reward at each step is the sum of the range reward $r _ { r a n g e }$ , VFD reward $r _ { v f d }$ , and time penalty $\dot { \mathbf { \zeta } } _ { t p } \mathbf { : } r = r _ { r a n g e } + r _ { t p } + r _ { v f d }$
# 4.2.1 Range Reward
The $r _ { r a n g e }$ guides the agent toward learning the secondary objective. It is calculated as follows:
$$
r _ { r a n g e } = \frac { \sum _ { i = 1 } ^ { N } w _ { i } \cdot \mathbf { 1 } _ { [ a _ { i } , b _ { i } ] } ( p _ { i } ) } { \sum _ { i = 1 } ^ { N } w _ { i } } , \quad r _ { r a n g e } \in [ 0 , 1 ]
$$
where $N$ is the total number of physiological parameters, $p _ { i }$ is the value of the $i$ -th parameter, $w _ { i }$ is its assigned weight, $\mathbf { 1 } _ { [ a _ { i } , b _ { i } ] } ( p _ { i } )$ is the indicator function that activates when $p _ { i }$ is within $[ a _ { i } , b _ { i } ]$ , its defined safe range. The physiological parameters, their safe ranges, and weights are listed in Table 1.
Table 1. Variables considered for range reward along with there safe ranges and the weight given
# 4.2.2 Time Penalty
$\boldsymbol { r } _ { t p }$ is added to encourage the policy to favor actions that lead to shorter episodes for similar states, aligning with the primary objective and to discourage the agent from prolonging episodes solely to accrue step rewards within safe ranges. As the episode length increases, the agent accumulates more $\boldsymbol { r } _ { t p }$ . The penalty is defined as $r _ { t p } = - m a x \{ r _ { r a n g e } : r _ { r a n g e } \in [ 0 , 1 ] \} \quad \Rightarrow \quad r _ { t p } = - 1 .$ .
# 4.2.3 Ventilator Free Days
VFDs [33] are commonly used in clinical trials to evaluate interventions. They can be incorporated into the reward function as a measure of how effective MV was during an episode in achieving the primary objective.
VFDs can be calculated as the difference between the maximum days threshold $\Delta t _ { m a x }$ (usually 28 or 30 days [33]) and the days spent on mechanical ventilation $\Delta t _ { m v }$ , as follows:
$$
V F D = \left\{ \begin{array} { l l } { \Delta t _ { m a x } - \Delta t _ { m v } , } & { \mathrm { i f } \Delta t _ { m a x } \geq \Delta t _ { m v } } \\ { \Delta t _ { r e } - \Delta t _ { m v } , } & { \mathrm { i f } \mathrm { p a t i e n t r e i n t u b a t e d } \mathrm { b e f o r e } \Delta t _ { m a x } } \\ { \Delta t _ { d e a t h } - \Delta t _ { m v } , } & { \mathrm { i f } \mathrm { p a t i e n t d i e s } \mathrm { b e f o r e } \Delta t _ { m a x } } \\ { 0 } & { \mathrm { o t h e r w i s e } } \end{array} \right.
$$
where $\Delta t _ { r e }$ is the time of reintubation and $\Delta t _ { d e a t h }$ is the time of death, both measured in days since the start of ventilation. The term $\Delta t _ { r e } \mathrm { ~ - ~ } \Delta t _ { m v }$ penalizes suboptimal trajectories that result in reintubation, guiding the policy to recognize that shorter trajectories are not always preferable, even when accounting for time penalty reward $\boldsymbol { r } _ { t p }$ . There are different definitions of VFD: while some authors assigns a score of zero to patients who die between getting off the ventilator and day 28, we opted to assign these patients the number of days between extubation and death [33]). This achieves a better discrimination between patients who die and are liberated from the ventilator $\mathrm { ( V F D > 0 ) }$ and those who die during MV (VFD=0). The resulting $r _ { v f d }$ is calculated as follows:
$$
r _ { v f d } = w _ { v f d } \cdot { \frac { V F D } { \Delta t _ { m a x } } }
$$
where $\boldsymbol { w _ { v f d } }$ is a hyper-parameter that controls the contribution of the $r _ { v f d }$ . We examine two ways how the $r _ { v f d }$ can be applied for each episode: Option 1) we apply $r _ { v f d }$ at the terminal time step and 0 otherwise or Option 2) apply at each time step. For our experiments, we applied $r _ { v f d }$ at each time step as it allows to balance with other the range reward (see Section 8.2.1).
# 5 Discrete Action Optimizations (C2)
Discretizing multidimensional continuous actions creates a large combinatorial action space $\mathcal { A }$ , where the number of distinct actions is given by $\left| { \mathcal { A } } \right| = \left| { \mathcal { A } } _ { 1 } \right| \times \left| { \mathcal { A } } _ { 2 } \right| \times \cdot \cdot \cdot \times \left| { \mathcal { A } } _ { k } \right|$ . Here, $k = 6$ represents the number of MV settings, and $| \mathcal { A } _ { i } |$ denotes the number of bins for action dimension $i$ .
Using clinician-defined bins for each MV setting (see supplementary material C), this results in $| \mathcal { A } | = 2 6 , 8 8 0$ distinct actions. However, this large action space introduces challenges, including increased computational complexity and Q-value overestimation for rarely observed action combinations. To address these issues, the rest of this section introduces optimizations in this regard.
# 5.1 Restrict Action Space
The action space is restricted to only the distinct action combinations present in the dataset, as shown in Fig. 1. This reduction results in $\vert \mathcal { A } _ { r } \vert = 1 8 7 0$ actions, just $6 . 9 \%$ of $| { \mathcal { A } } |$ . Beyond efficiency, this constraint eliminates unsafe action combinations, as they do not appear in the dataset because clinicians avoid them in practice. For example, setting both $V _ { T }$ and $F i O _ { 2 }$ too low could cause severe hypoxia, leading to organ damage or worse. RL algorithms cannot estimate the effect (Q-values) of actions absent from the dataset, and may overestimate and still select them at inference time despite offline-RL regularizations. By removing these unseen actions from the policy’s action space, we completely avoid the risk of choosing them.
# 5.2 Linear Decomposition of the Q-function
Even with $\boldsymbol { \mathcal { A } } _ { r }$ , the critic requires estimating value of 1870 action combinations. In offline settings with limited data coverage, where actions may be underrepresented, this increases variance and results in poor Q-value estimates. The study [38] demonstrates that leveraging factored action spaces allows a linear decomposition of the Q-function. In the case of MV, actions are discretized and naturally factored. This significantly speeds up training because of smaller network size, improves sample efficiency, and achieves a favorable biasvariance trade-off, while improving the policy. Fig. 2 illustrates the implementation details. Also, Section F.3 of the supplementary materials provides the code implementation.
Figure 1. This example illustrates a 3-dimensional discrete action space, where each dimension has 2 possible values, and each distinct action is represented as $a = [ a _ { 0 } , a _ { 1 } , a _ { 2 } ]$ . The action space is constrained to include only combinations present in the dataset (shown in green), excluding all other combinations (shown in red).
$$
\begin{array} { r l } & \begin{array} { c } \boxed { \underline { { [ \underline { { S } } ] } } \to [ \mathrm { C i n i t i c } \atop \mathrm { C i l i t c } } \to \overbrace { \{ Q ( s , a _ { 0 , 0 } ) \ldots \cdot Q ( s , a _ { k , l } ) \} } ^ { b a t c h \_ s i z e \times 3 7 } \qquad \underbrace { 1 } _ \begin{array} { c } \begin{array} { c } \begin{array} { c } \begin{array} { c } \begin{array} { c } { \begin{array} { c } { \end{array} } } \begin{ c } { \begin{array} { c } { \begin{array} { c } { \end{array} } } { \begin{ c } { \begin{array} { c } { \begin{array} { c } { \end{array} } } { \begin{ c } { \begin{array} { c } { \begin{array} { c } { \end{array} } } { \end{array} } \end{array} } } \\ { \begin{array} { c } { \begin{array} { c } { \begin{array} { c } { \begin{array} { c } \end{array} } { \begin{array} { c } { \begin{array} { c } \end{array} } { \begin{array} { c } \end{array} } { \begin{array} \end{array} } { c } { \begin{array} \end{array} } { c } \\ { \begin{array} \end{array} } { c } { \begin{array} \end{array} } { \begin{ c } { \begin{array} \end{array} } { c } { \begin{array} \end{array} } { c } \\ { \begin{array} \begin{array} { c } { \begin{array} { c } { \begin{ c } } { \end{array} } \end{array} } { \begin{array} \end{array} } { c } \\ { \begin{array} \begin{array} { c } { \begin{array} { c } { \begin{array} { c } } { \end{array} \begin{array} } { \end{array} } \end{array} } \\ { c } { \begin{array} \end{array} \begin{array} { c } { \begin{array} \end{array} } { c } } \\ { \begin{array} { \begin{array} \end{array} } { c } { \begin{array} { c } { \begin{array} \end{array} } { \begin{array} \end{array} } { c } \end{array} } \end{array} } \end{array} } } \end{array} } } } \end{array} } } } \\ & { \begin{array} { c } { \boxed { \mathrm { ~ R e d w e c l } } } \\ { \mathrm { ~ R e i m \underline { { ~ S } p a c e } \to L } } \\ { \mathrm { ~ 1 8 7 0 \times \theta } } \end{array} } \end{array} } \end{array} } \end{array} \end{array} \end{array} \end{array} \end{array} \end{array} \end{array} \end{array} \end{array} \end{array} \end{array}
$$
Figure 2. $\mathrm { ^ Q }$ -value calculation using linear Q decomposition. The critic outputs $\mathrm { Q }$ -values for each action bin, where $Q ( s , a _ { i , j } )$ represents the $\mathrm { ^ Q }$ -value of the $j$ -th bin of the $i$ -th action dimension. The one-hot encoded $\boldsymbol { \mathcal { A } } _ { r }$ masks all but one bin per action dimension before linearly combining them to compute the final Q-value for a specific action combination. The output, $Q ( s , \cdot ) \bar { }$ , has shape batch $\_ s i z e \times \bar { | \mathcal { A } _ { r } | }$ . The argmax operator can be applied along the second dimension to select the best action combination for state $s$ .
# 6 Adapting Offline RL algorithms for Hybrid Action Space (C3)
The default implementations of IQL and EDAC operate in continuous action space. We modified these algorithms to support hybrid actions based on the continuous action CORL [39] implementation, as follows:
IQL. For IQL, the critic function stays the same except that both continuous and one hot discrete actions are input to the network. IQL uses Advantage-Weighted Regression (AWR) [26] for policy optimization. The adapted $\log \pi _ { \phi } ( a | s )$ for AWR is calculated as $\begin{array} { r } { \log \pi _ { \phi } ( a | s ) = \log \pi _ { \phi } ^ { d } ( \bar { a } ^ { d } | s ) + \log \pi _ { \phi } ^ { c } ( { a } ^ { c } | s ) } \end{array}$ and $( a ^ { c } , a ^ { d } ) \sim \mathcal { D }$ .
EDAC. EDAC is a combination of SAC [13] with ensemble and diversity loss. We follow the approach described in [8] for SAC adaptation. However, unlike in the paper, our critic accepts both discrete and continuous actions as inputs rather than outputting Q-values for each discrete action combination. We opted for this design choice as it resulted in reduced variance in critic loss and much stabler training. It is important to note that the diversity loss requires $\Delta _ { a } Q ( s , a )$ , which in the hybrid case becomes $\Delta _ { ( a ^ { c } , a ^ { d } ) } Q ( s , a ^ { c } , a ^ { d } )$ However, since $a ^ { d }$ is one-hot encoded, the derivative effectively depends only on $a ^ { c }$ . While using differentiable encodings for $a ^ { d }$ could address this, our experiments worked without them.
# 7 Experimental Setup
This section details the experimental setup, datasets, metrics, and conditions used to evaluate our approach.
# 7.1 MDP Formulation
State. Relevant ventilation-related variables were identified with domain experts. Variables were selected based on their availability across datasets to maximize patient inclusion. The observable states comprise 26 variables (Table 2, supplementary material A), defined in collaboration with clinicians.
Table 2. The list of variables and their types involved in state space.
Actions. We consider six MV settings listed in Table 3. The MV settings consist of both discrete and continuous parameters. For the discrete actions setup, all the continuous actions are discretized using clinician defined bins given in the supplementary material C. The hybrid action setup uses the categorical and continuous parameters without any action space conversion. VT and $\Delta P$ are conditioned on the Vent Control Mode. Specifically, if the mode is volumecontrolled MV (VCV), then VT is used, while for pressure-controlled MV (PCV), $\Delta P$ is applied. When an action is disabled, the null bin is assigned during training (for discrete action setup). This reduces the total number of unique action combinations, which is useful for reducing action space (see Section 5.1). However, to convert discrete values to continuous ones for calculating $d ^ { \pi }$ (see Eq. (5)) and to make it comparable with hybrid action algorithms, we did not use ventilation mode-conditioned masking, as it only altered the action space size without affecting the final performance.
Rewards. Environment rewards are calculated as defined in Section 4.2.
Table 3. Action variables, their units and possible action space
# 7.2 Study Datasets
For our experiments, we use data from three publicly available clinical databases on PhysioNet [11]: MIMIC IV [16], eICU [28] and HiRID [9]. The datasets include patients from different hospitals across Europe and US, ensuring broad representativeness of patient characteristics and treatment regimes.
Cohort. The cohort includes patients aged 18 years or older who underwent at least 4 hours of MV in the ICU.
Pre-processing. Identical pre-processing steps were applied to each database using individual pipelines to account for datasetspecific characteristics. These steps included data cleaning, filtering, episode construction, and computation & imputation:
Data cleaning consisted of standard cleaning steps such as unit conversion and outlier removal. String values like ventilation mode and sex were encoded numerically
During data filtering, only MV periods meeting the minimum duration requirement of 4h were retained. Patients fully missing any of the required variables were excluded.
Episode building involved defining ventilation episodes and time steps. Episodes were identified using invasive ventilation identifiers or, when unavailable, inferred from ventilation-specific variables. A gap of at least 6 hours between ventilation variables marked the end of one episode and the start of another. This threshold was defined in collaboration with clinical partners to ensure meaningful episode segmentation, avoiding unnecessary splits for short gaps while accounting for potential clinical changes over longer gaps. For each episode, 1-hour time steps were created. When multiple values were available within a time step, a rule-based selection using LOINC codes was applied, prioritizing measurements based on clinical relevance, such as method of collection. If unresolved, the median was chosen for numerical variables, while for categorical variables, the values with the longest duration within the time window was selected.
Computation & Imputation included calculating values for the state vector (e.g. cumulative fluids intake/4h, MAP etc.) and reward (e.g. $\Delta t _ { m v } , \Delta t _ { d e a t h } , \Delta t _ { \prime }$ $\Delta t _ { r e }$ , as well as imputing missing data within an episode using forward propagation).
The resulting state vectors for each database (details see supplementary materials B) were then combined, forming a final state vector containing 12,572 patients and 1,252,505 hours of MV. The dataset was split into $80 \%$ training and $20 \%$ testing. Stratified splitting was performed based on episode length and mortality while ensuring that no patient appeared in both the training and test sets. To ensure comparability, the data splits remained unchanged across all experiments.
# 7.3 RL Training
Discrete Actions. Our discrete actions setup uses Factored Critic CQL (FactoredCQL) with high regularization $( \alpha = 1 0 ) \$ ), $\gamma = 0 . 9 9$ , an MLP with 4 layers of 256 units, gradient clipping at 0.01, and a learning rate of $1 e ^ { - 5 }$ over 400,000 gradient steps, while soft updating target critic with a Polyak coefficient of 0.005.
Hybrid Actions. The hybrid actions setup was trained using IQL (HybridIQL) and EDAC (HybridEDAC). HybridIQL proved robust to hyper-parameter choices, using an actor and critic learning rate of 0.0003, inverse temperature $\beta = 1 0 0$ , expectile $\tau = 0 . 8$ , and an MLP with 4 layers of 256 units. HybridEDAC was sensitive to hyperparameter choices. We used a small gradient diversity term $( \eta = 0 . 1 )$ to avoid critic loss divergence. We also applied automatic entropy adjustment [14] with a target entropy of $\mathcal { H } _ { c } = - 0 . 3$ for continuous actions and $\mathcal { H } _ { d } = 0 . 3$ for discrete actions, and set the learning rate for each loss to $3 e ^ { - 5 }$ .
# 7.4 Evaluation Metrics
Fitted Q-Evaluation. FQE [21] is an Off-Policy Evaluation (OPE) method used to estimate the policy $\pi$ performance using previously collected dataset $\mathcal { D }$ . The FQE method fits a $Q ^ { \pi }$ using $y =$ $r + \gamma Q ( s ^ { \prime } , \pi ( s ^ { \prime } ) )$ as target. The policy performance metric $V ^ { \pi }$ is defined as estimated returns of policy $\pi$ on the initial state distribution:
$$
V ^ { \pi } = \mathbb { E } _ { s _ { 0 } \sim d _ { 0 } } \left[ Q ^ { \pi } \big ( s _ { 0 } , \pi ( s _ { 0 } ) \big ) \right]
$$
Since traditional FQE only captures expected returns, the distributional FQE (DistFQE) is implemented following the Quantile Regression DQN (QR-DQN) approach [7]. The performance of the behavior (or clinicians) policy, $V ^ { \pi _ { b } }$ , is evaluated by replacing the target with $y = r + \gamma Q ( s ^ { \prime } , a ^ { \prime } )$ , where $a ^ { \prime }$ is drawn from $\mathcal { D }$ , and then applying Eq. (4).
Policy Coverage. To quantify the coverage (or mismatch) of policy $\pi$ with respect to the dataset $\mathcal { D }$ , we first train an autoencoder on $\mathcal { D }$ to learn the distribution of $( s , a )$ pairs. The autoencoder is optimized by minimizing the negative log likelihood (NLL) loss: θ∗ = arg minθ∗ $\sum _ { ( s , a ) \in \mathcal { D } } - \log p _ { \theta ^ { * } } ( s , a )$ . Once trained, we evaluate the coverage of $\pi$ by computing the expected log-likelihood for pairs $( s , \pi ( s ) )$ , where $s$ is sampled from $\mathcal { D }$ :
$$
d ^ { \pi } = \mathbb { E } _ { s \sim \mathcal { D } } \left[ \log p _ { \theta ^ { * } } \left( s , \pi ( s ) \right) \right] .
$$
A higher value of $d ^ { \pi }$ indicates that the actions produced by $\pi$ lie within the in-distribution region of $\mathcal { D }$ . Conversely, a lower $d ^ { \pi }$ suggests that the actions are OOD relative to $\mathcal { D }$ .
Policy Selection. The top $10 \%$ of checkpoints by $d ^ { \pi }$ were identified for each algorithm, and among them, the one with the highest $V ^ { \pi }$ was chosen to prevent overestimation while ensuring strong performance. To keep comparison fair between discrete and hybrid action space, the discretized actions in discrete action space were converted to continuous by selecting bin modes before estimating $d ^ { \pi }$ and $V ^ { \pi }$ .
Reward effectiveness. To evaluate how well the learned $\mathrm { \bf Q }$ -values achieve both primary and secondary objectives, we validate them using the Spearman correlation with both safe range rewards and episode length. First, we assess the effectiveness of the two $r _ { v f d }$ implementation options. We then compare these results to the Q-values of a policy trained with a mortality-based reward.
# 8 Results & Discussion
# 8.1 Policy Comparison
Table 4 presents the evaluation results for policies trained using different algorithms. It shows comparisons with the behavior policy using the metrics of $V ^ { \pi }$ (Eq. (4)) and $d ^ { \pi }$ (Eq. (5)). Overall, the trained policies consistently outperform the clinician baseline, as indicated by higher $V ^ { \pi }$ . Notably, HybridEDAC achieves the highest $V ^ { \pi }$ , but it also exhibits the greatest distribution mismatch (lowest $d ^ { \pi }$ ). In contrast, HybridIQL stay close to the behavior policy distribution by avoiding unseen actions during policy improvement. It shows lower $V ^ { \pi }$ and higher $d ^ { \pi }$ , as expected.
Table 4. DistFQE performance estimates $( V ^ { \pi } )$ and policy coverage $( d ^ { \pi } )$ for trained policies relative to the clinician policy.
Previous studies (e.g., [17]) evaluate distribution mismatch by plotting behavior and trained policy distributions. For instance, Fig. 3 shows that tidal volume distributions for clinicians and the trained policy appear similar. However, this approach overlooks that each value is conditioned on the state and other action dimensions. In contrast, our method reveals a much larger disparity in the tidal volume $d ^ { \pi }$ , with values of -11.77 for IQL and -167.77 for EDAC. A full overview of action distributions and $d ^ { \pi }$ are provided in the supplementary material D & E.
Figure 3. Action distribution of normalized tidal volume for trained and dataset policies
Given that HybridIQL achieves a higher $V ^ { \pi }$ than the behavior policy $V ^ { \pi _ { b } }$ and exhibits high policy coverage $( d ^ { \pi } )$ compared to other algorithms, it is more likely to be adopted by clinicians due to its minimal discrepancies relative to clinician policies.
# 8.2 Effect of Rewards Design on $\boldsymbol { Q }$ -values
# 8.2.1 Terminal vs. Each Step $r _ { v f d }$
Applying VFD as a terminal reward has shown high sensitivity to the value of ${ w _ { v f d } }$ . Fig. 4 shows that if ${ w _ { v } } _ { f d }$ is too high, the shortterm rewards become irrelevant, but if a lower $\boldsymbol { w _ { v f d } }$ is chosen, the agent will not attribute the $r _ { v f d }$ to earlier time steps in the episode unless a high value of $\gamma$ is used. This can be problematic in offline RL, as higher $\gamma$ can significantly increase the variance in Q-value estimation, especially when data coverage is poor [15].
Giving the terminal reward at each timestep typically alters dynamics, leading the agent to prefer longer episodes for higher cumulative rewards. However, since $r _ { v f d }$ goes to zero as episode length reaches $\Delta t _ { m a x }$ , this issue is avoided. With option 2, $r _ { v f d }$ is attributed uniformly to each timestep in an episode, allowing a lower $\boldsymbol { w _ { v f d } }$ (to avoid dismissing short-term rewards) to be used without increasing $\gamma$ . Unlike the VFD reward, the mortality reward does not decay to zero as the episode reaches $\Delta t _ { m a x }$ , making it unsuitable for per-step application and hindering balance with short-term rewards.
Figure 4. Correlation between the episode mean Q-value and the episode mean of $r _ { r a n g e }$ across different ${ w _ { v f d } }$ values, when $r _ { v f d }$ is applied at the terminal time step. The Q-function was learned using FQE. For each value of $\boldsymbol { w _ { v f d } }$ , five different policies were trained.
# 8.2.2 VFD vs Mortality
We compare $V F D _ { s t e p }$ with a terminal mortality reward $r _ { m o r t a l i t y } ~ \in ~ \{ - 1 0 0 , 1 0 0 \}$ . Table 5 shows the correlation between mean Q-values and mean safe range rewards per episode. A higher positive correlation with safe range rewards and a higher negative correlation with episode length indicate that higher Qvalues are assigned to episodes with optimal MV treatment (i.e., patients remain within safe ranges, spend less time on MV, and avoid mortality). $\mathsf { V F D } _ { s t e p }$ exhibits an increased correlation with safe range rewards, comparable to the case with no terminal reward applied (see Fig. 4 for $w _ { v f d } = 0 \mathrm { , }$ ). In contrast, the mortality-based reward shows no correlation with the range reward, implying that the safe range component is ignored. This suggests that our reward formulation enables the RL agent’s $\mathrm { ~ Q ~ }$ -values to more effectively capture performance across both medical objectives.
Table 5. Correlation of mean episode Q-values and dimensions of objectives for factored CQL
# 8.3 Impact of Action Discretization $( C 4 )$
To show the impact of discretization, we used HybridIQL policy as reference point since it showed the minimum distribution mismatch (see Table 4). We evaluate our approach by first discretizing the actions. The discretized actions are then converted back into continuous using a reconstruction function $r$ . The reconstructed actions $\textstyle { a _ { r } } $ are used to estimate the coverage $d ^ { \pi }$ (see Eq. (5)). The reconstruction functions $r$ evaluated include the bin mode, a normal distribution centered at the mode, the bin mean, and a uniform distribution representing the range of real-world choices.
Our results (see Table 6) indicate that discretizing actions leads to a distributional shift (evidenced by a lower $d ^ { \pi }$ ). Notably, reconstructing actions using the bin mode shows the minimum distribution mismatch which is on par with HybridEDAC policies in our experiment (see Table 4), whereas naive uniform sampling or using the bin mean produces the lowest $d ^ { \pi }$ (or highest divergence). In practice, the clinicians might select a value from the bin based on their clinical judgment by taking the patient status into account, e.g., the current oxygen saturation or $\mathrm { \ p H }$ . Clinicians are used to selecting a value from a bin. In fact, the current clinical standard for ventilating ARDS patients also suggests a bin for PEEP and FIO2 [24]. However, merely presenting a bin can bias clinicians toward sub-optimal in-bin choices, and any reconstruction error may translate into uncertain patient outcomes.
We attempted to use FQE to evaluate reconstruction methods. However, except for bin mode, FQE either significantly overestimated Q-values or diverged when using reconstructed actions, rendering any meaningful performance comparison not possible. This aligns with previous studies [10] that show value estimates can diverge due to distribution mismatch.
Table 6. Coverage of the policy distribution with respect to the dataset distribution, denoted as $d ^ { \pi }$ . “None” denotes the original HybridIQL policy without any conversion.
# 8.4 Limitations & Future Research
While offline RL methods (CQL, IQL, EDAC) inherently avoid OOD actions, the agent may still propose unsafe actions if such actions are present in the dataset. In future work, we aim to address this risk by collaborating with clinicians to identify patterns of potentially unsafe actions. In addition, we will incorporate current national and international ventilation guidelines [12, 30] to further support safe and clinically relevant recommendations. In addition, evaluations using FQE can overestimate $\mathsf { Q }$ -values under large distribution shifts. This can cause complications for hyperparameter selection requiring a balance between improved objectives (high $V ^ { \pi }$ ) and low distribution mismatch (high $d ^ { \pi }$ ). Furthermore, our study relies on publicly available ICU datasets that suffer from noise, limited resolution, and necessitate some assumptions about patient states and interventions. As part of the IntelliLung project, it is an ongoing collaboration to collect high-quality data from partner hospitals in Germany, Spain, Poland, Italy and the US. Future work includes robust uncertainty quantification to show the prediction confidence and policy explainability. While these aspects can serve as safeguards to support clinical validation, we will further explore the ethical and legal implications of using AI in clinical decision-making through ongoing dialogue with clinicians and regulatory experts. Additionally, while the system is designed as a recommendation system rather than a decision-making tool, it may still bias clinical decisions. Nevertheless, full agency and responsibility remain with the treating physician. Lastly, as all evaluations were conducted retrospectively and offline, any observed benefits should be validated in prospective, observational, and randomized controlled trials. Since the algorithm may qualify as a software medical device, future clinical trials will be designed in accordance with Medical Device Regulation 745/2017, ISO 14155:2020, relevant legal requirements, and the Declaration of Helsinki. | Invasive mechanical ventilation (MV) is a life-sustaining therapy for
critically ill patients in the intensive care unit (ICU). However, optimizing
its settings remains a complex and error-prone process due to patient-specific
variability. While Offline Reinforcement Learning (RL) shows promise for MV
control, current stateof-the-art (SOTA) methods struggle with the hybrid
(continuous and discrete) nature of MV actions. Discretizing the action space
limits available actions due to exponential growth in combinations and
introduces distribution shifts that can compromise safety. In this paper, we
propose optimizations that build upon prior work in action space reduction to
address the challenges of discrete action spaces. We also adapt SOTA offline RL
algorithms (IQL and EDAC) to operate directly on hybrid action spaces, thereby
avoiding the pitfalls of discretization. Additionally, we introduce a
clinically grounded reward function based on ventilator-free days and
physiological targets, which provides a more meaningful optimization objective
compared to traditional sparse mortality-based rewards. Our findings
demonstrate that AI-assisted MV optimization may enhance patient safety and
enable individualized lung support, representing a significant advancement
toward intelligent, data-driven critical care solutions. | [
"cs.LG",
"cs.AI"
] |
# I. INTRODUCTION
The increasing deployment of Artificial Intelligence (AI) systems across critical domains, such as healthcare, finance, public administration, has raised urgent concerns about their ethical, legal, and social implications [1]. In response, the notion of Responsible AI (RAI) has emerged as a multidimensional paradigm that seeks to ensure fairness, transparency, accountability, privacy, safety, and human oversight in the development and use of AI technologies [1].
A key component of RAI is the establishment of effective governance mechanisms, which encompass regulatory frameworks, organizational structures, internal processes, and stakeholder engagement strategies that ensure AI systems are trustworthy and aligned with societal values [2]. In this context, Governance is not limited to compliance with legal norms. However, it includes the design of institutional arrangements that operationalize ethical principles in real-world settings—particularly in technology companies, in which AI systems are developed and deployed at scale [2].
Over the past years, academic interest in AI governance has intensified [3], [4]. Numerous systematic reviews and scoping studies have been published, covering ethical principles, explainability techniques, regulatory developments, and organizational practices. However, the growing number of such reviews, we believe it is time to better understand important key points regarding, for instance, how governance has been conceptualized, which frameworks and practices are emerging, and how stakeholder roles are being addressed in the literature.
This paper presents a rapid tertiary review of the secondary literature on Responsible AI governance to address this gap. We call it rapid because it uses elements of a Rapid Review [5], [6] (more in Section II. By synthesizing findings from systematic reviews, scoping studies, mapping studies, and multivocal literature reviews published between 2020 and 2024, we aim to provide an integrative perspective on RAI’s state of the art. Our review focuses not only on theoretical framings but also on practical mechanisms and relevant recommendations to organizations, particularly those involved in the development and application of AI technologies.
We seek to answer four research questions: (1) What frameworks of AI governance are most frequently cited in the literature? (2) Which principles are emphasized across secondary studies? (3) What organizational structures and governance mechanisms are recommended? and (4) How are stakeholders involved and represented in governance discussions? By addressing these questions, we aim to support both researchers and practitioners in designing governance strategies that are theoretically grounded and practically actionable.
The rapid tertiary review of AI governance literature yielded three primary insights:
Dominance of High-Level Regulatory Frameworks with Operational Gaps: The studies frequently reference established frameworks such as the EU AI Act and NIST RMF. However, there is a notable dearth of actionable governance mechanisms and stakeholder engagement strategies detailed within these secondary reviews. This indicates a significant gap between prescriptive regulatory guidance and concrete, empirically validated implementation practices. • Prevalence of Transparency and Accountability Principles: Transparency and accountability consistently emerge as the most emphasized governance principles across the analyzed literature. These are often discussed alongside other core tenets like fairness, explainability, and privacy. This highlights a shared conceptual foundation for responsible AI.
Call for Empirical Validation and Enhanced Inclusivity: The review underscores a critical need for empirical validation of proposed AI governance practices. Furthermore, it identifies a deficiency in the literature regarding the detailed exploration and effective integration of diverse stakeholder perspectives, particularly those of underrepresented groups. This suggests a requirement for future research to move beyond conceptual discussions to real-world impact assessments and more inclusive governance models.
The remainder of this paper is organized as follows. Section II describes the methodology adopted for this rapid tertiary review, including the search strategy, inclusion criteria, and synthesis approach. Section III presents the main findings, structured around the four research questions. Section IV discusses the implications of our results for industry, society, and future research. Section $\mathrm { \Delta V }$ addresses threats to validity. Section VI concludes the paper with final remarks and directions for future work.
# II. METHODOLOGY
This study adopts a rapid tertiary review methodology, which merges the scope of tertiary evidence synthesis with the time-efficiency of a rapid review [7]. This approach is particularly suitable for emerging research topics, like AI Governance, where timely insights are prioritized over exhaustive coverage. To ensure efficiency, the review was conducted under streamlined conditions: only the first author performed the literature screening and data extraction and the search was limited to two well-established digital libraries—IEEE Xplore and the ACM Digital Library. The review was performed in May 2025.
Then, we belive that tertiary reviews aggregate and analyze existing secondary studies, such as systematic reviews and scoping reviews, while rapid reviews adopt streamlined procedures to deliver timely insights under practical constraints. This hybrid approach is especially useful in dynamic and multidisciplinary domains such as Responsible AI governance.
# A. Research Questions
The aim of this study is to investigate how governance in RAI has been addressed in the secondary literature. Our focus is particularly on frameworks, governance principles, stakeholder engagement, and organizational mechanisms relevant to technology companies. The following research questions were defined to guide the analysis:
RQ1: What are the main AI governance frameworks discussed in secondary reviews? Justification: This question seeks to identify which normative and regulatory frameworks (e.g., EU AI Act, NIST RMF) are most cited in review literature, revealing their influence on research, policy, and organizational adoption of Responsible AI.
• RQ2: Which governance principles (e.g., transparency, accountability, auditability) are most frequently addressed? Justification: By mapping which principles are emphasized—such as fairness, privacy, or explainability—this question reveals ethical priorities and possible gaps or tensions in the governance discourse across different contexts.
RQ3: What organizational structures and internal governance mechanisms are identified as good practices? Justification: Operationalizing AI governance requires institutional arrangements. This question examines the organizational practices (e.g., AI ethics committees, algorithmic audits, documentation protocols) highlighted as effective for implementing Responsible AI.
• RQ4: What is the role of stakeholders (e.g., regulators, developers, users, citizens) in AI governance? Justification: AI governance is a sociotechnical challenge. This question investigates how reviews classify and engage with different stakeholders, including their influence, responsibilities, and representation within governance frameworks.
# B. Search Strategy
The search was conducted in April and May 2025 using two major digital libraries: IEEE Xplore and ACM Digital Library. The following query string was applied to search titles and abstracts:
(“AI governance” OR “AI compliance”) AND (“systematic review” OR “scoping review” OR “literature review” OR “mapping study” OR “metaanalysis” OR “research synthesis”)
Filters were applied to restrict results to English-language, peer-reviewed articles published between 2020 and 2024.
# C. Inclusion and Exclusion Criteria
# Inclusion criteria:
The article is a secondary study (e.g., systematic review, scoping review, mapping study, multivocal review, or meta-analysis).
• It explicitly addresses Responsible AI and governancerelated topics.
• It includes practical governance recommendations, mechanisms, or frameworks relevant to technology companies or organizations implementing AI.
• It is peer-reviewed, written in English, and published between 2020 and 2024.
• The full text is accessible for analysis.
# Exclusion criteria:
• The article is a primary study, opinion piece, or theoretical essay without review methodology. It does not include governance-related content (e.g., discusses only ethical theory without implementation). It lacks practical relevance to organizations or industries using AI. • It was published before 2020, is not peer-reviewed, not written in English, or is not available in full text.
# D. Study Selection
A total of 55 articles were retrieved across IEEE Xplore and ACM Digital Library. After applying the inclusion and exclusion criteria through manual screening of titles, abstracts, and full texts, 9 articles were selected for detailed analysis. These studies met all methodological and thematic requirements of our study, and presented governance solutions, classifications, or organizational mechanisms applicable to real-world Responsible AI adoption.
# E. Data Extraction and Analysis
Data were extracted into a structured spreadsheet that captured bibliographic information, type of review, year of publication, governance themes discussed, and relevance to each research question. A classification was also applied based on six key RAI governance pillars, inspired in recent literature on the topic [1], [8]: fairness, transparency, privacy and security, sustainability, accountability, and explainability.
The complete pillars description, and data (selection and extraction) sheets are available for open science 1.
# F. Synthesis Approach
We used a thematic synthesis approach, following the recommended steps: extract data, code data, translate code into themes, and create a model of higher-order themes [9]. Our analysis combined descriptive mapping with interpretative synthesis. Whenever possible, original excerpts from the studies were preserved and integrated into the discussion to ensure analytical transparency and maintain fidelity to the reviewed literature. Special attention was given to identifying actionable governance mechanisms and practices recommended for implementation in technology companies.
# G. Limitations
This rapid tertiary review prioritizes coverage and insight over exhaustiveness. The scope was limited to two digital libraries and a five-year publication window. The quality and depth of individual secondary studies also varied. However, we mitigated these risks through strict inclusion criteria, manual validation, and methodological triangulation (e.g., thematic and semantic classification).
# III. RESULTS
This section presents the synthesis of findings from the selected secondary reviews, structured around four research questions defined for this tertiary study.
# A. General Results
Table I presents the nine secondary studies included in this rapid tertiary review. All selected articles were published between 2020 and 2024 in peer-reviewed venues, including conferences (e.g., FAccT, ICSR) and indexed journals (e.g., IEEE Access, ACM Computing Surveys).
The studies cover diverse governance-related themes, such as explainability, stakeholder engagement, internal accountability, and metrics for trustworthy AI. The combination of conceptual and practical contributions across the selected articles provides a robust foundation for answering the research questions proposed in this review.
TABLE I INCLUDED STUDIES IN THE REVIEW.
RQ1: What are the main AI governance frameworks discussed in secondary reviews?
The reviewed studies present a variety of frameworks aimed at operationalizing Responsible AI. One prominent example is the Responsible AI Pattern Catalogue, which organizes practices into three categories: multi-level governance patterns, trustworthy process patterns, and responsible-by-design product patterns. These patterns were identified through a multivocal literature review and are intended to support the systemic implementation of Responsible AI throughout the AI lifecycle [16].
Another contribution is the Responsible AI Metrics Catalogue, which focuses specifically on accountability. It proposes a structured set of metrics organized into process, resource, and product categories. These metrics are designed to fill existing gaps in practical guidance for operationalizing AI accountability, particularly in the context of generative AI [17].
A systematic review presents a conceptual Privacy and Security-Aware Framework for Ethical AI, structured around four dimensions: data, technology, people, and process. This framework provides a foundation for the development and evaluation of AI systems with integrated privacy and security concerns [12].
Other studies discuss initiatives and frameworks such as AI Verify (Singapore), the EU’s capAI project, the NIST AI Risk Management Framework, the EU Trustworthy AI Assessment List, the NSW AI Assurance Framework (Australia), and Microsoft’s Responsible AI Impact Assessment Template [16], [17].
International guidelines developed by governmental and intergovernmental bodies, such as the OECD, G7, G20, and national governments (EU, US, UK, Canada, Australia), are also widely referenced. Examples include the EC-HLEG AI guidelines, the Montreal Declaration, and the Beijing AI Principles [10], [16], [17].
Contributions from technology companies such as Microsoft, Google, and IBM have also been cited in the secondary reviews, particularly in relation to their frameworks and assessment templates for Responsible AI [12], [16], [17].
Professional organizations, including the ACM and IEEE, are frequently referenced, especially for their ethical guidelines and design standards for intelligent systems [10], [16].
Additionally, research institutes such as the Alan Turing Institute are recognized for their significant contributions to AI ethics and governance research [10].
# RQ2: Which governance principles (e.g., transparency, accountability, auditability) are most frequently addressed?
The reviews consistently emphasize a core set of ethical principles, often reflecting national and international guidelines. These include human-centered values, social and environmental well-being, fairness, privacy and security, reliability and safety, transparency, explainability, contestability, and accountability [16].
Accountability is a central theme in the Responsible AI Metrics Catalogue. It is defined through three complementary elements: responsibility, auditability, and redressability. These components are essential for transparent and auditable decision-making, building public trust, and complying with emerging regulations [17].
The emphasis on privacy and security is especially notable in the Privacy and Security-Aware Framework, which highlights the need for an integrated approach. The study finds that privacy is widely addressed in the literature, whereas security is less frequently discussed [12].
Explainability and transparency are key principles in reviews dedicated to explainable AI. One such study outlines their importance in addressing the lack of interpretability in AI systems and in meeting regulatory requirements [14].
Across all studies, transparency and privacy are the most frequently cited principles, followed by fairness, accountability, explainability, autonomy, responsibility, and safety [16], [17].
# RQ3: What organizational structures and internal governance mechanisms are identified as good practices?
The reviews identify a variety of internal governance mechanisms considered effective for supporting Responsible AI. The Responsible AI Pattern Catalogue emphasizes the importance of multi-level governance structures involving the industry, the organization, and technical teams [16].
The establishment of AI governance committees is described as an effective practice. These committees should include professionals from diverse areas and strategic leadership to ensure ethical oversight across the entire lifecycle of AI systems [16], [17].
The literature also suggests establishing formal processes for ethical oversight, compliance verification, and incident response. It recommends competence assessments tailored to different roles in the organization, aligned with industry standards [17].
Responsible AI maturity models are presented as useful tools to evaluate and improve organizational capabilities in AI governance. Certification mechanisms are also proposed to demonstrate compliance with ethical standards [16].
Standardized reporting is highlighted as a necessary practice for transparency in communication with stakeholders. This includes disclosing when AI is being used and explaining its purpose and design [16], [17].
Some studies emphasize internal changes in organizations, such as the creation of AI ethics teams, adoption of internal governance guidelines, and the training of developers and engineers in ethics and human rights [12], [16].
# RQ4: How do secondary reviews discuss the role of stakeholders (e.g., regulators, developers, users, citizens) in AI governance?
The stakeholders role in Responsible AI governance is discussed extensively across the reviews. The Responsible AI Pattern Catalogue categorizes stakeholders into three levels: industry, organizational, and team. At the industry level, policymakers and regulators act as enablers, while technology producers and procurers are key affected parties. At the organizational level, managers are responsible for governance structures, influencing employees, users, and individuals impacted by AI. Development teams are directly involved in implementing Responsible AI in practice and product design [16].
Several frameworks highlight the collaborative nature of Responsible AI, requiring engagement from policymakers, developers, end-users, and civil society. The Privacy and SecurityAware Framework, for example, is designed to support public institutions, private companies, and academia in addressing shared concerns [12].
In the context of explainable AI, different stakeholder groups, such as AI experts, decision-makers, regulators, and users, are considered. One study identifies nine distinct stakeholder types with varying needs for explanation [14].
Other reviews aim to support AI system builders by helping them select appropriate governance guidelines. These builders include technical professionals but also interact with business executives, legal advisors, and policymakers [17].
The literature also acknowledges that AI governance involves addressing the needs of underrepresented and vulnerable populations, and that trade-offs must be managed across individual, organizational, and systemic levels [16], [17].
# IV. DISCUSSION
The results of this tertiary review indicate a rapidly evolving landscape in the field of AI governance, shaped by both international regulatory initiatives and industry-driven practices. While several frameworks—such as the EU AI Act, the NIST AI RMF, and various pattern catalogs—are widely referenced, the literature shows fragmentation in how governance is conceptualized and applied. The distinction between abstract principles and their practical operationalization remains a recurring challenge.
Notably, although many reviews emphasize the importance of principles like transparency, accountability, and explainability, few go beyond descriptive mappings to provide critical analysis of how these principles interact or conflict in organizational settings. Furthermore, while stakeholder involvement is frequently acknowledged, the literature lacks depth in evaluating the effectiveness of participatory governance approaches, especially in contexts involving marginalized or underrepresented groups.
Importantly, only a subset of the reviewed studies provided concrete, organization-level governance mechanisms such as audit procedures, ethics committees, or risk management practices tailored to technology companies. This indicates a persistent gap between ethical aspirations and the availability of actionable strategies to guide the real-world implementation of Responsible AI.
We contextualized our findings on three different perspectives that are detailed below: implications for industry, society and research.
# A. Implications for Industry
This review provides technology companies and AI product teams with a consolidated overview of governance mechanisms, frameworks, and practices that have been discussed and recommended in the literature. By identifying the most prominent principles (such as transparency, accountability, and fairness) and their associated implementation strategies (such as algorithmic audits and ethics committees), the review offers practical insights that can inform the development of internal Responsible AI policies. Furthermore, the categorization of stakeholder roles and organizational practices may assist in aligning cross-functional responsibilities (e.g., legal, engineering, ethics) within AI initiatives.
# B. Implications for Society
The findings underscore the importance of governance structures that not only ensure legal compliance but also proactively mitigate societal risks, such as bias, discrimination, opacity, and exclusion in AI systems. The emphasis on stakeholder involvement highlights the need for more inclusive governance strategies that recognize and respond to the concerns of historically underrepresented or vulnerable populations. For policymakers and civil society organizations, this review serves as a resource to understand how institutional and technical dimensions of AI governance are being addressed in academic and industry debates.
# C. Implications for Research
This tertiary review synthesizes fragmented knowledge across multiple reviews and discloses emerging patterns, redundancies, and gaps. It contributes to Responsible AI research by mapping the literature not only by principle or framework, but also by applicability to real-world governance contexts. The study highlights the need for more empirical validation of governance practices, greater attention to organizational dynamics in AI ethics, and methodological consistency across review studies. Future research should explore interdisciplinary approaches and develop metrics to evaluate the effectiveness and fairness of governance structures in operational environments.
# V. THREATS TO VALIDITY
Several limitations may affect the validity of this rapid tertiary review. Following established guidelines for systematic and rapid reviews, we outline potential threats according to four dimensions: construct, internal, external, and descriptive validity.
Construct validity refers to the adequacy of the concepts captured by our research questions and selection criteria. Although we focused on governance in Responsible AI, the term “governance” is used inconsistently across the literature.
Internal validity concerns the process of study selection and data extraction. Despite applying clear inclusion and exclusion criteria, the classification of studies into governance principles and practices involved human judgment. To reduce bias, we performed manual validation. Still, some interpretations may reflect subjective alignment.
External validity relates to the generalizability of the findings. Our review was limited to publications from IEEE and ACM between 2020 and 2024. Although these databases are highly reputable in computing and AI, they may not fully represent the legal, sociopolitical, or multidisciplinary dimensions of AI governance found in other domains (e.g., public policy, law, or HCI). Therefore, the findings should be interpreted as reflective of the technical computing community’s perspective.
Descriptive validity pertains to the accuracy and completeness of reporting. We preserved excerpts from the original reviews to support transparency and traceability. However, secondary reviews sometimes lack clarity or depth in their own reporting, which may have affected our ability to fully extract or categorize content. | Artificial Intelligence (AI) governance is the practice of establishing
frameworks, policies, and procedures to ensure the responsible, ethical, and
safe development and deployment of AI systems. Although AI governance is a core
pillar of Responsible AI, current literature still lacks synthesis across such
governance frameworks and practices. Objective: To identify which frameworks,
principles, mechanisms, and stakeholder roles are emphasized in secondary
literature on AI governance. Method: We conducted a rapid tertiary review of
nine peer-reviewed secondary studies from IEEE and ACM (20202024), using
structured inclusion criteria and thematic semantic synthesis. Results: The
most cited frameworks include the EU AI Act and NIST RMF; transparency and
accountability are the most common principles. Few reviews detail actionable
governance mechanisms or stakeholder strategies. Conclusion: The review
consolidates key directions in AI governance and highlights gaps in empirical
validation and inclusivity. Findings inform both academic inquiry and practical
adoption in organizations. | [
"cs.SE",
"cs.AI"
] |
# I. INTRODUCTION
Interacting with 3D scenes using open-vocabulary perception is a key challenge facing current AI-driven agents [1], [2]. Accurately querying semantic objects and their relationships through complex free-form queries in intricate 3D environments remains an unresolved issue [3].
Recent works [4]–[6] tackling 3D scene understanding tasks frequently rely on CLIP [7] to align textual queries with scene semantics, which heavily depends on large-scale pretrained datasets. Meanwhile, some methods leverage large language models (LLMs) [6], [8], [9] to facilitate flexible semantic interactions, which are crucial for handling complex queries in 3D scene understanding. We aim to develop a training-free framework that can acquire semantically aligned 3D features to support accurate free-form querying in 3D scenes. However, existing methods still encounter significant limitations: 1) Limited predefined vocabulary priors from training datasets hinder free-form semantic querying. Most 3D scene understanding models [10]–[12] depend on largescale training data and use CLIP to encode queries and scenes, inherently constraining them to a fixed set of predefined categories. This limitation hinders their capacity for free-form semantic querying and relational reasoning, as illustrated in Fig. 1 (a). 2) Inconsistency between 3D instance features and semantic labels. Recent methods [6], [9], [13], [14] rely solely on LLMs and LVLMs to generate semantic labels for 3D instance features, yet neglect the lack of 3D scene information. This often leads to inconsistent or incorrect outputs, where objects and relations misalign with true 3D semantics, resulting in unreliable reasoning, as shown in Fig. 1(b). 3) Lack of scene spatial relation reasoning. Current methods [15], [16] predominantly focus on object-level segmentation and retrieval, while disregarding spatial relationships within complex scenes. This oversight substantially constrains their capability to handle semantic relationship queries.
Fig. 1. We introduce FreeQ-Graph, a 3D scene understanding work for freeform complex semantic querying with a semantic consistent scene graph. (a) The open-vocabulary methods depend on pre-traineRd rdieavtal Oabnjedc predefined objects to align text with 3D features, limiting tthaebilr lsaumpport for free-form queries. (b) Some mtaetblheods overly depend on LLMs and LVfaLrMs for reasoning, yet their lack of 3D scene awareness often yields object lists misaligned with actual 3D semantpiiclls,owleading to ilnaamcpcurate rpeilalsownipnilgl.o(wc) We propose a training-free framework that leverages LVLMs and LLMs to build a complete e-3fDo mspatial sce3nDe egrmapnthi fCor ifsrtaenet-fSocernme Gqraupehrying without predefined priors. ySuwpitehropuotint merging ensures the alignFimnedn:t of 3D node features with correct rs:emantic labels, enabling accuratteabalned colansmipstent 3D scene understanding.
In our paper, we propose FreeQ-Graph, a training-free framework that enables free-form semantic querying with a semantic consistent scene graph for 3D scene understanding. Our key innovation lies in a training-free, free-form querying framework that constructs a scene graph with accurate nodes and relations, aligns 3D instances with correct semantics through superpoint merging, and integrates LLM-based reasoning for spatial queries, setting our approach apart. 1) We construct a complete and accurate 3D scene graph using LVLMs and LLMs to map free-form instances and their relationships, without relying on any training priors. Unlike ConceptGraph [13], which depends on 2D models and often misses or duplicates objects, our approach ensures accurate scene representation through mutual correction between agents and the grounded model. 2) We align free-form nodes with consistent semantic labels to obtain 3D semantically consistent representations. This is achieved by generating superpoints and performing structural clustering to extract 3D instance features and their semantic labels, thereby aligning each 3D point with its corresponding semantics. In contrast, others [13] struggle with a consistent semantic representation. 3) We develop an LLM-based reasoning algorithm that breaks complex queries into CoT-reasoning by combining scene and object-level information for free-form querying. In contrast, other [13] single strategy lacks scene context, limiting its query capabilities. We conduct thorough experiments on six datasets, covering 3D semantic grounding, segmentation, and complex querying tasks, while also validating the accuracy of scene graph generation. The results demonstrate that our model excels in handling complex semantic queries and relational reasoning. Our contributions are summarized as follows:
We propose FreeQ-Graph, a training-free free-form querying framework with the semantic consistent scene graph and LLM-based reasoning algorithm for 3D scene understanding.
We propose a 3D semantic alignment method that aligns 3D graph nodes with consistent semantic labels, enabling the extraction of free-form 3D semantic-aligned features.
We introduce an LLM-based CoT-reasoning algorithm that combines scene-level and object-level information for scene spatial reasoning.
• Extensive experiments on 6 datasets demonstrate that our method excels in querying complex free-form semantics and relation reasoning perceptions.
# II. RELATED WORK
1) Open-Vocabulary 3D Scene Understanding: Natural language querying in complex 3D scenes demands deep understanding of free-form semantics and relationships. Many prior works [10], [11], [17]–[21] rely on joint training with large-scale pretrained data to align 3D scenes and query embeddings, but their dependence on predefined vocabularies limits true free-form querying. Recent advanced methods [6], [13], [14], [22], [23] leverage large language models (LLMs) for flexible semantic reasoning, yet they overly depend on LLMs and LVLMs to generate semantic labels for 3D features without sufficient 3D scene awareness. This often results in inconsistent or inaccurate outputs where objects and relations misalign with actual 3D semantics. Additionally, some
LLM-based methods [3], [19], [24], [25] fine-tune on taskspecific datasets, improving performance on those tasks but still restricting free-form queries and requiring substantial training resources. Our work utilizes a training-free, freeform querying framework that constructs a scene graph with accurate nodes and relations, aligns 3D instances with correct semantics for scene understanding.
2) 3D Scene Graphs: The 3D Scene Graph (3DSG) represents scene semantics compactly, with nodes for objects and edges for their relationships [26]–[28]. Recent methods [29], [30] use 3DSG for 3D scene representations. These works [26], [31], [32], such as VL-SAT [31], construct scene graphs to model scene representations, but are constrained by the closed vocabularies from their training data, limiting their ability to support free-form semantic queries. More recent approaches like ConceptGraph [13] and BBQ [14] leverage LLMs to generate nodes and edges in scene graphs. However, their heavy reliance on LLM-generated outputs without incorporating 3D scene context often leads to inconsistent scene representations misaligned with actual 3D semantics. Our approach constructs a semantically consistent 3D scene graph by first obtaining complete and accurate free-form nodes, then aligning them with correct semantic labels.
# III. METHOD
In this section, we propose FreeQ-Graph, a framework that enables free-form querying with (A) a 3D spatial scene graph with complete nodes and relations to support free-form query, (B) the semantic alignment module to align nodes with the consistent semantic label, and (C) a LLM-based CoTreasoning for scene spatial querying, as shown in Fig. 2.
# A. Problem Formulation
Given each 3D scene $\mathbf { P }$ with multi-view posed RGB observations $\textbf { I } = \ \{ I _ { i } \} _ { i = 1 , \dots , M }$ as input, where $M$ is the total number of images. The objective of free-form querying via 3D scene graph is to depict a semantic 3D scene graph $\mathbf { G } = ( \mathbf { V } , \mathbf { E } )$ as the 3D scene representation, where $\mathbf { V } = \{ \mathbf { v } _ { j } \} _ { j = 1 , \dots , J }$ denote the set of 3D objects and edges $\mathbf { E } = \{ \mathbf { e } _ { k } \} _ { k = 1 , \ldots , K }$ represents the relation between them. G constitutes a structured representation of the semantic content of the 3D scene. Based on this semantic representation of the 3D scene $\mathbf { G }$ , during the reasoning phase, it interacts and queries with the query $q$ and finally outputs a final target $\mathbf { v }$ . Nodes. For each object $\mathbf { v } _ { i } \in \mathbf { V }$ , we characterize it as $\mathbf { v } _ { i } = $ $\{ { \bf p } _ { i } , { \bf f } _ { i } , { \bf c } _ { i } , { \bf b } _ { i } , n _ { i } \}$ , where $\mathbf { p } _ { i } = \{ \mathbf { x } _ { j } \} _ { j = 1 } ^ { N _ { i } }$ is the pointcloud that contains $N _ { i }$ points $\mathbf { x } _ { j }$ , $\mathbf { f } _ { i }$ is the semantic feature, $\mathbf { c } _ { i }$ is the node caption, $\mathbf { b } _ { i }$ is the 3D bounding box, $n _ { i }$ is the id of the node. We denote the set of all object categories as $\nu$ . Edges. For each pair of nodes $\mathbf { v } _ { i } , \mathbf { v } _ { j }$ , we denote the edge $\mathbf { e } _ { i j } = \{ \mathbf { r } _ { i j } , \mathbf { d } _ { i j } \}$ , where $\mathbf { r } _ { i j } \in \mathbf { E }$ is the relation label that provides the underlying rationale, ${ \bf { d } } _ { i j }$ is the Euclidean distance between centers of bounding boxes for $\mathbf { v } _ { i }$ and $\mathbf { v } _ { j }$ .
The construction of the object set $\mathbf { V }$ and edge set $\mathbf { E }$ is in Sec. III-B. For each object $\mathbf { v } _ { i }$ , we define the object-level information $\mathbf { s } _ { o _ { i } }$ as ${ \bf s } _ { o _ { i } } = \{ { \bf c } _ { i } , { \bf b } _ { i } , n _ { i } \}$ . For better reasoning, we define scene-level information $\mathbf { s } _ { c }$ which represents the scene captions. The detailed reasoning algorithm is in Sec. III-D.
LVLM Object Ǿ 3D Semantic Aligned Features 3D Nodes Categories Free-form Information
(A) “listtheobjectintheimage” Grounded vocabulary desk 1 四 sofa 2MDodSelgment 山 Node Node Ǿ LLM 2D Masks {M i}iM1 chair Categories Multi-view posed images Nodesgeneration Superpoint Similarity Semantic-aligned For Each Map Object Merging Matrix A Point Representations 3D Point Cloud Q QiiJ1 LVLM Superpoints SpectralClustering 3D Semantic Instance Labels Captiongeneration Edgesgeneration
(B) Generation (Graphcuts) . Nodes Ǿ Captions ǫ Edges ǭ LLM-based CoT-Reasoning 3D Scene Graph with Semantic Aligned Features Stage 2: Target and Relation Reasoning Stage 1: Scene and Object Analysis G=(V, E) 3Dobject-level: ȕ Target: Ș 日 RyellaotewdpiOllbojewcts: grey pillow 3D scene-level caption: the lpailmlopwisownhtihte saonfda..n.e..ar .. chair
(C) ? lIanmthpisanscdetnhe,rtehaerseotfwaoispnilelaorwtsh..e.. Rel“anteiaorn”s: ȇ LLM 1 LLM desk Complex qurey with relation: Find the pillow near the table lamp.
# B. 3D Scene Graph with Complete Nodes and Edges.
To facilitate the mapping of free-form objects and the capture of relations in 3D scenes, the crux lies in acquiring complete nodes, encompassing all small and thin objects, along with their complete captions, and edges that include detailed relations reflecting complex semantic connections. To achieve this, we construct the 3D scene graph G through three primary steps: 1) Complete and accurate nodes generation without any training priors. It is a free-form method without relying on predefined vocabulary. 2) 3D semantic consistent feature generation. 3) Edges and captions generation.
Complete 3D scene nodes generation without priors. To obtain objects with semantic labels without predefined vocabulary, we first adopt a large vision-language model (LVLM) [33] to obtain the object set, then use a 2D instance segmentation model [34] to correct potential hallucinations, forming the set $\nu$ of object categories, which can be denoted as:
$$
\mathcal { V } , \{ \mathcal { M } _ { i } \} _ { i = 1 } ^ { M } = \bigcup _ { i = 1 } ^ { M } \phi ( L V L M ( I _ { i } ) )
$$
where $\phi$ is the 2D segment model, $\mathcal { M } _ { i }$ is the mask set of image $I _ { i }$ . Specifically, for each 2D image view $I _ { i }$ , we prompt the LVLM model like “please list all the central objects in the scene, focus on smaller or overlooked objects, and visual attributes, omitting background details.”. We use specific prompts to focus on smaller or overlooked objects. We then parse the response and obtain the initial object list for each image. To reduce potential hallucinations by the visual agent, we subsequently employ a 2D instance segmentation model [34] to ground all initial objects, identifying the final grounded object lists and obtaining the corresponding 2D object mask set $\{ \mathcal { M } _ { i } \} _ { i = 1 } ^ { M }$ , representing the candidate objects on the posed image $I _ { i }$ . We then construct the objects set $\nu$ by retaining categories from the combined set.
3D semantic consistent feature generation. We extract visual descriptors (CLIP [7]) from 2D masks. Additionally, we generate 3D instance semantic labels via superpoint clustering and align nodes with point-level semantic representations (see Sec. III-C for details). The final nodes $\mathbf { v } _ { i }$ consist of point cloud $\mathbf { p } _ { i }$ , unit-normalized feature $\hat { \mathbf { f } } _ { i }$ , and 3D box $\mathbf { b } _ { i }$ .
3D nodes caption generation. For each posed image, building on ConceptGraph [13], we generate node captions via LVLM+LLM: (1) prompt LVLM with “describe the central object” at top- $\cdot n$ clean viewpoints for initial descriptions; (2) distill coherent captions $\mathbf { c } _ { i }$ via LLM refinement.
3D scene edges generation. Building upon 3D nodes and captions, we establish spatial edges through the 3D information analyzed by LLM. For each pair of nodes $\mathbf { v } _ { i } , \mathbf { v } _ { j }$ , we compute pairwise similarity matrices via 3D bounding box IoU, then prune edges using Minimum Spanning Tree optimization. Next, we query LLMs with node captions/coordinates (e.g., “What is the relationship between 1 and 2?”) to extract spatial relations. We also calculate the Euclidean distance ${ \bf d } _ { i j }$ between box centers. Thus we can generate the edge $\mathbf { e } _ { i j } = \{ \mathbf { r } _ { i j } , \mathbf { d } _ { i j } \}$ .
# C. 3D Scene Graph with Semantic Aligned Features
LLM-generated 3D scene graphs often suffer from potential inconsistencies with actual 3D semantics due to the lack of 3D scene information. This deficiency can lead to reasoning errors, such as misalignment between the semantic labels of 3D instance features and the nodes in the scene graph. To address this, we generate 3D semantic labels for instance features, ensuring precise alignment between each point and its corresponding semantics, thereby rectifying node-semantic misalignment. This alignment occurs in two key steps: (1) 3D semantic instance labels generation. We apply graph cuts to segment the scene into superpoints, each superpoint represents a semantic label (e.g., “desk” or “chair”). Then we generate semantic labels through structural clustering for 3D instance
Spectral Clustering 3D semantic 3D Point Cloud Q ={Qi}Ǎƽ=1 Laplacian 3D Semantic Instance aligned features SGurapperhpcouitnsts ij matrxixHL Sofa desk sofa Generation Superpoint Mergi cha fi D Multi-view Node V posed images Categories Encoder 2D Masks {M i}
features and align the graph nodes with consistent semantic labels. (2) 3D semantic-aligned feature representation. We integrate visual features with superpoint-based semantic label features to produce the final semantic-aligned features.
3D semantic instance label generation. We aim to generate 3D semantic labels of nodes from superpoint merging. Specifically, as shown in Fig. 3, inspired by PoLo [35], we segment the 3D point cloud $\mathbf { P }$ into $S$ superpoints ${ \mathcal { Q } } = \left\{ Q _ { i } \right\} _ { i = 1 } ^ { S }$ using graph cuts, where each $Q _ { i }$ is a binary mask label of points. To merge superpoints into 3D instances, we construct a similarity matrix $A$ , where each element $A _ { i j }$ represents the similarity between superpoints $Q _ { i }$ and $Q _ { j }$ :
$$
\begin{array} { r } { A _ { i j } = \left( \sum _ { m = 1 } ^ { M } g ( O _ { i , m } , \tau _ { \mathrm { i o u } } ) \cdot g ( O _ { j , m } , \tau _ { \mathrm { i o u } } ) \right) \cdot \frac { f _ { Q _ { i } } ^ { \top } f _ { Q _ { j } } } { \| f _ { Q _ { i } } \| \| f _ { Q _ { j } } \| } } \end{array}
$$
where $O _ { i , m }$ and $O _ { j , m }$ is the 2D mask projection of superpoint $Q _ { i }$ and $Q _ { j }$ in the $m$ -th image. $g ( O , \tau _ { \mathrm { i o u } } )$ is 1 if the IoU of mask $O$ exceeds threshold $\tau _ { \mathrm { i o u } }$ , and 0 otherwise. $f _ { Q _ { i } }$ and $f _ { Q _ { j } }$ are the semantic representations of $Q _ { i }$ and $Q _ { j }$ , obtained by encoding their label into feature vectors using a text encoder. We then perform spectral clustering by the Laplacian matrix $L$ and segmenting superpoints via its eigenvectors. The optimal clustering dimension $H$ is set using the eigengap heuristic, selecting $H$ with the largest eigenvalue gap to determine the final number of superpoint semantic labels:
$$
\begin{array} { r } { L = D ^ { - 1 / 2 } ( D - A ) D ^ { - 1 / 2 } , \quad H = \arg \operatorname* { m a x } _ { 1 \leq j \leq J - 2 } ( \lambda _ { j + 1 } - \lambda _ { j } ) } \end{array}
$$
where $D$ is the degree matrix, with $\begin{array} { r } { D _ { i i } = \sum _ { j } A _ { i j } , \lambda _ { j } } \end{array}$ are the eigenvalues of $L$ , and the maximum gap corresponds to the optimal number of clusters.
3D semantic aligned features. After obtaining 3D semantic feature labels, we aim to align scene graph nodes with pointlevel semantic representations for 3D semantic aligned features. Unlike prior works [13], [14] that rely solely on a visual encoder for per-point representation, we employ both a vision encoder and a text encoder to ensure semantic consistency between nodes and point representations. Specifically, for each the node $\mathbf { v } _ { i }$ and its assigned semantic superpoint label $Q _ { i }$ , we first extract its visual feature $\mathbf { f } _ { i }$ using CLIP. Additionally, we encode the superpoint’s semantic label $Q _ { i }$ through a encoder to obtain its semantic feature $\mathbf { f } _ { Q _ { i } }$ , and fuse it with the visual feature via meanpooling $\varphi$ to obtain the final semantically aligned representation $\hat { \mathbf { f } _ { i } }$ .
$$
\hat { \mathbf { f } _ { i } } = \varphi ( \mathbf { f } _ { Q _ { i } } , \mathbf { f } _ { i } )
$$
where $\varphi$ is the mean pooling. The $\hat { \mathbf { f } } _ { i }$ is used to align semantic labels (e.g., “desk”) to nodes in the graph. Errors in LLMs and LVLMs may misassign labels of 3D instance features, leading to incorrect results in JSON-based reasoning.
# D. LLM-based CoT-Reasoning
irWe designed the reasoning algorithm that breaks complex queries into CoT-reasoning, combining scene-level $\mathbf { s } _ { c }$ and object-level information $\mathbf { s } _ { o }$ (defined in Sec. III-A) for freeform semantic querying. Note that the CoT-reasoning are not separate. In the stage 1, we generate candidate objects, which are refined by further analysis in the next stage.
Stage 1: Scene and Object Analysis. As shown in Fig. 2 (C), to obtain the candidate targets $\mathbf { v } _ { r }$ and relations $\scriptstyle \mathbf { e } _ { r }$ , we input the user’s complex query $q$ alongside both object-level ${ \bf s } _ { o }$ and scene-level information $\mathbf { s } _ { c }$ which defined in Sec. III-A into the LLM, which can be denoted as:
$$
n _ { r } , \mathbf { e } _ { r } = L L M ( q , \mathbf { s } _ { o } , \mathbf { s } _ { \mathrm { c } } )
$$
where $n _ { r }$ is IDs of candidate targets $\mathbf { v } _ { r }$ , scene-level information $\mathbf { s } _ { c }$ represents scene captions. This stage serves two purposes: 1) Leveraging the LLM’s planning ability to summarize observations of the entire scene, then decompose the complex semantic query into target and relational queries. 2) Cooperating object-level information with scene-level details, we aim to capture spatial relationships like “near” in Fig. 2 (C), without overlooking smaller or less prominent objects.
The LLM agent decomposes the user’s query into object and relation queries, identifying candidate ids of related targets. The “object query” refers to the primary candidate objects cited in the semantic query. The “relation query” identifies the relations of the candidate objects with the target, also providing the Euclidean distance of pairs.
Stage 2: Target and Relation Reasoning. We further leverage the LLM for spatial reasoning based on the candidate objects $\mathbf { v } _ { r }$ , relation $\mathbf { e } ^ { r }$ , and the query $q$ , then generate the final target $\mathbf { v } ^ { t }$ . The reasoning stage can be denoted as:
$$
\mathbf { v } ^ { t } = L L M ( q , \mathbf { v } _ { r } , \mathbf { e } ^ { r } )
$$
With the candidate IDs, we input the corresponding objects captions, relations, 3D information, and the Euclidean distance from each candidate object to the centroid, along with the query $q$ , into the LLM to infer the final target object.
# IV. EXPERIMENT
# A. Datasets and Implementation Details
1) Datasets: We evaluated on Sr3D [36] and Nr3D [36], and ScanRefer [37] for visual grounding, and scene segmentation task on Replica [38] and ScanNet [39] RGB-D. We validate the accuracy of scene graph on the 3DSSG dataset [28]. Sr3D [36] dataset includes annotations based on spatial relationships between objects, while Nr3D [36] consists of human-labeled language object references. We selected a subset of 526 objects from Sr3D and filtered queries in $\mathrm { N r } 3 \mathrm { D }$ that only involved spatial relations between objects. For 8 corresponding ScanNet scenes, we conducted relational queries in the format (target, relation, anchor). We evaluate on Nr3D and $\mathrm { S r } 3 \mathrm { D }$ ’s standard splits using only the val set.
ScanRefer [37] comprises 51,583 descriptions for 11,046 objects across 800 ScanNet [39] scenes. Following the benchmark, the dataset is split into train/val/test sets with 36,655, 9,508, and 5,410 samples, using the val set for evaluation.
TABLE I COMPARISONS OF 3D VISUAL GROUNDING ON SCANREFER [37] DATASET. THE ACCURACY AT 0.25 AND 0.5 IOU THRESHOLDS IS PRESENTED SEPARATELY FOR “UNIQUE,” “MULTIPLE,” AND “OVERALL” CATEGORIES.
TABLE II COMPARISONS OF 3D VISUAL GROUNDING ON SR3D [36] AND NR3D [36]. WE EVALUATE THE TOP-1 ACCURACY USING GROUND-TRUTH BOXES. “SUPER”: SUPERVISION METHOD.
Replica [38] is a dataset of 18 realistic 3D indoor scene reconstructions, covering rooms to buildings. We selected 8 scene data samples (room0, room1, room2, office0, office1, office2, office3, office4) with their annotations.
ScanNet [39] is an instance-level indoor RGB-D dataset containing both 2D and 3D data. We selected 8 scene samples, which are 0011, 0030, 0046, 0086, 0222, 0378, 0389, 0435.
3DSSG dataset [28] offers annotated 3D semantic scene graphs. Adopting the RIO27 annotation, we evaluate 27 classes and 16 relationship classes and adhere to the experimental protocol of EdgeGCN [58] for the fair comparison, dividing the dataset into train/val/test sets with 1084/113/113 scenes. All the camera viewpoints follow the original dataset settings.
2) Performance Metric: For visual grounding on the Sr3D [36] and $\mathrm { N r } 3 \mathrm { D }$ [36], we follow the ReferIt3D [36] protocol by using ground-truth object masks and measuring grounding accuracy, whether the model correctly identifies the target object among the ground-truth proposals. Additionally, to ensure a fair comparison with related works [13], [14], [32], we also report $\operatorname { A c c } @ 0 . 1$ IoU and $\operatorname { A c c } @ 0 . 2 5$ IoU for the “easy”, “hard”, “view-dep.” and “view-indep.” cases. For ScanRefer [37], we calculate the metrics including $\operatorname { A c c } @ 0 . 2 5$ IoU and $\operatorname { A c c } @ 0 . 5$ IoU, reported for both unique, multiple, and overall categories. “Unique” which refers to scenes containing only one target object, “Multiple” which includes scenes with distractor objects from the same class, and “Overall” which represents the aggregated results across all scene categories. For Replica [38] and ScanNet [39], We compute the metrics of mAcc, mIoU, and fmIoU. For 3DSSG [28], we adopt the widely used top-k recall metric $( \operatorname { R } @ \operatorname { k } )$ for scene graph evaluation, assessing objects, predicates, and relationships separately. For assessment, as shown in Table V, Recall $\textcircled { a } 5$ and Recall $@ 1 0$ are used for object classification, Recall $\textcircled { a } 3$ and Recall $\textcircled { a } 5$ for predicate classification, and Recall $@ 5 0$ and Recall $@ 1 0 0$ for relationship classification. For out-of-word queries, we validate results using manually annotated ground truth, which will be publicly available. For 5 tested datasets, we all follow the original queries and annotations.
3) Implementation Details: We conduct experiments on the NVIDIA 3090 GPU using PyTorch. We adopt GPT-4o [8] as the LLM, LLaVa-7B-v1.6 [33] as the LVLM. For 2D objects and encoding, we use Grounded-SAM [34] for 2d mask segmentation and employ the CLIP ViT-L/14 encoder [59] as the visual feature extractor. We select the top-5 view masks with the highest projected point IoU for each superpoint. Following ConceptGraph [13], for each object, we select relevant image crops from the Top-10 best views and pass them to LLM to generate captions. For superpoint merging, we employ consistent thresholds of $\tau _ { i o u } = 0 . 9$ and $\tau _ { s i m } =$ 0.9 across all experiments. For each superpoint, we select the top-5 view masks with the highest IoU relative to the projected points. Following ConceptGraph [13], we set the voxel size and nearest neighbor threshold to $2 . 5 ~ \mathrm { c m }$ , and use an association threshold of 1.1.
Fig. 4. Comparison of 3D object visual grounding task with free-form query. The ground truth box is in green.
TABLE III COMPARISONS OF 3D VISUAL GROUNDING ON SR3D [36] AND NR3D [36] DATASETS. THE ACCURACY (A) AT 0.1 AND 0.25 IOU THRESHOLDS IS PRESENTED SEPARATELY FOR 5 CATEGORIES.
# B. Experiment Results
1) 3D Object Grounding: We conducted 3D visual grounding comparisons on the Nr3D [36], Sr3D [36], and ScanRefer [37] datasets. As shown in Table I, we conducted comprehensive experimental comparisons on the ScanRefer [37] benchmark, evaluating a wide range of models across different learning paradigms. These include state-of-the-art fully supervised approaches [6], [10]–[12], [17], [40], [41], [60], weakly supervised methods [55], fine-tuned models [3], [19], [24], [25] refer to methods that are adapted to specific tasks after fine-tuning, and zero-shot approaches [3], [6], [13], [14], [23], [32], [48] refer to methods that directly use LLMs without fine-tuning. Our model, without requiring any training, achieved best results, fully demonstrating its superiority. While our training-free model does not surpass fully-supervised or fine-tuned approaches like Scene-Verse [11] and Inst3DLLM [19], which demand extensive training on 3D data and LLMs, it achieves comparable performance without any training cost, underscoring its efficiency and effectiveness. Furthermore, compared to the zero-shot models, our model achieved best results across all categories with a clear advantage. To further substantiate our findings, we also performed ablation experiments on different LLM agents, which further demonstrates that our model consistently yields optimal results across various LLMs.
Besides, for $\mathrm { S r } 3 \mathrm { D }$ and Nr3D datasets, we evalute the top-1 using groud-truth boxes in Table II and the accuray at 0.1 and 0.25 IOU threshold for 5 categories in Table III. As shown in Table II, we validated the top-1 performance using groundtruth boxes across fully-supervised models [6], [10], [17], weakly-supervised model [55], and zero-shot models [23], [49]. Our model achieved best results across 5 different metrics, demonstrating its superior performance. As shown in Table III, our model also consistently outperformed all SOTA works [13], [14], [63] across 4 cases of $\mathrm { S r } 3 \mathrm { D }$ and $\mathrm { N r } 3 \mathrm { D }$ datasets. Compared to ConceptGraph [13], BBQ [14] and Open3DSG [32], which also utilize LLMs and graph representations for reasoning, our model shows significant advantages, validating the semantic aligned features in reasoning with free-form queries. Fig.4 shows the quantitative comparison. SeeGround [23] fails to capture object relationships like “near”, while BBQ [14] struggles with semantic label like “single sofa”, hindering accurate grounding. In contrast, our model precisely grounds objects with correct semantic labels and understands both scene-level and object-level spatial relationships.
Fig. 5. Comparison semantic segmentation on the Replica dataset. The semantic map highlights the regions most relevant to the query’s semantic features, with deeper colors indicating higher relevance, where red represents the most relevant semantics.
TABLE IV COMPARISONS OF 3D SEMANTIC SEGMENTATION TASK BETWEEN OUR MODEL AND SOTA METHODS ON REPLICA AND SCANNET DATASETS.
2) Complex Queries: To evaluate our model’s capability for complex semantic queries, we compare the “hard” case on Sr3D [36] and Nr3D [36] datasets, and “Multiple” case on ScanRefer [37]. As shown in Tables III and I, our model exhibits significant advantages in handling all complex semantic queries and multi-object queries. This validates that our approach can more effectively comprehend complex semantic queries, leveraging 3D semantically consistent scene graphs.
TABLE V COMPARISONS OF 3D SCENE GRAPH GENERATION IN OBJECT, PREDICATE, AND RELATIONSHIP PREDICTION ON 3DSSG [28] DATASET.
As illustrated in rows 3-4 of Fig. 4, we further present a comparison of our model against BBQ [14] and SeeGround [23] in 3D visual grounding with complex free-form semantic queries. The results demonstrate that our model consistently identifies the correct target objects under various complex semantic queries, whereas others struggle to comprehend and resolve such intricate semantics.
3) 3D Semantic Segmentation: As shown in Table IV and Fig. 5, we evaluate on 3D semantic segmentation task on Replica [38] and ScanNet [39] datasets. Following ConceptGraph [13], we matched object nodes’ fused features to CLIP text embeddings of “an image of class”, then assigned points to their semantic categories via similarity scores. We compare our model against SOTA zero-shot 3D open-vocabulary segmentation methods [13], [14], [65]–[67] and privileged approaches leveraging pre-trained datasets [45], [61], [62], where our method consistently achieves notable gains. Compared to BBQ [14] and Open3DSG [32], our model delivers superior results on the ScanNet benchmark [39]. Furthermore, our zero-shot approach surpasses OpenFusion [63], a supervised model fine-tuned for semantic segmentation, highlighting the strength of our training-free framework. In Fig. 5, following ConceptGraph [13], we compute the similarity between each node’s semantic features and the query’s CLIP text embedding, with darker map colors (red) indicating higher semantic similarity. Our method pinpoints key semantic features, whereas others fixate on irrelevant cues. For various free-form semantic queries, our model accurately segments the corresponding semantic areas, while PoVo [35] and PLA [4] fail to understand these complex, free-form semantic queries.
TABLE VI ABLATION STUDY. GRAPH: 3D SCENE GRAPH, SA: SEMANTIC ALIGNMENT. REASONING: LLM-BASED REASONING. WE PRESENT THE OVERALL ACCURACY.
Fig. 6. Comparison of our semantic consistent scene graph with other scene graphs of the ConceptGraph [13] and BBQ [14].
4) 3D Scene Graph: Table V shows our 3D scene graph evaluation on 3DSSG [28], where we surpass all SOTA models [13], [14], [32], [62] in object, predicate, and relationship prediction.This demonstrates that methods leverage on pre-train datasets, such as OpenSeg [62], are unsuitable for predicting object nodes and relationships, while models over-rely on LLMs(e.g., ConceptGraph [13], BBQ [14]) fail on small or thin objects. While Open3DSG [32] can predict open-vocabulary objects, it still faces challenges in free-form relationship prediction. In contrast, our model can precisely predict free-form objects, predicates, and relationships without any training priors. As shown in Fig 6, it illustrates that our model not only creates a semantically consistent scene graph with complete nodes and correct relations but also assigns the correct semantic labels to each node.
# C. Ablation Study
Ablations are conducted to validate the efficacy of the proposed methods. In the first row of Table 5, we use ConceptGraph [13] as a baseline. For the model without reasoning, we apply ConceptGraph’s simpler reasoning for inference.
1) 3D Scene Graph: The comparison between Rows 1–2 in Table VI demonstrates that our 3D scene graph significantly improves visual grounding performance on Sr3D and Nr3D. This validates the effectiveness of our scene representation in capturing free-form objects and their relationships. As illustrated in Fig. 6, our model constructs a semantically consistent scene graph with complete nodes and accurate relations.
TABLE VII COMPARISONS OF REASONING ALGORITHM ON SR3D AND NR3D.
TABLE VIII COMPARISON OF MEAN AND OVERALL COMPUTATIONAL TIME.
TABLE IX ABLATION STUDIES OF DIFFERENT LLAVA MODELS.
Fig. 7. Error analysis of FreeQ-Graph on ScanRefer dataset.
2) Semantic Alignment: As shown in rocorrewctsr latio2n locatinonds a3nti reoasfon Table VI, Aligning the semantic features of the g1r3aph nodes with the semantic consistent superpoint features sirgeansionfinicga45ntly enhances th5e8 performance of two datasets. Thicsor cth18irelgatiohnl liocgatiohn tse atntihc aretas the proposed module1 effectively aligns the consisten11t6 semantic label of the graph nodes, as shown in Fig. 6.
3) LLM-based CoT-reasoning: Rows 3-4 in Table VI show that the LLM-based reasoning enhances the model’s ability to infer complex semantics. It indicates that integrating scenelevel and object-level information fosters a more nuanced understanding of complex scenes. Furthermore, by decomposing the complex query into two stages, the model more effectively identifies candidate objects and their relationships, enabling deeper analysis to determine the final target.
4) Reasoning algorithms: We explored how reasoning algorithms affect 3D object grounding on the $\mathrm { S r } 3 \mathrm { D }$ [36] and Nr3D [36], evaluating with $\operatorname { A c c } @ 0 . 1$ and $\operatorname { A c c } @ 0 . 2 5$ metrics. As shown in Table VII, our model outperforms various SOTA reasoning methods. models [13], [14]. Moreover, our reasoning algorithm can seamlessly integrate with others, such as ConceptGraph [13], significantly enhancing their ability to handle free-form complex semantic queries. It demonstrates the superiority of our LLM-based reasoning algorithm for freeform scene semantic queries.
5) Computational costs: As shown in Table VIII, our method achieves superior efficiency with significantly lower computational cost than other zero-shot, LLM-based approaches that require no pre-training or fine-tuning [13], [14]. Unlike fully-supervised or fine-tuned LLM-based models that demand hours of training, our training-free framework highlights strong practical efficiency. While we also use a GPT-based model for reasoning, like ConceptGraph and BBQ, our method delivers faster inference under the same settings. This is enabled by our semantically aligned 3D scene graph, which ensures accurate and efficient semantic representation and relation extraction. Furthermore, our CoT reasoning decomposes complex queries into manageable steps, improving reasoning speed. In contrast, ConceptGraph and BBQ rely solely on LLM outputs, often overlooking inconsistencies that lead to semantic misalignment and slower performance.
6) Error Analysis: To assess reliance, we perform an error analysis on 200 randomly selected ScanRefer [37] samples (Fig. 7), categorizing errors into 5 cases. Our scene graph enhances localization and relation detection, while semantic alignment reduces mislabeling errors. Our reasoning module effectively mitigates inference errors and keep stability.
7) Different LLaVA models.: As shown in Table IX, ablation on different LLaVAs over Nr3D and Sr3D shows that advanced LLaVA improves grounding by reducing caption errors. Our model remains more stable than BBQ and ConceptGraph, indicating that our semantically consistent 3D scene graph and reasoning reduce reliance on specific versions. | Semantic querying in complex 3D scenes through free-form language presents a
significant challenge. Existing 3D scene understanding methods use large-scale
training data and CLIP to align text queries with 3D semantic features.
However, their reliance on predefined vocabulary priors from training data
hinders free-form semantic querying. Besides, recent advanced methods rely on
LLMs for scene understanding but lack comprehensive 3D scene-level information
and often overlook the potential inconsistencies in LLM-generated outputs. In
our paper, we propose FreeQ-Graph, which enables Free-form Querying with a
semantic consistent scene Graph for 3D scene understanding. The core idea is to
encode free-form queries from a complete and accurate 3D scene graph without
predefined vocabularies, and to align them with 3D consistent semantic labels,
which accomplished through three key steps. We initiate by constructing a
complete and accurate 3D scene graph that maps free-form objects and their
relations through LLM and LVLM guidance, entirely free from training data or
predefined priors. Most importantly, we align graph nodes with accurate
semantic labels by leveraging 3D semantic aligned features from merged
superpoints, enhancing 3D semantic consistency. To enable free-form semantic
querying, we then design an LLM-based reasoning algorithm that combines
scene-level and object-level information to intricate reasoning. We conducted
extensive experiments on 3D semantic grounding, segmentation, and complex
querying tasks, while also validating the accuracy of graph generation.
Experiments on 6 datasets show that our model excels in both complex free-form
semantic queries and intricate relational reasoning. | [
"cs.CV"
] |
# 1. Introduction
Image fusion is a technique that integrates complementary information from multiple sensors or diverse imaging conditions to generate a unified, comprehensive representation of the scene. By leveraging the distinct yet complementary characteristics of different modalities [1, 2]—such as thermal radiation in infrared imaging and texture details in visible light, this technology produces fused images with enhanced informational content and improved visual interpretability, thereby facilitating more accurate scene understanding and analysis. For example, in the infrared and visible image fusion task [3, 4], visible sensors excel in preserving texture details and vivid colors but may fail to capture the key information in low light conditions or when obstructions appear. Conversely, although infrared images cannot preserve these fine-grained texture details, they can highlight targets under these adverse conditions. Fusing these two modalities together, we can obtain high-quality images that preserve both rich texture details and salient thermal information, thus overcoming the imaging limitations of sensors. This technique plays a significant role in various fields such as autonomous driving [5], medical imaging [6, 7], visual object tracking [8, 9], etc.
Figure 1: Previous Euclidean-based methods perform algebraic attention weighting, which can inadvertently weaken feature representations. In contrast, our approach emphasizes semantic similarity computation, adhering to the geometric structure of the Grassmann manifold for modeling. By decomposing high-frequency and low-frequency information across modalities, our method generates more reasonable and discriminative attention outputs.
However, effectively integrating these complementary modalities requires sophisticated mechanisms to resolve inherent discrepancies in cross-modal representations, such as the attention mechanism [10, 11, 12, 13, 14, 15]. It originally emerged from cognitive science in the 1990s and quickly expanded into the field of computer vision, simultaneously driving the development of multimodal learning. The spatial and channel attention mechanisms enable fusion models to dynamically allocate weights based on the content of the input images [16, 17, 18], improving the extraction of salient image features. However, these methods often prioritize intramodal feature associations but overlook inter-modal relationships, which are essential for fusion tasks. We argue that the complementary information of different modalities should be emphasized more vigorously by enhancing the internal features with low correlation. In recent research, some methods have recognised this issue and designed approaches based on cross-attention [19, 20, 21], which delivers promising results. While cross-attention improves interaction, existing methods still struggle to fully decouple modality-specific features and efficiently model high-dimensional geometric relationships. To address this research gap, we propose a manifold learning framework. Unlike conventional approaches that rely solely on Euclidean metrics, our method embeds high-dimensional data into the Grassmann manifold, effectively preserving local Euclidean relationships while capturing global nonlinear correlations. Specifically, the geometric structure of the Grassmann manifold inherently facilitates cross-modal feature decoupling through its orthonormal basis system. When processing infrared and visible images, this architecture automatically separates spectral and textural features into distinct yet geometrically coherent subspaces via orthogonal matrix mappings, thereby maintaining inter-modal information integrity, and this proves particularly crucial for infrared-visible image fusion tasks. By leveraging the manifold’s intrinsic properties, our framework provides a more natural representation for multimodal fusion, overcoming the limitations of purely Euclideanbased approaches.
Therefore, in this paper, we propose a novel transformer architecture for infrared and visible image fusion based on the Grassmann manifold (GrFormer), which achieves semantic similarity computation within the attention mechanism to extract meaningful image outputs. As shown in Fig. 1, we dynamically enhance semantic alignment and salient region fusion of cross-modal features through the Grassmann manifoldconstrained attention mechanism, achieving complementary interaction between infrared and visible features in the low-rank manifold space. Compared to Euclidean attention, the features we obtain contain richer semantic information. At the same time, the Riemannian manifold-based attention network is extended to the spatial domain to better adapt to the intrinsic topological structure of the image, thereby achieving a better fusion of infrared and visible images. Finally, a new cross-modal strategy is introduced for the cross-modal image fusion task. The proposed embedding of this module into the classical Vision Transformer (ViT) structure [10] highlights the low-correlation (complementary) features of the two modalities and facilitates the analysis of inter-modal statistical relationships. The main contributions of this paper are summarized as follows:
• We propose a novel model to embed Grassmann subspace representations in Euclidean attention networks, which first extends both spatial and channel attention mechanisms to manifold space for fusion. This approach effectively captures and fuses the inherent structural and semantic properties of multi-modal image data.
• Our framework constructs low-rank subspace mappings to disentangle high-frequency details and low-frequency semantics in images, enabling hierarchical cross-modal statistical relation learning through manifold-constrained optimization.
• We propose a novel fusion strategy that leverages learnable mask tensors to achieve foreground-background separation, effectively enhancing complementary information exchange across modalities.
• Experimental results on widely used benchmarks clearly demonstrate the superior performance of the proposed manifold-based approach, both qualitatively and quantitatively.
# 2. Related work
In this section, we first review classic deep learning fusion frameworks, followed by an overview of feature decomposition-based fusion methods. We then present a detailed discussion on Grassmann manifold subspaces and their relevance to our work.
# 2.1. Fusion methods based on deep learning
Previous fusion networks have achieved impressive results by leveraging the powerful fitting capabilities of deep learning. These methods utilise Convolutional Neural Networks (CNNs) for efficient feature extraction and reconstruction [22, 16, 23]. However, the quality of fusion results heavily depends on handcrafted fusion strategies. Consequently, some end-to-end deep networks [24, 4] are proposed to address this issue. Recent work further tackles misaligned inputs by integrating implicit registration with fusion in a unified framework [25, 26, 27], eliminating reliance on pre-alignment. Meanwhile, generative paradigms including GANs [28, 29] and meta-learningenhanced architectures [30] have demonstrated advantages in texture generation and modality adaptation. These advances collectively underscore the evolving paradigms in feature representation and fusion strategy learning.
In addition, the introduction of attention mechanisms has significantly accelerated the advancement of image fusion. Some CNN-based fusion methods demonstrated attention’s effectiveness through dual mechanisms: [31] combined channel and spatial attention for adaptive feature fusion, while [16] used nested connections with multi-scale attention to preserve critical information. These studies established an important foundation for applying attention mechanisms to fusion tasks.
Figure 2: The workflow of our GrFormer. In the encoding stage, the input is first encoded by convolutional layers and divided into patches, followed by processing through the Grassmann-embedded self-attention module (GSSM) and cross-attention module (GSCM), which effectively capture both intra-modal and inter-modal discriminative semantic information. To achieve a more comprehensive information representation, we extend these two network architectures to both spatial and channel dimensions and integrate them through concatenation. In the decoding stage, a convolutional network-based decoder is employed to generate the fused image. In (c), “Gr” denotes both Grassmann manifold modeling (GrMM) and Grassmann network learning (GrNL). Specifically, the “GSCM” incorporates an additional cross-modal fusion strategy (CMS).
Building upon these foundations, the field has witnessed a paradigm shift with the introduction of Transformer architectures. Recent Transformer-based methods [32, 33, 20, 34, 17] have advanced fusion performance through self-attention mechanisms, effectively capturing global dependencies while preserving modality-specific features. These approaches excel in tasks that require precise spatial alignment across imaging modalities.
Unfortunately, this global attention mechanism may overlook the low-rank structure of image regions, resulting in insufficient capture of local details, which affects the fusion performance.
# 2.2. Feature decomposition-based fusion methods
The field of infrared and visible image fusion has witnessed significant advances through diverse methodologies that decompose and integrate multimodal information in distinct yet complementary ways. Among these, STDFusionNet[35] and SMR-Net[36] explicitly decompose salient targets and texture details, using spatial masks to guide the fusion process, while SSDFusion[37] further decomposes images into scene-related and semantic-related components, enriching contextual information by injecting fusion semantics. While these methods have explored salient target preservation and scene-semantic decomposition respectively, they fundamentally operate within Euclidean space and rely on implicit feature separation. In contrast, our Grassmann manifold-based framework explicitly models cross-modal relationships through geometric priors, eliminating the need for heuristic masking or manual decomposition. Similarly, FAFusion[38] decomposes images into frequency components to preserve structural and textural details but misses global nonlinear correlations in cross-modal data. To address this, our method leverages the Grassmann manifold’s orthonormal basis to explicitly model these global nonlinear relationships, enabling more effective fusion.
These methods share a common underlying principle: the decomposition of multimodal data into interpretable components to facilitate effective fusion. This principle extends naturally to subspace-based methods, which operate on the assumption that data can be embedded into a low-dimensional subspace to capture its most significant features. Many fusion methods based on subspace representation have been proposed [39, 40, 41], leveraging the inherent structure of the data to identify and preserve critical information. Among the most commonly used are sparse and low-rank representation techniques [42, 43], which exploit both local and global structural properties of the data to extract features and conduct fusion more effectively. However, such linear subspace paradigms inherently disregard the nonlinear manifold geometry underlying multimodal imagery, where the geodesic consistency of intrinsic structures is critical for harmonizing low-level gradients with high-level semantics during fusion.
# 2.3. Grassmann manifold subspace representation
Over the past decade, tasks such as face recognition, skeleton-based action recognition, and medical image analysis have received a lot of attention. Meanwhile, the learning method based on the Grassmann manifold representation has been widely applied in the classification task [44, 45, 46, 47].
GrNet [44] first generalises Euclidean neural networks to the Grassmann manifold, marking a novel exploration of deep network architecture. Following GrNet, GEMKML [47] realises video frame sequence classification by constructing a lightweight cascaded feature extractor to hierarchically extract discriminative visual information. SRCDPC [45] extends the research to affine subspaces, designing a new kernel function to measure the similarity between affine subspaces and generating a low-dimensional representation (RVF) of affine spaces through the diagonalization of the kernel-gram matrix. Additionally, in [46] the authors integrate Riemannian SGD into the deep learning framework, enabling the simultaneous optimisation of class subspaces on the Grassmann manifold with other model parameters, thereby enhancing classification accuracy.
While other Riemannian manifolds have been explored for representation learning, they present certain limitations. For example, SPD manifolds, which model symmetric positive definite matrices (e.g., covariance descriptors via ${ \mathcal { P } } _ { n } ~ = ~ \{ X ~ \in ~ \$ $\mathbb { R } ^ { n \times n } | X = X ^ { T } , X > 0 \}$ )—excel in capturing second-order statistics but struggle with high-dimensional image data due to computational complexity and sensitivity to noise [48, 49]. Similarly, Stiefel manifolds, defined as $\mathcal { V } _ { n , m } = \{ X \in \mathbb { R } ^ { n \times m } | X ^ { T } X =$ $I _ { m } \}$ , preserve orthonormality but enforce overly rigid constraints that may discard discriminative multi-modal correlations [50].
In contrast, Grassmann manifolds naturally encode affineinvariant subspace relationships. For example, while infrared and visible images may exhibit linear distortions due to sensor differences, their essential features (such as edge structures and thermal radiation distributions) correspond to subspaces that remain equivalent on the manifold [51]. This representation flexibly captures the underlying geometric structure of the data, complementing the long-range feature learning strengths of Transformers and making it particularly well-suited for multimodal fusion tasks.
# 3. Proposed method
In this section, we provide a detailed description of our method. The overall framework of this approach is presented in Fig. 2.
# 3.1. Preliminaries
The Grassmann manifold ${ \mathcal { G } } ( q , d )$ consists of all $q$ - dimensional linear subspaces in $\mathbb { R } ^ { d }$ , forming a compact Riemannian manifold of dimensionality $q ( d - q )$ . Each subspace is spanned by an orthonormal basis matrix $\mathbf { Y }$ with the dimension of $d \times q$ , satisfying $\mathbf { Y } ^ { T } \mathbf { Y } = \mathbf { I } _ { q }$ , with $\mathbf { I } _ { q }$ being the identity matrix of size $q \times q$ . The projection mapping [52] $\Phi ( { \mathbf { Y } } ) = { \mathbf { Y } } { \mathbf { Y } } ^ { T }$ not only represents the linear subspace but also approximates the true geodesic distance on the Grassmann manifold.
# 3.2. Image fusion pipeline
In this section, we provide a detailed explanation of the pipeline for the Grassmann manifold embedded fusion network.
# 3.2.1. Image encoder
In the initial stage of the network, $I _ { i r }$ and $I _ { \nu i }$ represent the infrared and visible images, respectively. Two separate streams are employed to process them individually, using identical convolutional encoding layers to extract deep features $\left\{ \Phi _ { I } ^ { D } , \Phi _ { V } ^ { D } \right\}$ from the corresponding source images. This process is represented by $\mathcal { D } ( \cdot )$ :
$$
\Phi _ { I } ^ { D } = { \mathcal { D } } ( I _ { i r } ) , \Phi _ { V } ^ { D } = { \mathcal { D } } ( I _ { \nu i } ) .
$$
# 3.2.2. Grassmann manifold modeling
In the Grassmann manifold attention module, we integrate a projection operation into the ViT architecture [10] to construct an attention-based fusion network on the Grassmann manifold, effectively leveraging low-rank semantics of distinct subspaces. Meanwhile, several common manifold operations will be defined in Section 3.3. Let $\mathbf { X } _ { \mathbf { k } } ~ \in ~ \mathbb { R } ^ { ( h \times w ) \times d }$ represent the input features. Here, $\mathbf { k }$ indexes different modalities, while $h , w _ { ; }$ , and $d$ represent height, width, and number of channels, respectively. By learning $d$ -dimensional projection matrices $\mathbf { W } \in \mathbb { R } ^ { d \times d }$ , we obtain the query, key, and value matrices respectively:
$$
\mathbf { Q } = \mathbf { X } _ { \mathrm { k } } \mathbf { W } _ { \mathbf { Q } } , \mathbf { K } = \mathbf { X } _ { \mathrm { k } } \mathbf { W } _ { \mathbf { K } } , \mathbf { V } = \mathbf { X } _ { \mathrm { k } } \mathbf { W } _ { \mathbf { V } } .
$$
To satisfy the orthogonality assumption of the queries and keys on the Grassmann manifold, we perform a projection operation on the attention matrix, as shown in Fig. 2 (b):
$$
\mathcal { A } _ { r } = \operatorname { O r t h M a p } ( \operatorname { P r o j } ( \mathbf { Q } ^ { T } \mathbf { K } ) ) ,
$$
where Proj [52] is a projection mapping, OrthMap [44] is an orthogonal mapping layer on a Grassmann manifold, $\mathcal { A }$ denotes the attention matrix and $r$ is the index of subspace projection.
# 3.2.3. Grassmann network learning
Simultaneously, we project the attention matrix into different manifold subspaces, and use the classical Grassmann network to update parameters of the attention matrix:
$$
\mathcal { A } _ { r } ^ { \prime } = \operatorname { P r o j } \left( \operatorname { R e O r t h } \left( \operatorname { F R M a p } \left( \mathcal { A } _ { r } \right) \right) \right) ,
$$
$$
\mathrm { A t t e n t i o n } ( \mathbf { Q } , \mathbf { K } , \mathbf { V } ) = \mathbf { V } ( \mathrm { s o f t m a x } ( \frac { \mathcal { R } _ { r } ^ { \prime } } { \sqrt { d _ { i n p } } } ) ) ,
$$
where FRMap [44] is a full-rank mapping layer that projects attention features into multi-scale subspaces, separating highfrequency details from low-frequency semantics while adaptively preserving structural relationships. Meanwhile, ReOrth [44] is the re-orthonormalization layer, which subsequently enforces orthogonality via QR decomposition and inverse correction. The Proj maps subspace representations back to the original space via matrix projection, reconstructing the original attention matrix dimensionality. These operations collectively ensure geometric stability on the Grassmann manifold.
The $d _ { i n p }$ represents the dimension of the input vector. The forward process of the Attention Module operation is as follows:
$$
\begin{array} { r l } & { \mathbf { X } _ { \mathrm { k } } = \mathbf { X } _ { \mathrm { k } } + \mathsf { A t t e n t i o n } ( \mathsf { N o r m } ( \mathbf { X } _ { \mathrm { k } } ) ) , } \\ & { \mathbf { X } _ { \mathrm { k } } = \mathbf { X } _ { \mathrm { k } } + \mathsf { M L P } ( \mathsf { N o r m } ( \mathbf { X } _ { \mathrm { k } } ) ) , } \\ & { s . t . \ \mathrm { k } \in \{ i r , \nu i \} , } \end{array}
$$
where $\mathrm { { N o r m } ( \cdot ) }$ denotes the normalization operation, and $\mathrm { \mathbf { M L P } ( \cdot ) }$ is a multi-layer perception.
# 3.2.4. Grassmann-based transformer
To fuse multi-modal features, we construct an attention network based on Grassmann manifold subspace. As shown in Fig. 2 (a), GSSM and GSCM project the features onto subspaces through the FRMap layer and integrate information using the attention matrices. We denote $G S S M ^ { C } \left( \cdot \right)$ and $G S S M ^ { S }$ ( ) as the Grassmann-based Transformers in channel and spatial domains of the intra-modality, respectively. Similarly, $G S C M ^ { C }$ (·) and $G S C M ^ { S }$ (·) represent the Grassmann-based Transformers in channel and spatial domains of the inter-modality. We exchange queries between these two modalities, as is done in the cross attention mechanism (CAM). The specific cross-modal fusion strategy is detailed in Section 3.4. By manifold learning through four different spaces, we obtain low-rank features with statistical correlations within and across modalities $\left\{ \Phi _ { I , V } ^ { S M } , \Phi _ { I , V } ^ { C M } \right\}$ , as well as the concatenated features $\Phi _ { I , V } ^ { C }$ , which are defined as below:
$$
\begin{array} { c } { { \Phi _ { I , V } ^ { S M } = \left\{ G S S M ^ { C } \left( \Phi _ { I , V } ^ { D } \right) , G S S M ^ { S } \left( \Phi _ { I , V } ^ { D } \right) \right\} , } } \\ { { { } } } \\ { { \Phi _ { I , V } ^ { C M } = \left\{ G S C M ^ { C } \left( \Phi _ { I , V } ^ { D } \right) , G S C M ^ { S } \left( \Phi _ { I , V } ^ { D } \right) \right\} , } } \\ { { { } } } \\ { { \Phi _ { I , V } ^ { C } = \left\{ \Phi _ { I , V } ^ { S M } , \Phi _ { I , V } ^ { C M } \right\} , } } \end{array}
$$
where $\Phi _ { I , V } ^ { D }$ represents the depth features obtained by concatenating $\Phi _ { I } ^ { D }$ and $\Phi _ { V } ^ { D }$ in Equation $1 . { \overset { \underset { } { \bullet } } { \cdots } } \{ \} ^ { \flat }$ is the channel concatenation operation.
# 3.2.5. Fusion decoder
In the decoder $\mathcal { D } \boldsymbol { C } ( \cdot )$ , features derived from manifold learning along the channel dimension serve as input. The fused image $I _ { f }$ is generated through a series of convolutional layers that progressively reduce dimensionality, thereby enhancing edge and texture preservation. Here, “Feature Reconstruction” refers to the convolutional-layer-based fusion process that refines and integrates multi-source features into the final output. The decoding process can be defined as:
$$
I _ { f } = \mathcal { D C } \left( \Phi _ { I , V } ^ { C } \right) .
$$
# 3.3. Grassmann manifold based attention
We replace the traditional scalar weighting with orthogonal transformations that conform to the Grassmann manifold.
Figure 3: The framework of our cross-modal fusion strategy. It applies the mask matrix inside the covariance matrix to highlight the complementary information with low correlation and suppress the redundant information with strong correlation.
# 3.3.1. OrthMap layer
To ensure that the projected attention matrix satisfies the orthogonality constraint, we apply an OrthMap layer [44] to an attention matrix $\mathbf { Y } _ { k - 1 }$ for the transformation:
$$
\begin{array} { r } { \mathbf { Y } _ { k } = f _ { o m } ^ { ( k ) } ( \mathbf { Y } _ { k - 1 } ) = \mathbf { U } _ { k - 1 , 1 : q } , } \end{array}
$$
where $k$ denotes the number of network layers, and $\mathbf { U } _ { k - 1 , 1 : q }$ is obtained by performing eigenvalue (EIG) decomposition [53] on $\mathbf { Y } _ { k - 1 }$ and extracting the first $q$ largest eigenvectors.
# 3.3.2. FRMap layer
In the FRMap layer [44], we aim to transform $\mathbf { Y } _ { k }$ into a representation $\mathbf { Y } _ { k + 1 }$ in a new space through a linear mapping. It is formulated as:
$$
\mathbf { Y } _ { k + 1 } = f _ { f r } ^ { ( k + 1 ) } ( \mathbf { Y } _ { k } ; \mathbf { W } _ { k + 1 } ) = \mathbf { W } _ { k + 1 } \mathbf { Y } _ { k } ,
$$
where $\mathbf { W } _ { k + 1 }$ is a transformation matrix that maps $\mathbf { Y } _ { k }$ from the $\mathbb { R } ^ { d _ { k } \times q }$ space to the $\mathbb { R } ^ { d _ { k + 1 } \times q }$ space. Since $\mathbf { W } _ { k + 1 }$ is row full-rank, this means it can preserve the structure of the subspace but may not remain the orthogonality.
# 3.3.3. ReOrth layer
Since $\mathbf { Y } _ { k + 1 }$ may no longer be an orthogonal matrix, it is necessary to re-orthogonalize it using QR decomposition, and it is similar to the ReOrth layer [44], i.e.
$$
\mathbf { Y } _ { k + 1 } = \mathbf { Q } _ { k + 1 } \mathbf { R } _ { k + 1 } ,
$$
where $\mathbf { Q } _ { k + 1 }$ is an orthogonal matrix and ${ \bf R } _ { k + 1 }$ is an upper triangular matrix. We then re-orthogonalize $\mathbf { Y } _ { k + 2 }$ by the following manner:
$$
\mathbf { Y } _ { k + 2 } = f _ { r o } ^ { ( k + 2 ) } ( \mathbf { Y } _ { k + 1 } ) = \mathbf { Y } _ { k + 1 } \mathbf { R } _ { k + 1 } ^ { - 1 } .
$$
In this way, $\mathbf { Y } _ { k + 2 }$ is transformed back into an orthogonal matrix, preserving the orthogonality of the subspace.
# 3.3.4. Projection layer
To project $\mathbf { Y } _ { k + 2 }$ into a lower-dimensional space and preserve its geometric structure, we construct a manifold layer based on projection operations:
$$
\mathbf { Y } _ { k + 3 } = f _ { p m } ^ { ( k + 3 ) } ( \mathbf { Y } _ { k + 2 } ) = \mathbf { Y } _ { k + 2 } \mathbf { Y } _ { k + 2 } ^ { T } .
$$
The projection layer [44] uses linear transformations to expand the dimension of the orthogonal matrix, thereby reconstructing the attention weights in a manner that conforms the intrinsic relationships captured in the low-dimensional space.
# 3.3.5. Spatial attention
As illustrated in Fig. 2 (c), to extend the manifold attention to the spatial dimension, we first reorganize input features through shuffling, which redistributes spatial elements into block-wise groupings. This step enables localized Grassmann low-rank projections and attention weighting to capture relationships between adjacent patches. After processing, an unshuffling operation restores the original spatial arrangement, ensuring global coherence while retaining attention-enhanced representations. It is worth noting that the QR decomposition significantly increases the computational complexity when dealing with multiple subspaces, making it necessary to seek an optimal trade-off between the algorithm efficiency and numerical robustness. Thus, we select the most representative low-rank layers, through ablation experiments, as the feature representation of the manifold space attention.
# 3.4. Cross-modal fusion strategy
The natural covariance matrix obtained by projecting onto the manifold through the Proj layer serves as our benchmark, which reflects the statistical correlation between patches of different modalities. However, in image fusion tasks, regions with smaller correlations usually require more attention. Thus, in our method, by adjusting the weights of the covariance matrix, we guide the network to focus on those complementary informations. Fig. 3 illustrates our strategy framework.
We treat the cross-modal attention matrix constructed from the images $I _ { i r }$ and $I _ { \nu i }$ as a metric tensor:
$$
\mathbf { M } = \left[ \begin{array} { c c c c } { 1 } & { - 1 } & { \cdots } & { - 1 } \\ { - 1 } & { 1 } & { \cdots } & { - 1 } \\ { \vdots } & { \vdots } & { \ddots } & { \vdots } \\ { - 1 } & { - 1 } & { \cdots } & { 1 } \end{array} \right] .
$$
After the masking operation, we obtain the modality information-enhanced attention matrix $\Sigma _ { r } ^ { \prime }$ , where $r$ represents the dimensionality of different subspaces:
$$
\boldsymbol { \Sigma } _ { r } ^ { \prime } = \mathbf { M } \odot \boldsymbol { \Sigma } _ { r } ,
$$
where $\mathbf { M }$ represents the mask operation, and $\textstyle \sum _ { r }$ denotes the original attention matrix.
Then, reshaped attention feature maps of different dimensions $\mathbf { A } _ { I , V } ^ { W _ { r } }$ are obtained by performing feature matrix-multiply between $\Sigma _ { r } ^ { \prime }$ and $\mathbf { V } _ { r }$ .
Finally, these feature maps are averaged and concatenated to obtain the fused feature map $\Phi _ { f }$ :
$$
\Phi _ { f } = \left\{ \frac { 1 } { r } \sum _ { i = 1 } ^ { r } \mathbf { A } _ { I } ^ { W _ { r } } , \frac { 1 } { r } \sum _ { i = 1 } ^ { r } \mathbf { A } _ { V } ^ { W _ { r } } \right\} .
$$
This operation increases the “distance” between different modal features, geometrically manifested as forcing data points to expand in directions with large inter-modal differences while maintaining the original structure within each modality.
# 3.5. Loss function
The quality of the fused image is critically influenced by the design of the loss function. To facilitate the attention network in extracting rich, statistically relevant information from the source image across diverse intrinsic subspaces, we propose a detail-semantic complementary loss function. This loss function guides the network to effectively reconstruct the input modalities by balancing fine-grained details and high-level semantic features. The total loss function is defined as:
$$
\begin{array} { r } { L _ { t o t a l } = L _ { i n t } + \alpha L _ { g r a d } + \beta L _ { c o \nu } + \gamma L _ { s s i m } , } \end{array}
$$
where $L _ { i n t }$ computes the $l _ { 1 }$ distance between the fused image and the element-wise maximum of the input images. It is guided by reconstructing the source images at the pixel level to highlight the important regions. Its definition is as follows:
$$
L _ { i n t } = \frac { 1 } { H W } \parallel I _ { f } - m a x ( I _ { i r } , I _ { \nu i s } ) \parallel _ { 1 } ,
$$
where $H$ and $W$ represent the height and width of an image, respectively. The $m a x \left( \cdot \right)$ function takes the maximum value of the corresponding elements in the input matrix, and $| | \mathbf { \partial } \cdot \mathbf { \partial } | | _ { 1 }$ is $l _ { 1 } - n o r m$ .
To achieve a more precise texture representation in the subspace, we introduce gradient-based constraints between the source images and the fusion result, i.e., a set of regularization terms that minimize the discrepancies in gradient magnitudes and orientations:
$$
L _ { g r a d } = \frac { 1 } { H W } \parallel \left| \nabla I _ { f } \right| - m a x ( | \nabla I _ { i r } | , | \nabla I _ { \nu i s } | ) \parallel _ { 1 } ,
$$
where $\nabla$ and $| \cdot |$ represent the Sobel operator.
At the feature level, in order to maximize the retention of deep semantics in the feature subspace, we use the VGG-16 trained on ImageNet for feature extraction and select the deep convolutional blocks to design the loss function. The definition of $L _ { c o \nu }$ is as follows:
$$
L _ { c o \nu } = \sum _ { k = 3 } ^ { w } | | C o \nu ( \Phi ( I _ { f } ) ^ { k } ) - C o \nu ( \Phi ( I _ { i r } ) ^ { k } ) | | _ { 1 } ,
$$
where $C o \nu \left( \cdot \right)$ denotes the covariance matrix of the feature map and $\Phi \left( \cdot \right)$ is the feature extracted from deep network. The $w$ is set to 4.
Figure 4: Infrared and visible image fusion experiment on TNO dataset. The intricate semantic features of highly correlated regions are well-preserved, as exemplified by the distinct outlines of eaves and shrubs in the second and fourth rows. Simultaneously, complementary information from low-correlation regions is sufficiently emphasized, such as the contours of figures, the colors of clothing in the first and third rows, and the precise separation of tree branches from the sky background.
Figure 5: Infrared and visible image fusion experiment on MSRS dataset. Our method effectively extracts the most valuable information from RGB images, as demonstrated in the first and third rows, where the details of the cars are more complete compared to other approaches. Simultaneously, in the second and fourth rows, the thermal infrared targets are prominently highlighted while effectively avoiding artifacts.
Finally, we compute the structural similarity loss between the fused image and the source image to enforce structural consistency, defined as follows:
$$
L _ { s s i m } = ( 1 - S S I M ( I _ { f } , I _ { \nu i s } ) ) + \delta ( 1 - S S I M ( I _ { f } , I _ { i r } ) ) ,
$$
where $S S I M$ is the structural similarity index [65], $\delta$ is the balance term of loss.
# 4. Experiments
In this section, we introduce the implementation and configuration details, and validate the rationality of the proposed method and the effectiveness of the modules with experiments.
# 4.1. Setup
We first introduce the key components of the methodology, including the datasets used, parameter configurations, pipeline design, evaluation methods with quality metrics, and network optimization strategies.
# 4.1.1. Datasets
In our work, we selected 1083 pairs of corresponding infrared and visible images from the MSRS dataset as training data. During the testing phase, we use 40 pairs of images from TNO [54] and 361 pairs of images from MSRS [55] as the test sets, respectively. The dimensions of the test images are typically not fixed.
Table 1: Quantitative Experiments on the TNO and MSRS Dataset. We represent the top three best-performing metrics using RED, BROWN, and BLUE fonts, respectively.
# 4.1.2. Parameter setting
We implemented the algorithm using PyTorch. In the training phase, an end-to-end strategy was employed to train the model on an NVIDIA TITAN RTX GPU, and the size of the training images is standardized to $2 5 6 \times 2 5 6$ pixels to ensure dimensional consistency across the network architecture. Within the manifold module, the Adam optimizer is used to update the weights of the Grassmann layers, with a learning rate set to $1 0 ^ { - 4 }$ . Additionally, the parameters $\alpha , \beta , \gamma _ { ; }$ , and $\delta$ in the loss function are empirically set to 1, 2, 10, and 1, respectively.
# 4.1.3. Pipeline design
The network employs a streamlined fusion architecture, first projecting inputs into higher dimensions via convolutional layers, then flattening the feature map into patches. Four parallel Grassmann manifold-based Transformer modules are used to process distinct attention types: (1) single-modal channel, (2) single-modal spatial, (3) cross-modal channel, and (4) cross-modal spatial attention. The feature dimension of the attention network is set to 64, with each attention head having a dimension of 8. The Channel Transformer assigns subspace coefficients to 2, 3, 4, 5 and aggregates features via summation, while the Spatial Transformer uses a fixed coefficient 100 for efficiency. Cross-modal interactions are explicitly modeled through dual-path attention between infrared/visible streams. During the decoding phase, all features are concatenated and progressively compressed via conv blocks $2 5 6 { } 1 9 2 { } 1 2 8 { } 6 4 { } 1$ channels) to produce the fused output. The design unifies intra-modal relationships (channel/spatial), cross-modal interactions, and Grassmann manifold projections within a consistent framework. Notably, all convolutional blocks are configured with a kernel size of 3 and a stride of 1 to ensure consistency across the architecture.
# 4.1.4. The methods and the quality metrics used
The method presented in this article was compared and evaluated with thirteen different image fusion network approaches, including some classic and latest methods. These are: FusionGAN [28], GANMcC [56], RFN-Nest [24], ReCoNet [57], DeFusion [58], MUFusion [59], SemLA [60], LRRNet [40], CrossFuse [20], VDMUFusion [61], EMMA [62], FusionBooster [63] and GIFNet [64]. Regarding quality metrics, six indices were chosen for performance evaluation, which include: Mutual Information (MI), Spatial Frequency (SF), Visual Information Fidelity (VIF), Average Gradient (AG), $Q ^ { A B / F }$ and structural similarity index measure (SSIM). The descriptions of these metrics can be found in [1].
# 4.1.5. Network optimization
As Grassmann optimization relies on EIG decomposition, we leverage the theoretical results of [53] for gradient calculation. Consider the eigenvalue decomposition of a real symmetric matrix $Y _ { k - 1 } \in \mathbb { R } ^ { D \times D }$ , where $k$ denotes the layer number in the manifold network:
$$
\begin{array} { r } { Y _ { k - 1 } = U \Sigma U ^ { T } , } \end{array}
$$
where $U$ is an orthogonal matrix ( $U ^ { T } U = I )$ and $\Sigma$ is a diagonal matrix containing the eigenvalues. The gradient of the loss function $L ^ { ( k ) }$ with respect to $Y _ { k - 1 }$ is derived as follows.
Under an infinitesimal perturbation $d Y _ { k - 1 }$ , the first-order variations of $U$ and $\Sigma$ are given by:
$$
d \Sigma = ( U ^ { T } d Y _ { k - 1 } U ) _ { \mathrm { d i a g } } ,
$$
$$
d U = U \left( \tilde { K } \circ ( U ^ { T } d Y _ { k - 1 } U ) \right) ,
$$
where $\tilde { K }$ is the kernel matrix defined as:
$$
\tilde { K } _ { i j } = \left\{ \begin{array} { l l } { \frac { 1 } { \sigma _ { i } - \sigma _ { j } } , } & { i \neq j , } \\ { 0 , } & { i = j . } \end{array} \right.
$$
Here, $\sigma _ { i }$ denotes the $i$ -th diagonal element of $\Sigma$ , and the gradient of $L ^ { ( k ) }$ with respect to $Y _ { k - 1 }$ is obtained by applying the chain rule:
$$
{ \frac { \partial L ^ { ( k ) } } { \partial Y _ { k - 1 } } } : d Y _ { k - 1 } = { \frac { \partial L ^ { ( k ) } } { \partial U } } : d U + { \frac { \partial L ^ { ( k ) } } { \partial \Sigma } } : d \Sigma .
$$
Substituting the expressions for $d U$ and $d \Sigma$ :
$$
\frac { \partial L ^ { ( k ) } } { \partial U } : d U = \left( \tilde { K } \circ \left( U ^ { T } \frac { \partial L ^ { ( k ) } } { \partial U } \right) \right) : ( U ^ { T } d Y _ { k - 1 } U ) ,
$$
$$
\frac { \partial L ^ { ( k ) } } { \partial \Sigma } : d \Sigma = \left( \frac { \partial L ^ { ( k ) } } { \partial \Sigma } \right) _ { \mathrm { d i a g } } : ( U ^ { T } d Y _ { k - 1 } U ) .
$$
Combining these terms and projecting back to the $Y _ { k - 1 }$ -space using the orthogonal transformation property $( U ^ { T } d Y _ { k - 1 } U ) \ =$ $U ^ { T } ( \cdot ) U$ , we obtain:
$$
\frac { \partial L ^ { ( k ) } } { \partial Y _ { k - 1 } } = U \left[ \left( \tilde { K } \circ \left( U ^ { T } \frac { \partial L ^ { ( k ) } } { \partial U } \right) \right) + \left( \frac { \partial L ^ { ( k ) } } { \partial \Sigma } \right) _ { \mathrm { d i a g } } \right] U ^ { T } .
$$
# 4.2. Comparison with SOTA methods
In this section, we conducted both qualitative and quantitative experiments on the proposed GrFormer with two classic infrared-visible datasets, TNO and MSRS, to verify the performance of our method.
# 4.2.1. Qualitative comparison
In Fig. 4 and Fig. 5, we present the visualization results of four image pairs from two datasets. The comparative methods can be categorized into four groups. The first group consists of generative model-based approaches, including FusionGAN, GANMcC, and VDMUFusion. These methods tend to suppress outliers, causing high-brightness regions to be smoothed or compressed, resulting in darker fused images with reduced contrast, as seen in the sky in Fig. 4. The second group includes decomposition-based methods such as DeFusion, LRRNet, and FusionBooster. Due to lightweight autoencoders or low-rank constraints compressing feature dimensions, highfrequency details are lost, exemplified by the texture of trees in Fig. 4. The third group comprises training-based methods, including RFN-Nest, CrossFuse, MUFusion, and EMMA. Among them, RFN-Nest and CrossFuse exhibit a bias toward the visible modality, leading to blurred edges of infrared targets. While the memory unit in MUFusion enhances fusion consistency, it propagates noise, as observed in the human targets in Fig. 4. EMMA relies on unsupervised cross-modal consistency constraints but lacks explicit supervision for edge details, as highlighted in the blue box in Fig. 4. The fourth group consists of task-driven methods, including SemLA, ReCoNet, and GIFNet. These approaches overly rely on high-level semantic or functional features, suppressing visible-light details such as the license plate of the car in Fig. 5. In contrast to these methods, our approach successfully integrates the thermal radiation information from the infrared modality with the texture details from the visible modality, preserving complex features in highly correlated regions while clearly separating salient targets in low-correlation regions, achieving an optimal balance.
Figure 6: Results of ablation study in different environments. Compared to traditional Euclidean attention mechanisms, our method successfully separates low-frequency semantics from the background. Meanwhile, the hybrid attention manifold network based on channel and spatial dimensions suppresses redundant information. Furthermore, the use of the cross attention mechanism preserves more high-frequency details in RGB images.
# 4.2.2. Quantitative comparison
We conducted a quantitative analysis of the proposed method using six metrics, as shown in Tab. 1. Our method exhibits significant performance improvements on nearly all metrics, confirming that it is universally applicable to various scenarios, capable of generating images consistent with human visual perception while retaining more complementary information. However, compared to the latest methods, our approach does not achieve the highest scores in sharpness-related metrics (AG, SF). This is due to the introduced noise in their results, which leads to inflated AG and SF values. In contrast, our method preserves more comprehensive information overall while simultaneously reducing noise interference.
# 4.3. Ablation studies
In this section, we analyse each key component of GrFormer, including: the spatial and channel attention modules, cross-modal fusion strategy, cross-modal attention mechanism, manifold network layer configuration, and a comparison with Euclidean-based methods.
# 4.3.1. The influence of “SA” block and “CA” block
Similar to the traditional CBAM [14], our network architecture also incorporates attention operations in both the channel and spatial dimensions, which helps the model adaptively emphasize those feature channels and key regions that are more important for the fusion task.
Channel attention focuses on the most important channels in the feature map for the current task and assigns weights to them. As shown in Fig. 6, the person in the field might be the part that “needs to be emphasized”, but the model overlooks the texture in the edge areas. In contrast, spatial attention focuses on some important spatial locations but loses the interdependencies between channels, leading to color distortion in the image. Our GrFormer, while balancing the relationship between the two, generates clear fused images.
Table 2: The average value of the objective metrics obtained using the CA or SA block on the TNO dataset. The best results are highlighted in BOLD fonts.
We first separately trained the spatial and channel attention modules as the backbone of our network. Tab. 2 shows that although the outcomes based on CA are slightly higher than those based on SA, the model lacks the ability to localize spatial features, which is not conducive to the fusion of pixel-level complementary information. Experiments demonstrate that the training strategy of using both SA and CA can enhance the representational power of pre-fusion features and improve the robustness of training.
# 4.3.2. The influence of CMS
Unlike traditional attention guidance methods, we innovatively added a cross-modal mask strategy after the projection matrix, aiming to force the network to learn deep statistical information across modalities. We sequentially incorporated and removed the CMS strategy in our network to evaluate its effectiveness. As shown in Fig. 7, conventional attention operations assign higher weights to background noise or irrelevant information, leading to the insufficient distinction between intermodal complementary information and redundant information. In contrast, our method highlights salient targets and local textures, preserving the low- correlation detail parts of different modalities. This information may be crucial for distinguishing different objects or scenes.
As shown in Tab. 3, the fusion results obtained by our method are clearer and more natural, preserving the attentionworthy details from different modalities and enhancing complementary features.
# 4.3.3. The influence of CAM
Our cross-modal attention module incorporates the proposed CMS, which is designed to enhance the complementary information between different modalities. To reveal the rationality of the cross-modal attention network on the manifold, we removed the last two cross-modal manifold networks and trained only the first two self-attention manifold layers (GSSM-SA, GSSMCA). As illustrated in Fig. 6, CAM stands for cross-modal attention module. Obviously, compared to the results obtained from the fusion network without CAM, GrFormer retains more details, indicating that the texture parts in the visible images are emphasized, while the infrared thermal radiation information is fully preserved, resulting in clearer images. Moreover, thanks to the CMS operation, this complementary information is further amplified, achieving high-quality multi-modal image fusion.
Figure 7: Comparison of intermediate feature visualisation with and without CMS integration. Our method highlights the low-correlation regions between modalities, which are crucial for the fusion task. At the same time, the highcorrelation regions are appropriately attenuated, achieving effective suppression of redundant information, thereby enhancing the quality of the fusion results.
Table 3: The average value of the objective metrics achieved on the TNO dataset with or without CMS or GSCM. The best results are highlighted in BOLD fonts.
Tab. 3 shows that compared with the network with CAM added, the significance of the target in the fusion result learned by the network without the cross-modal manifold module is significantly reduced. This indicates that in the self-attention mechanism, the model mainly focuses on the information interaction within its own modality, but the mining of the correlation between different modalities is insufficient, and some pixel intensity information in the infrared modality is lost, resulting in an unsatisfactory fusion result.
Figure 8: Visualization of semantic feature maps weighted by Grassmann manifolds at different scales. When the subspace coefficient is set to 100, the topological structure of the image is well-preserved while encapsulating rich semantic information. In quantitative evaluations, our method achieves the best performance.
Table 4: The average value of the objective metrics achieved with different subspace coefficient $q$ on the TNO dataset. The best results are highlighted in BOLD fonts.
# 4.3.4. Analysis of the projection coefficient
During the process of projecting the network into different Grassmann subspaces, we aggregate information across all channel dimensions to obtain a multi-scale low-rank representation. However, the subspace representation based on the spatial dimension incurs a significant computational cost due to largescale EIG and QR decompositions. Therefore, to better perform representation learning, we conducted ablation studies on subspaces of different scales to identify the optimal experimental setup.
Figure 9: Ablation results under different manifold constraints. Removing the orthogonalization constraint and QR decomposition causes the feature space to deviate from the Grassmann manifold. As shown in the first row, the fused images exhibit color degradation and weakened edge information. The results in the second row fail to clearly distinguish between the center and boundaries of the light source. Our design performs well in both aspects.
Table 5: Comparison of average metrics for different manifold constraints on the TNO dataset. The best results are highlighted in BOLD fonts.
We set subspace coefficients in the FRMap layer to 50, 100, 150, and 200, respectively, and we visualized the results. As shown in Fig. 8, when $\scriptstyle { \mathcal { G } } = 5 0$ , the feature map contains fewer detailed information and may fail to capture complex data structures. The pedestrian is almost indistinguishable from the background, losing some of the shape features of the car. At $\scriptstyle { \mathcal { G } } = 1 5 0$ , the attention network retains some texture details from the visible images, but introduces noise. When $\scriptstyle { \mathcal { G } } = 2 0 0$ , the computational efficiency significantly decreases, severe distortion occurs at the edges, and the distinction between pedestrians and the background is greatly reduced. In our method, we set $\scriptstyle { \mathcal { G } } = 1 0 0$ , achieving a balance between feature representation capability and computational efficiency. At this point, the image highlights the pedestrian features while preserving the texture details of the background, achieving enhancement in semantic information from both modalities.
As displayed in Tab. 4, when the subspace coefficients are set too low, their dimensionality becomes insufficient to characterize the high-curvature geometric characteristics of the manifold, leading to aggravated local geometric distortions and significant degradation in image details and topological structures. When the subspace coefficients are set too high, the computational complexity of high-dimensional matrix decomposition grows exponentially, and redundant dimensions introduce spurious curvature noise, resulting in the loss of complementary information. Therefore, we set a moderate coefficient value to achieve the best fused visual effects.
Figure 10: We select the representative MSRS dataset to validate our method. In comparison with other fusion networks, our GrFormer effectively integrates complementary information from infrared and visible images, achieving the highest detection accuracy.
Figure 11: We conducted comparative segmentation experiments on the MSRS dataset. As demonstrated in the examples, GrFormer is capable of effectively segmenting thermally insensitive objects (such as bicycles on the roadside). For objects with high thermal information content (such as cars and people), our method fully leverages these thermal cues, generating more desirable segmentation results.
Table 6: Our module is compared with three Euclidean attention modules. Here, GrAM (Grassmann-based Attention Module) is composed of four attention layers from GrFormer: GSSM-Channel, GSSM-Spatial, GSCM-Channel, and GSCM-Spatial. The best results are highlighted in BOLD fonts.
# 4.3.5. Analysis of the manifold constraints
The core of the Grassmann manifold lies in maintaining the stability and directional consistency of feature subspaces through orthogonality. Orthogonalization constraints ensure the orthogonality of the initial mapping matrix, preventing redundant or ill-conditioned structures in the feature space during decomposition. To validate the effectiveness of the Grassmann manifold network, we replaced the initial orthogonalization constraint with a random mapping matrix and eliminated QR decomposition. As shown in the first row of Fig. 9, removing the orthogonalization constraint leads to chaotic feature directions, resulting in insufficient subspace decomposition, which in turn causes color degradation and edge blurring in the fused image. The role of QR decomposition is to dynamically correct the feature space during optimization, counteracting feature drift caused by numerical errors. When this mechanism is removed, the feature subspace gradually deviates from the orthogonal structure during training, making it difficult for the model to accurately model lighting distribution and texture details, as demonstrated in the second row.
Table 7: Quantitative evaluation of object detection using the MSRS dataset. The best three metrics are highlighted in RED, BROWN, and BLUE fonts, respectively.
Table 8: Quantitative evaluation of segmentation using the MSRS dataset. The best three metrics are highlighted in RED, BROWN, and BLUE fonts, respectively.
The average values of the six metrics are shown in Tab. 5. Compared with the methods that remove the two manifold constraints, our approach demonstrates superior performance in most metrics, indicating that Grassmann manifold constraints stabilize feature space structure and enhance fusion quality.
# 4.3.6. Comparative analysis of manifold-based versus Euclidean attention modules
We conducted comparative experiments on the proposed Grassmann Attention Module, selecting several classical Euclidean attention mechanisms to validate the performance of our method. These include the spatial domain-based SE Block [11], the combined channel and spatial domain CBAM [14], and the classical Transformer architecture [10]. Specifically, we replaced the four-layer Grassmann-based Transformer Module in Fig. 2 (a) with the aforementioned Euclidean attention modules. The results of image fusion are illustrated in Fig. 6. Compared to other architectures, our method demonstrates superior performance in visual effects, effectively preserving salient infrared features and scene details. Furthermore, the superior metrics in Tab. 6 corroborate this observation.
# 4.4. Experiments in object detection
To evaluate the detection performance of fused images, we trained each fusion method’s output on the MSRS dataset [55] using YOLOv7 [66] as the detection network. The evaluation metrics included accuracy, mean precision at 0.5 (AP50), and mean precision at 0.5:0.95 (AP50:95). For the training setup, we configured the following parameters: a batch size of 16, 50 training epochs, 8 dataloader workers, and 2 detection categories (“Person” and “Car”), where “All” denotes their average accuracy. All input images were resized to $6 4 0 \times 6 4 0$ , and the Adam optimizer was employed for parameter updates.
As shown in Fig. 10, among the methods we compared, FusionGAN and GANMcC exhibited redundant detections, failing to accurately distinguish the targets. ReCoNet, MUFusion, and SemLA methods encountered difficulties in detecting the “car” category, resulting in lower accuracy. Additionally, the results from RFN-Nest and EMMA did not accurately detect pedestrians on the road. In contrast, our GrFormer maintained high detection accuracy in challenging scenarios while preserving the significant features and texture details of the targets.
In terms of quantitative performance, as shown in Tab. 7, GrFormer has the best detection performance, especially in the “Person”, “All” and “AP50:95” categories, indicating that GrFormer can highlight infrared thermal radiation information and adaptively adjust environmental brightness to improve detection accuracy.
# 4.5. Experiments in semantic segmentation
To further validate the performance of the proposed GrFormer in downstream tasks, we conducted comparative experiments using the segmentation network Deeplab $\mathrm { V } 3 +$ [67] on the aforementioned 13 fusion methods. Specifically, we performed semantic segmentation on the four basic categories (car, person, bike, and background) provided by the MSRS segmentation dataset. The quantitative results were calculated using the average accuracy and mIoU (Mean Intersection over Union).
We train the segmentation network with SGD for 50 epochs on $4 8 0 \times 6 4 0$ inputs, using an initial learning rate of 7e-3 and a batch size of 8. Moreover, we employ automatic LR scaling based on batch size to maintain training stability and accelerate convergence.
As shown in Tab. 8, our method achieved the best scores across four metrics, demonstrating its advantages in enhancing both the overall target regions and detailed boundaries. This also proves that our Grassmann-based fusion method achieves a balanced optimization of global semantics and local details.
Fig. 11 illustrates the comparison with other competing fusion schemes. Clearly, for heat-insensitive objects such as the bikes in the last two rows, our method effectively preserves the basic shapes and detailed information. Meanwhile, for infrared targets, we also highlight their salient features, as shown in the car examples in the first two rows and the person examples in the last two rows.
In summary, both quantitative and qualitative results demonstrate the strong competitiveness of our Grassmann-based attention network.
Table 9: Efficiency comparison between GrFormer and 13 SOTA methods. The best results are highlighted in BOLD fonts.
# 4.6. Efficiency comparison
Tab. 9 presents a computational efficiency comparison between GrFormer and 13 other methods, evaluated using both parameter count (Params) and floating-point operations (FLOPs). Notably, GAN-based fusion methods typically introduce substantial computational overhead. Methods like CrossFuse, EMMA, and SemLA incorporate Vision Transformer (ViT) architectures, resulting in increased parameter counts. Other approaches such as DeFusion employ complex feature decomposition modules, RFN-Nest adopts a two-stage training strategy, and MUFusion integrates memory units - all contributing additional computational costs. In contrast, lightweight designs in ReCoNet, LRRNet, FusionBooster, and GIFNet achieve relatively lower parameter counts and computational requirements. Compared to these methods, GrFormer’s runtime performance is less competitive due to its transformer architecture and the CPU-dependent eigenvalue decomposition operations in its manifold network, which impact time efficiency. Nevertheless, GrFormer’s simple hierarchical structure design enables it to surpass most existing methods in terms of parameter efficiency. | In the field of image fusion, promising progress has been made by modeling
data from different modalities as linear subspaces.
However, in practice, the source images are often located in a non-Euclidean
space, where the Euclidean methods usually cannot
encapsulate the intrinsic topological structure. Typically, the inner product
performed in the Euclidean space calculates the algebraic
similarity rather than the semantic similarity, which results in undesired
attention output and a decrease in fusion performance.
While the balance of low-level details and high-level semantics should be
considered in infrared and visible image fusion task. To
address this issue, in this paper, we propose a novel attention mechanism
based on Grassmann manifold for infrared and visible
image fusion (GrFormer). Specifically, our method constructs a low-rank
subspace mapping through projection constraints on the
Grassmann manifold, compressing attention features into subspaces of varying
rank levels. This forces the features to decouple into
high-frequency details (local low-rank) and low-frequency semantics (global
low-rank), thereby achieving multi-scale semantic
fusion. Additionally, to effectively integrate the significant information,
we develop a cross-modal fusion strategy (CMS) based on
a covariance mask to maximise the complementary properties between different
modalities and to suppress the features with high
correlation, which are deemed redundant. The experimental results demonstrate
that our network outperforms SOTA methods both
qualitatively and quantitatively on multiple image fusion benchmarks. The
codes are available at https://github.com/Shaoyun2023. | [
"cs.CV",
"I.4"
] |
# 1. Introduction
Data systems are increasingly integrating machine learning functionalities to enhance performance and usability, marking a paradigm shift in how data is managed and processed in databases (Ooi et al., 2024; McGregor, 2021; Li et al., 2021). The integration has transformed key database operations such as query optimization, indexing, and workload forecasting into more precise, efficient, and adaptive processes (Zhang et al., 2024b; Kurmanji and Triantafillou, 2023; Anneser et al., 2023; Ferragina et al., 2020).
Despite these advancements, learned database operations face a persistent challenge: concept drift. Databases are inherently dynamic, undergoing frequent insert, delete, and update operations that result in shifts in data distributions and evolving input-output relationships over time (Zeighami and Shahabi, 2024). These drifts, often subtle but cumulative, can alter the patterns and mappings that traditional machine learning models rely upon, rendering their assumptions of static distributions invalid. This phenomenon requires adaptive methods for maintaining predictive accuracy in dynamic database environments.
Traditional reactive training-based adaptation approaches to handling concept drift, such as transfer learning (Jain et al., 2023; Kurmanji and Triantafillou, 2023; Kurmanji et al., 2024), active learning (Ma et al., 2020; Li et al., 2022), and multi-task learning (Kollias et al., 2024; Wu et al., 2021), come with significant drawbacks in learned database operations. As illustrated in Figure 1, delays and costs in post-deployment data collection and model updates, and reliance on static mappings limit their practicality in dynamic database environments (Kurmanji et al., 2024; Li et al., 2022). In addition, they process each input independently. The negligence of inter-query dependencies and shared contextual information in databases results in poor modeling of database operations. Addressing these limitations raises two critical challenges: (1) How can we support one-the-fly adaptation to constantly evolving data without incurring the overhead of frequent retraining or fine-tuning in databases? (2) How can we dynamically inject contextual information into the modeling process to achieve contextaware prediction for learned database operations?
To address these challenges, we introduce FLAIR, an eFficient and effective onLine AdaptatIon fRamework that establishes a new adaptation paradigm for learned database operations. FLAIR is built on a unique property of database
$\widehat { \textbf { a } }$ Concept Drift in Databases I Insert D Delete c Key Features and Applications Concept Drift U Update S Select Framework FLAIR’s Key Features o。 图 D … I U S I D U … U I D S D U … I U S I ?
SQL Query Database FLAIR Effectiveness Efficiency Transferability Applications
b Adaptation Paradigm System-internal Tasks & User-oriented Tasks TCaolbSultematSnitstaSitciastisctsics UpCdoaltliencgt Sntgataisntdics Update Cardinality Estimation QuAerpyprPorxoicmeastseing DIant-adaAtnaablaytsiecs AR • Index Statistics Inaccurate and SAtsastiustmicpst ownitsh SAtsastiustmicpst ownitsh Inefficient d Overall Performance Retrain Collect Update PCeorliloedcitcanlldy UDpetdeactte, PostgreSQL Latency: 40.4s Drift Offline Adaptation Model 1 EDvetnet-cdtiriovnen High Latency Model 2 Model n (F1L.A9IXR FLatsetnerc)y: 21.4s PostgreSQL GAR Online: In-context Adaptation On-the-fly EAfdfiacpiteatnicoyn:speed Efrfreorctrievdeuncesds: MetFaL-tArIaiRned FLAIR FLAIR by 22.5%
operations: the immediate availability of execution results for predictions in the database. These results, serving as ground-truth labels, provide real-time feedback that enables seamless adaptation. FLAIR leverages this property to dynamically adapt to evolving concepts using such contextual cues from databases. Formally, FLAIR models the mapping as $f : ( \mathbf { x } \mid \mathcal { C } _ { t } ) \mathbf { y }$ , where $\mathbf { x }$ denotes the input query, $\mathcal { C } _ { t }$ is the current context consisting of recent pairs of queries and their execution results, and y is the predicted output.
To achieve in-context adaptation for learned database operations, FLAIR introduces two cascaded modules: the task featurization module (TFM) and the dynamic decision engine (DDE). The TFM encodes database operations into standardized task representations, extracting informative features and producing a unified, structured input format. This ensures consistency and efficiency across diverse tasks within databases. The dynamic decision engine functions as the core of FLAIR, delivering predictions that can adapt to evolving concepts. To this end, we introduce a Bayesian meta-training mechanism that utilizes synthetic prior distributions to pretrain FLAIR with a comprehensive knowledge base, pre-adapting it to handle diverse and dynamic scenarios. Unlike traditional reactive approaches, FLAIR eliminates the need for compute-intensive parameter optimization after deployment. To the best of our knowledge, FLAIR is the first framework to enable on-the-fly and context-aware adaptation in dynamic data systems.
We summarize our main contributions as follows:
• We propose a novel in-context adaptation framework FLAIR, designed to address the persistent challenge of concept drift in dynamic data systems with high efficiency and effectiveness. • FLAIR introduces Bayesian meta-training that enables robust and transferable learning from dynamic distributions, thus eliminating the need for costly parameter retraining or fine-tuning after deployment.
• FLAIR is designed as a task-agnostic framework that enhances a wide range of learned database operations. These include system-internal tasks such as cardinality estimation, and user-oriented applications like approximate query processing and in-database data analytics.
• Extensive experiments show FLAIR’s superior performance in dynamic databases, achieving a $5 . 2 \times$ speedup in adaptation and a $2 2 . 5 \%$ reduction in GMQ error for cardinality estimation. Furthermore, by integrating FLAIR with PostgreSQL, we achieve up to a $1 . 9 \times$ improvement in query execution efficiency.
# 2. Preliminaries
Problem Formulation. Consider a database D consisting of a set of relations (tables) $\{ \mathbf { R } _ { 1 } , . . . , \mathbf { R } _ { \mathbf { N } } \}$ . Each relation ${ \bf { R } } _ { \bf { i } }$ has $n _ { i }$ attribute fields (columns), $\mathbf { R _ { i } } ~ = ~ ( \mathbf { a _ { 1 } ^ { i } } , . . . , \mathbf { a _ { n _ { i } } ^ { i } } )$ where the attributes correspond to either categorical or numerical features in prediction. In this paper, we focus on select-project-join (SPJ) queries executed alongside a mix of insert, delete, and update operations. The challenge addressed is concept drift, an intrinsic property of databases, described as a shift in the relationship between queries and their corresponding predictive outputs over time.
Definition 2.1 (Concept Drift in Databases). Let ${ \textsc { Q } } =$ $\{ \mathbf { x _ { 1 } } , \mathbf { x _ { 2 } } , \cdot \cdot \cdot \}$ represents a sequence of input queries and $\mathrm { Y } = \{ \bf y _ { 1 } , y _ { 2 } , \cdot \cdot \cdot \}$ denote the corresponding output predictions, e.g., estimated row counts in cardinality estimation. Concept drift occurs at time $t$ if the joint probability distribution $P _ { t } \left( \mathbf { x } , \mathbf { y } \right)$ changes from $P _ { t } \left( \mathbf { x } , \mathbf { y } \right)$ to $P _ { t + 1 } \left( \mathbf { x } , \mathbf { y } \right)$ , such that $P _ { t } \left( \mathbf { x } , \mathbf { y } \right) \neq P _ { t + 1 } \left( \mathbf { x } , \mathbf { y } \right)$ .
Figure 2: FLAIR for dynamic data systems.
In concept drift, the change in the joint probability distribution $P ( \mathbf { x } , \mathbf { y } ) = P ( \mathbf { x } ) P ( \mathbf { y } | \mathbf { x } )$ may come from shifts in $P ( \mathbf { x } )$ (covariate shift) or $P ( \mathbf { y } | \mathbf { x } )$ (real shift). Database updates, especially frequent insert, delete and update operations, typically induce shifts in $P ( \mathbf { y } | \mathbf { x } )$ , showing the dynamic nature of the data systems. While individual updates might only marginally affect the underlying distribution, cumulative changes can significantly alter query-prediction relationships. For example, in an e-commerce database, incremental updates, such as new product additions, customer preference changes, or promotional campaigns, can lead to significant concept drift in product recommendation.
Learned Database Operations. Learned database operations employ machine learning models to enhance specific tasks in databases, such as cardinality estimation and approximate query processing. Let $\mathcal { M } _ { D } ( \cdot ; \Theta )$ denote a prediction model parameterized by $\Theta$ in a database D. $\mathcal { M } _ { D } ( \mathbf { x } ; \Theta )$ takes a query $\mathbf { x }$ as input and makes a prediction, e.g., the number of rows matching $\mathbf { x }$ for cardinality estimation.
However, a model becomes stale when concept drift occurs. Formally, the model $\mathcal { M } _ { D _ { t } } ( \mathbf { x } ; \Theta _ { t } )$ trained on data $D _ { t }$ becomes ineffective at time $t + \Delta t$ , if $P _ { t } \left( \mathbf { x } , \mathbf { y } \right) \ \neq$ $P _ { t + \Delta t } \left( \mathbf { x } , \mathbf { y } \right)$ . Traditional approaches require periodic data recollection and model retraining to maintain accuracy. This incurs high costs. Our objective is to ensure that the model $\mathcal { M } _ { D _ { t } } ( \mathbf { x } ; \Theta _ { t } )$ can be efficiently and effectively adapted to evolving data distributions with these resource-intensive processes in database environments.
In-context Learning with Foundation Models. Foundation models have seen rapid advancements in capability and scope (Radford et al., 2019; Raffel et al., 2020; Brown et al., 2020; Achiam et al., 2023), which give rise to a transformative paradigm called in-context learning (ICL). ICL embeds context into the model input, and leverages foundation models’ broad learned representations to make predictions based on limited contextual examples, thus bypassing the need for parameter updates after deployment. This paradigm drastically cuts compute demands and facilitates various applications (Sun et al., 2022; Dong et al., 2022). A notable application for tabular data is Prior-data Fitted Networks (PFNs) (Müller et al., 2022; Hollmann et al., 2023; Helli et al., 2024), which are pre-trained on synthetic datasets sampled from pre-defined priors. This enables PFNs to pre-adapt to dynamic environments by effectively modeling uncertainties and various distributions, making PFNs suitable for scenarios with frequent updates and concept drift. In this paper, we aim to utilize real-time feedback from database environments and explore how to support in-context adaptation for learned database operations.
# 3. FLAIR for Dynamic Data Systems
As illustrated in Figure 2, FLAIR introduces a dualmodule architecture that addresses concept drift in dynamic databases. First, to provide a unified interface across different tasks, the Task Featurization Module (TFM) extracts task-specific features from database operation for the subsequent modeling. Second, the Dynamic Decision Engine (DDE) is pre-trained via Bayesian meta-training on dynamic distributions of tasks, pre-adapting it to diverse tasks encountered during inference. After meta-training, DDE utilizes the real-time feedback from databases as the latest contextual information to dynamically adapt to the current task. The workflow of FLAIR $\boldsymbol { \mathcal { M } } _ { F }$ is outlined as:
$$
\mathcal { M } _ { F } ( \mathbf { x } ; \boldsymbol { \Theta } _ { T } , \boldsymbol { \Theta } _ { \mathcal { D } } ) = \mathcal { M } _ { D D E } \big ( \mathcal { M } _ { T F M } ( \mathbf { x } ; \boldsymbol { \Theta } _ { T } ) ; \boldsymbol { \Theta } _ { \mathcal { D } } \big ) ,
$$
which comprises two cascading modules, the TFM $\mathcal { M } _ { T F M }$ and the DDE $\mathcal { M } _ { D D E }$ parameterized by $\Theta _ { D }$ and $\Theta \tau$ , respectively. We introduce the technical details below.
# 3.1. Task Featurization Module
The TFM is designed to standardize database operations into structured inputs for downstream modeling. It first encodes data and queries of database operations into data vectors and a query vector respectively, and then extracts a task vector via cross-attention that integrates their interactions.
# 3.1.1. DATA AND QUERY ENCODING
Data Encoding. Each attribute (i.e., column) in the database is represented as a histogram, which captures its distribution. Formally, for an attribute $\mathbf { a _ { n } ^ { i } }$ in relation $\mathbf { R _ { i } }$ , the histogram $\mathbf { x } _ { n } ^ { i } = \left[ x _ { 1 } , \cdot \cdot \cdot , x _ { \delta } \right]$ uses $\delta$ bins to discretize the range of the attribute. After scaling to $[ 0 , 1 ]$ , these histograms are aggregated to form comprehensive data vectors $\mathrm { X } _ { \mathrm { D } }$ of dimension $\overline { { \delta } } \times \textstyle \sum _ { i = 1 } ^ { N } n _ { i }$ , where $N$ is the total number of relations, and $n _ { i }$ is the number of attributes in relation $\mathbf { R _ { i } }$ .
Query Encoding. Queries are represented as vectors capturing structural and conditional information. Join predicates, e.g., $\mathbf { R _ { i } a _ { n _ { i } } ^ { i } } \ = \ \mathbf { R _ { j } a _ { n _ { j } } ^ { j } }$ , are encoded into binary vectors $\mathbf { q } _ { J }$ via one-hot encoding, while filter predicates, e.g., $\mathbf { R _ { i } } \mathbf { a _ { n _ { i } } ^ { i } }$ op $\Omega$ with op $\in \{ < , \leqslant , \geqslant , > , = \}$ being the comparison operators and ℧ the condition value, are encoded into boundary vectors $\mathbf { q } _ { F }$ . The final query vector ${ \bf q } _ { \mathcal { Q } } = < { \bf q } _ { J } , { \bf q } _ { F } >$ concatenates these encodings.
# 3.1.2. TASK FEATURIZATION
To derive the task vector, we adopt a lightweight transformer (Vaswani et al., 2017) architecture following (Li et al., 2023b), which employs hybrid attention mechanisms to extract deep latent features. The task featurization process starts with a data modeling phase, where data vectors $\mathrm { X _ { D } }$ are processed through a series of Multi-head Self-attention (MHSA) layers, interleaved with Feed-forward Network (FFN), Layer Normalization (LN), and residual connections. This is to capture implicit joint distributions and complex dependencies among attributes within $\mathrm { X _ { D } }$ :
$$
\begin{array} { r l } & { \hat { \mathbf { Z } } ^ { l } = \mathrm { M H S A } ( \mathrm { L N } ( \mathbf { Z } ^ { l - 1 } ) ) + \mathbf { Z } ^ { l - 1 } } \\ & { \mathbf { Z } ^ { l } = \mathrm { F F N } ( \mathrm { L N } ( \hat { \mathbf { Z } } ^ { l } ) ) + \hat { \mathbf { Z } } ^ { l } } \end{array}
$$
where MHSA operations are formulated as:
$$
\begin{array} { r l } & { \mathbf { Q } ^ { l , m } = \mathbf { Z } ^ { l - 1 } \mathbf { W } _ { q } ^ { l , m } , \mathbf { K } ^ { l , m } = \mathbf { Z } ^ { l - 1 } \mathbf { W } _ { k } ^ { l , m } , \mathbf { V } ^ { l , m } = \mathbf { Z } ^ { l - 1 } \mathbf { W } _ { v } ^ { l , m } } \\ & { \mathbf { Z } ^ { l , m } = \mathrm { s o f t m a x } ( \frac { \mathbf { Q } ^ { l , m } ( \mathbf { K } ^ { l , m } ) ^ { T } } { \sqrt { d _ { k } } } ) \mathbf { V } ^ { l , m } , m = 1 , \cdots , M } \\ & { \mathbf { Z } ^ { l } = \mathrm { c o n c a t } ( \mathbf { Z } ^ { l , 1 } , \cdots , \mathbf { Z } ^ { l , M } ) \mathbf { W } _ { o } ^ { l } } \end{array}
$$
where $\mathbf { Z } ^ { 0 }$ is composed of data vectors from $\mathrm { X _ { D } }$ , and $M$ is the number of attention heads. $\mathbf { Q } ^ { l , m } , \mathbf { K } ^ { l , m }$ , and $\mathbf { V } ^ { l , m }$ denote the query, key, and value of the $m$ -th head in the $l$ -th layer, obtained via transformation matrices $\mathbf { W } _ { q } ^ { l , m }$ , $\mathbf { W } _ { k } ^ { l , m }$ and $\mathbf { W } _ { v } ^ { l , m }$ , respectively. $\mathbf { Z } ^ { l }$ is the output of the $l$ -th layer, and $\mathbf { W } _ { o } ^ { l }$ is the output transformation matrix.
In the subsequent interaction modeling phase, the output of the data modeling phase $\mathbf { Z } _ { \mathcal { O } }$ is further refined via the Multihead Cross-attention (MHCA) mechanism. Unlike MHSA, $\mathbf { Z } _ { \mathcal { O } }$ serves dual roles as both the keys and values, while the query vector $\mathbf { q } _ { \mathcal { Q } }$ acts as the query in MHCA. The query vector $\mathbf { q } _ { \mathcal { Q } }$ interacts with every vector in $\mathbf { Z } _ { \mathcal { O } }$ through key and value transformations, allowing TFM to dynamically focus on the features in $\mathbf { Z } _ { \mathcal { O } }$ pertinent to the query. For each attention head $m$ in MHCA, we have:
$$
\mathbf { z } ^ { m } = \mathrm { s o f t m a x } ( \frac { \mathbf { q } _ { \mathcal { Q } } ( \mathbf { Z } _ { \mathcal { O } } \mathbf { W } _ { k } ^ { m } ) ^ { T } } { \sqrt { d _ { k } } } ) ( \mathbf { Z } _ { \mathcal { O } } \mathbf { W } _ { v } ^ { m } ) .
$$
The final task vector ${ \bf z } _ { T }$ is obtained by further processing the MHCA output through an FFN layer followed by LN with residual connections. In this way, the task vector ${ \bf z } _ { T }$ contains task-specific information of both data attribute relations and query conditions, providing comprehensive task representations for the subsequent modeling in the DDE.
# 3.2. Dynamic Decision Engine
The DDE forms the core module of FLAIR. As illustrated in Figure 3, the DDE takes the task vector prepared by
Figure 3: The architecture of FLAIR.
the TFM to provide real-time, context-aware predictions across various tasks. It comprises two phases: Bayesian meta-training and in-context adaptation.
# 3.2.1. BAYESIAN META-TRAINING
DDE is pre-trained using synthetic datasets sampled from prior distributions, which equips the model with broad generalization capabilities, enabling rapid adaptation to unseen tasks. The meta-training is based on Bayesian inference theory. Formally, for a given sample $\mathbf { x }$ with the evolving concept represented by a set of $c$ observed sample pairs ${ \mathcal { C } } = \{ ( \mathbf { y } _ { i } , \mathbf { x } _ { i } ) \} _ { i = 1 } ^ { c }$ from the current task, the Posterior Predictive Distribution (PPD) of task predictive modeling is:
$$
\begin{array} { c } { { \displaystyle p ( { \bf y } | { \bf x } , { \mathcal C } ) = \int _ { \Phi } p ( { \bf y } | { \bf x } , \phi ) p ( \phi | { \mathcal C } ) d \phi } } \\ { { \displaystyle \propto \int _ { \Phi } p ( { \bf y } | { \bf x } , \phi ) p ( { \mathcal C } | \phi ) p ( \phi ) d \phi } } \end{array}
$$
where the task distribution $p ( \phi )$ is sampled from curated prior distributions $\Phi$ to diversify the adaptability of DDE to different prediction tasks. Notably, to capture complex dependencies and uncover underlying causal mechanisms, we employ Bayesian Neural Networks (BNNs) (Neal, 2012; Gal et al., 2016) and Structural Causal Models (SCMs) (Pearl, 2009; Peters et al., 2017) in constructing the prior distribution following PFNs (Hollmann et al., 2023).
Based on the PPD formulation in Eq. (9), we first generate synthetic datasets, namely the concept $\mathcal { C }$ of observed samples from the task distribution $p ( \phi )$ , i.e., ${ \mathcal { C } } \sim p ( { \mathcal { C } } | \phi )$ . Second, we sample the data points $( \mathbf { x } , \mathbf { y } )$ for predictive modeling from $p ( \mathbf { x } , \mathbf { y } | \phi )$ . Next, we can train DDE using the input-output configuration via the loss:
$$
\mathcal { L } _ { D D E } = \mathbb { E } _ { ( ( \mathbf { x } , \mathcal { C } ) , \mathbf { y } ) \in p ( \phi ) } [ - \log q _ { \theta } ( \mathbf { y } \vert \mathbf { x } , \mathcal { C } ) ]
$$
where the $q _ { \theta } ( \mathbf { y } | \mathbf { x } , \mathcal { C } )$ is the model’s predictive distribution parameterized by $\theta$ . By minimizing this expected negative log probability $\Dot { \mathcal { L } } _ { D D E }$ , DDE is trained to maximize the likelihood of the observed data under the current task distribution $p ( \phi ) _ { \underline { { \mathbf { \lambda } } } }$ . In particular, $\mathcal { L } _ { D D E }$ can be formalized as follows for different types of tasks, corresponding to regression and classification tasks, respectively.
$$
\begin{array} { l } { \displaystyle \mathcal { L } _ { r e g } = \mathbb { E } _ { ( ( \mathbf { x } , \mathcal { C } ) , \mathbf { y } ) \in p ( \phi ) } \left[ \frac { ( \mathbf { y } - \boldsymbol { \mu } ) ^ { 2 } } { 2 \sigma ^ { 2 } } + \log { \sigma } \right] } \\ { \displaystyle \mathcal { L } _ { c l s } = \mathbb { E } _ { ( ( \mathbf { x } , \mathcal { C } ) , \mathbf { y } ) \in p ( \phi ) } \left[ - \sum _ { k = 1 } ^ { K } \mathbb { I } _ { \mathbf { y } = k } \log q _ { \theta } ( \mathbf { y } = k | \mathbf { x } , \mathcal { C } ) \right] } \end{array}
$$
where $\mu$ and $\sigma$ are the mean and standard deviation in regression tasks, $\mathbb { I } ( \cdot )$ is the indicator function and $q _ { \theta } ( \mathbf { y } = k | \mathbf { x } , \mathcal { C } )$ is the predicted probability of class $k$ in classification tasks.
Remark. We note that the Bayesian meta-training is performed only once on the curated prior distributions across various tasks. With Bayesian meta-training, FLAIR is enabled to quickly adapt to new concepts using a limited set of observed samples of the concept. This offers several advantages: (1) Cost-effective Data Collection: Generating synthetic data is significantly more cost-effective and faster than traditional data collection. (2) One-time Effort: The process is a one-time effort, eliminating frequent retraining after deployment. (3) No Privacy Issues: Synthetic data does not contain real user information, thereby circumventing privacy and security concerns. (4) Scalability: This strategy allows for easy adoption of desired prior task distributions instead of rebuilding the entire model from scratch.
# 3.2.2. IN-CONTEXT ADAPTATION
During inference, we query the meta-trained DDE with the tuple $( { \bf z } _ { T } , { \mathcal { C } } )$ as input, where $\mathcal { C } = ( \mathcal { Q } _ { p m t } , \mathcal { Y } _ { p m t } )$ , termed as context memory, contains contextual information of the current task. $\mathcal { Q } _ { p m t }$ and ${ \mathcal { P } } _ { p m t }$ denote the sequences of recent queries and the system feedback, namely true outputs, which are organized into two separate first-in, first-out (FIFO) queues of size $\varrho$ . This strategy enables DDE to dynamically adapt to new concepts guided by the context memory during inference, thus avoiding backpropagation-based adaptation such as fine-tuning or retraining.
Remark. To better understand the in-context adaptation mechanism, we examine the key differences between FLAIR and existing learned approaches. Existing methods like Marcus et al. (2021); Zhao et al. (2022); Wang et al. (2023b) typically learn a static mapping from input to output as in Eq. 13, which assumes a fixed data distribution. When concept drift occurs in the time interval $\Delta t = t ^ { \prime } - t$ , i.e., $\mathcal { D } _ { t } \neq \mathcal { D } _ { t ^ { \prime } }$ and $P _ { t } \left( \mathbf { x } , \mathbf { y } \right) \neq P _ { t ^ { \prime } } \left( \mathbf { x } , \mathbf { y } \right)$ , the mapping $f _ { \mathcal { D } _ { t } , \Theta _ { t } }$ from the input to the output should change accordingly. To handle concept drift, these methods require collecting sufficient samples from the new distribution and updating the mapping $f _ { \mathcal { D } _ { t } , \Theta _ { t } }$ with parameter $\Theta _ { t }$ based on these samples, so as to obtain a new mapping function $f _ { \mathcal { D } _ { t ^ { \prime } } , \Theta _ { t ^ { \prime } } }$ with parameter $\Theta _ { t } ^ { \prime }$ that aligns with the new distribution $\mathcal { D } _ { t ^ { \prime } }$ . In contrast, our new paradigm essentially learns a conditional mapping as formulated in Eq. 14, which explicitly models the evolving concept provided by the context memory $\mathcal { C } _ { t }$ as the context of the current distribution $\mathcal { D } _ { t }$ .
$$
\begin{array} { r l } & { \forall t , \ f _ { { D } _ { t } , \Theta _ { t } } : \mathbf x \to \mathbf y } \\ & { \forall t , \ f _ { { D } _ { t } , \Theta } : ( \mathbf x | \mathcal C _ { t } ) \to \mathbf y } \end{array}
$$
This adaptability via the in-context adaptation mechanism is well-suited for databases. When a query is executed, the corresponding system output becomes immediately available and can be stored in the context memory to provide supervision for contextualized predictions of subsequent queries. Also, For user-oriented tasks like data classification, the context memory within FLAIR allows for online user feedback, which facilitates the development of a customized system better aligned with user preferences.
# 3.3. FLAIR Workflow: Training to Inference
Training. FLAIR is trained in two stages, as outlined in Algorithm 1. (i) First, the $\mathcal { M } _ { D D E }$ module undergoes a one-off meta-training phase using $\mathcal { L } _ { D D E }$ in Eq. 10 across crafted task distributions. Note that the meta-training is not to optimize FLAIR directly on end tasks but to prepare DDE to adapt to new tasks met during inference without further training. (ii) Second, the $\mathcal { M } _ { T F M }$ module is trained to extract informative latent features that are critical for the specific tasks at hand. The training of TFM is tailored to optimize performance on these tasks. This employs a task-specific loss $\mathcal { L } _ { T S }$ to extract informative features for the DDE module.
# Algorithm 1 FLAIR Training
Input: Designed priors $p ( \phi )$ , number of synthetic datasets $\mathcal { H }$ , each with $N _ { o }$ observed samples, queue size $\varrho$ in the context memory, learning rate $\eta _ { T }$ for $\mathcal { M } _ { T F M }$ and $\eta _ { \mathcal { D } }$ for $\mathcal { M } _ { D D E }$ .
Output: FLAIR $\mathcal { M } _ { F } ( \mathbf { x } ; \boldsymbol { \Theta } _ { T } , \boldsymbol { \Theta } _ { \mathcal { D } } )$ constructed by cascading $\mathcal { M } _ { T F M }$ and $\mathcal { M } _ { D D E }$ with parameters $\Theta \tau$ and $\Theta _ { \mathcal { D } }$ .
1: Initialize $\mathcal { M } _ { T F M }$ and $\mathcal { M } _ { D D E }$ with random weights $\Theta \tau$ and $\Theta _ { D }$
2: for $i = 1$ to $\mathcal { H }$ do
3: Sample synthetic datasets $\tilde { D } _ { i } \sim p ( \boldsymbol { \mathcal { C } } | \phi )$
4: Randomly select context $\mathcal { C }$ based on $\{ ( \mathbf { x } _ { j } , \mathbf { y } _ { j } ) \} _ { j = 1 } ^ { \varrho }$ from $\widetilde { D } _ { i }$
5: repeat
6: Randomly select a training batch $\{ ( \mathbf { x } _ { j } , \mathbf { y } _ { j } ) \} _ { j = 1 } ^ { N _ { o } }$ from ${ \widetilde { D } } _ { i }$
7: Compute stochastic loss $\mathcal { L } _ { D D E }$ using Eq. 10
8: Update $\Theta _ { \mathcal { D } }$ using stochastic gradient descent $\Theta _ { \mathcal { D } } \gets$ $\Theta _ { \mathcal { D } } - \eta _ { \mathcal { D } } \nabla _ { \Theta _ { \mathcal { D } } } \mathcal { L } _ { D D E }$
9: until Convergence
10: end for
11: repeat
12: Randomly sample a minibatch
13: Update $\Theta \tau$ by minimizing the loss $\mathcal { L } _ { T S }$ of the specific task $\Theta _ { T } \Theta _ { T } - \eta _ { T } \nabla \Theta _ { T } \mathcal { L } _ { T S }$
14: until Convergence
15: $\mathcal { M } _ { F } ( \mathbf { x } ; \boldsymbol { \Theta } _ { \mathcal { T } } , \mathbf { \tilde { \Theta } } _ { \mathcal { D } } ) = \mathcal { M } _ { D D E } ( \mathcal { M } _ { T F M } ( \mathbf { x } ; \boldsymbol { \Theta } _ { \mathcal { T } } ) ; \boldsymbol { \Theta } _ { \mathcal { D } } ) ;$ ;
16: Return FLAIR $\mathcal { M } _ { F }$
# Algorithm 2 Concurrent FLAIR Inference and Adaptation
Input: $\mathcal { M } _ { T F M }$ and $\mathcal { M } _ { D D E }$ with parameters $\Theta \tau$ and $\Theta _ { \mathcal { D } }$ , input query and data underlying the data system.
Output: Predicted output $\mathbf { y }$
1: Extract latent feature ${ \bf z } _ { T }$ incorporating information from query and data, using $\mathcal { M } _ { T F M }$ as $\mathbf { z } _ { \mathcal { T } } = \mathcal { M } _ { T F M } ( \mathbf { x } ; \Theta _ { \mathcal { T } } )$
2: Gather context memory $\mathcal { C } = ( \mathcal { Q } _ { p m t } , \mathcal { Y } _ { p m t } )$
3: Predict $\mathbf { y }$ by inputting latent feature ${ \bf z } _ { T }$ and context memory $\scriptstyle { \mathcal { C } }$ into $\mathcal { M } _ { D D E }$ as $\mathbf { y } = \mathcal { M } _ { D D E } ( \mathbf { z } _ { \mathcal { T } } , \mathcal { C } ; \boldsymbol { \Theta } _ { \mathcal { D } } )$
4: Store ${ \bf z } _ { T }$ and the corresponding system output $\mathbf { y } ^ { * }$ into queue $\mathcal { Q } _ { p m t }$ and ${ \mathcal { \mathrm { { y } } } _ { p m t } }$ to update the context memory $\mathcal { C }$
5: Remove oldest entries from $\mathcal { Q } _ { p m t }$ , ${ \mathcal { P } } _ { p m t }$ to maintain size $\varrho$
6: Return y
Inference. Once trained, FLAIR is ready for concurrent online inference and adaptation in a real-time environment:
$$
\begin{array} { r l } & { \mathbf { x } \Rightarrow \mathcal { M } _ { T F M } ( \mathbf { x } ; \boldsymbol { \Theta } _ { T } ) = \mathbf { z } _ { \mathcal { T } } \Rightarrow \mathcal { M } _ { D D E } ( \mathbf { z } _ { \mathcal { T } } , \mathcal { C } ; \boldsymbol { \Theta } _ { \mathcal { D } } ) = \mathbf { y } } \\ & { \mathbf { x } \Rightarrow \mathcal { S } _ { e x e c u t e } ( \mathbf { x } ) = \mathbf { y } ^ { * } \Rightarrow ( \mathbf { z } _ { \mathcal { T } } , \mathbf { y } ^ { * } ) \xrightarrow { \mathrm { u p d a t e } } \mathcal { C } } \end{array}
$$
where $S _ { e x e c u t e } ( \cdot )$ is the data system executor that produces the actual system output $\mathbf { y } ^ { * }$ . The process is detailed in Algorithm 2. Fundamentally, FLAIR streamlines the model update process by replacing the traditional, cumbersome backpropagation with an efficient forward pass via metatraining and an in-context adaptation mechanism.
FLAIR efficiently accommodates large dynamic databases through incremental histogram maintenance in $O ( N _ { v } )$ with $N _ { v }$ modified records and adapts to concept drift using a FIFO key-value memory for in-context adaptation. The cross-attention mechanism operates on a single query vector and incurs only a linear overhead of $O ( d _ { a } \varrho )$ , where $d _ { a }$ is the attention dimension in DDE. This flexible and scalable workflow ensures that FLAIR learns effectively from new tasks on-the-fly, adapting to evolving concepts in dynamic databases.
# 3.4. Model Generalization Error Bound Analysis
In this section, we analyze the generalization error bounds of FLAIR against conventional models optimized for static data, when faced with post-training data evolving. We aim to uncover the susceptibility of outdated static models to dynamic environments and showcase FLAIR’s resilience. Consider a model $\hat { f } _ { i }$ trained on dataset $D ^ { i }$ and frozen once training concludes. Subsequent $k$ single-point data operations alter the data from $D ^ { i }$ to $D ^ { j }$ , where each operation is atomic, comprising either insertion or deletion1. $f _ { D ^ { j } }$ refers to the ground-truth mapping to $D ^ { j }$ . We now explore the worst-case bound on expected maximum generalization error for robustness.
Theorem 3.1. Consider a model $\hat { f } _ { i }$ trained on an initial dataset $D ^ { i }$ , where $| D ^ { i } | = i$ . After $k$ data operations, including s insertion and $r$ deletion, we obtain a new dataset $D ^ { j }$ of size $| D ^ { j } | = j$ , where $k = s + r > 1$ and the net difference in data size $| j - i | = | s - r |$ . Suppose data in $D ^ { j }$ are i.i.d from any continuous distribution $\chi$ , we have
$$
\underset { \bf x } { \operatorname* { s u p } } \ \mathbb { E } _ { D ^ { j } \sim \chi } \big [ \big | \hat { f } _ { i } ( { \bf x } ) - f _ { D ^ { j } } ( { \bf x } ) \big | \big ] \ \geqslant \ k - 1
$$
Theorem 3.1 states that the risk of using a stale model to make predictions escalates at a minimum rate of $\Omega ( k )$ as data evolves. Theoretically, to sustain a error at ϵ, ϵκ1 model retraining is needed for every $\varkappa$ data operation. The cost per retraining session generally involves processing the entire dataset or a significant portion thereof in the scale $\mathcal { O } ( \varkappa )$ (Zeighami and Shahabi, 2024). Consequently, the amortized cost per data operation, given that retraining the model every $\epsilon + 1$ data operation, is also $\mathcal { O } ( \varkappa )$ . Thus, maintaining low error rates in such a dynamic setting can be computationally expensive. In contrast, our model defined as ${ \hat { f } } ( \mathbf { x } | { \mathcal { C } } ^ { j } )$ exhibits resilience to changes in data.
Theorem 3.2. Consider FLAIR trained when the underlying database is $D ^ { i }$ and using context memory ${ \mathcal { C } } ^ { j }$ to perform prediction when the database evolves to $D ^ { j }$ , we have
$$
\underset { \mathbf { x } } { \operatorname* { s u p } } \mathbb { E } _ { D ^ { j } \sim \chi } \Big [ \big | \hat { f } ( \mathbf { x } | \mathcal { C } ^ { j } ) - f _ { D ^ { j } } ( \mathbf { x } ) \big | \Big ] \leqslant \frac { \aleph } { \sqrt { \varrho } }
$$
with high probability $1 - \delta$ , where $\aleph = \sqrt { \textstyle { \frac { 1 } { 2 } } ( \kappa + \ln { \frac { 1 } { \delta } } ) } + \sqrt { \textstyle { \frac { \pi } { 2 } } }$ Here, $\varrho$ is the size of the context memory $\mathcal { C } ^ { j }$ , $\kappa$ is a constant reflecting the training adequacy, and data in $D ^ { j }$ is drawn i.i.d from any continuous distribution $\chi$ .
Theorem 3.2 demonstrates that the generalization error of FLAIR can be effectively controlled by the size of context memory $\varrho$ . By ensuring that $\varrho$ is sufficiently large, the generalization error remains well within the bounds of $\begin{array} { r } { \mathcal { O } ( \frac { 1 } { \sqrt { \varrho } } ) } \end{array}$ Unlike traditional models that experience a linear growth in generalization error with each data operation $k$ , FLAIR’s error remains stable regardless of $k$ , showing no performance deterioration with post-training data changes. Specifically, setting $\varrho$ to be at least $\scriptstyle { \big ( } { \frac { \aleph } { k - 1 } } { \big ) } ^ { 2 }$ ensures that the expected worst-case generalization error of FLAIR stays below static models. This aligns with existing research (Namkoong and Duchi, 2016; Sagawa et al., 2020) that considers potential distribution shifts during training bolsters model resilience after deployment. Overall, Theorem 3.2 elucidates FLAIR’s theoretical superiority over static models in maintaining continuous accuracy and operational efficiency, providing a scalable solution with frequent data evolving.
# 4. Experiments
In this section, we systematically evaluate the effectiveness, efficiency, and transferability of FLAIR. Extensive experiments are conducted on real-world benchmarks for cardinality estimation to test the effectiveness of FLAIR across various degrees of concept drift, followed by assessments of training and inference efficiency. We then explore FLAIR’s robustness against long-term concept drift, and its transferability to representative user-oriented tasks within databases. Moreover, we integrate FLAIR with PostgreSQL to confirm its compatibility with operational environments.
# 4.1. Experimental Setup
Benchmarks. We evaluate FLAIR on two established real-world benchmarks: STATS (STA, 2015) and JOBlight (Leis et al., 2018; 2015). STATS contains over 1 million records, while JOB-light, derived from the IMDB dataset, includes 62 million records. We simulate realworld database conditions in our experiments by incorporating varied SQL operations and design scenarios that mirror different levels of concept drift, ranging from mild to severe.
• STATS (STA, 2015), includes 8 relations with 43 attributes. There are 1,029,842 records from the anonymized Stats Stack Exchange network. The benchmark workload includes 146 queries, featuring both PKFK and FK-FK join.
• JOB-light (Leis et al., 2018; 2015), derives from a subset of the IMDB dataset and encompasses 6 relations with 14 attributes. There are 62,118,470 records in total. The benchmark workload consists of 70 queries focusing on the PK-FK join.
As in a recent work (Li et al., 2023b), we randomly generate 2000 diverse queries with sub-queries to form the training set for each benchmark. In the STATS benchmark, we utilize an existing workload of 146 queries with 2603 subqueries as the test set. For JOB-light, the test set comprises 70 queries associated with 696 sub-queries. Additionally, we incorporate a dynamic workload into each benchmark’s training and test sets. This dynamic workload includes a variety of SQL operations, including insert, delete, and update, strategically varied in proportion throughout different phases of the experiment. Notably, the ground truth for the queries is obtained by executing them, as both the dynamic workload and data changes can influence the results over time. For the CE task, queries yielding a ground-truth cardinality of zero are excluded from the analysis to ensure data integrity and relevance.
Downstream Tasks. We primarily assess FLAIR’s core performance through cardinality estimation (CE) tasks, alongside exploring its capabilities in user-oriented activities like approximate query processing (AQP) and in-database data analytics involving data classification and regression.
• Cardinality Estimation (CE) estimates the number of rows a query returns, aiding query planners in optimizing execution plans.
• Approximate Query Processing (AQP) quickly delivers approximate results from large datasets by balancing accuracy with computational efficiency.
• In-database Data Analytics involves data classification and regression tasks executed within the database engine, delivering insights directly from the data source. (i) Data classification boosts business intelligence by using categorical attributes to categorize tuples, such as product types and transaction statuses, supporting analytics in database systems. (ii) Data regression predicts continuous outcomes, enhancing predictive analytics and decisionmaking on platforms like Oracle (Helskyaho et al., 2021) and Microsoft SQL Server (MacLennan et al., 2011; Harinath et al., 2008).
Baselines. We compare FLAIR with predominant families of CE technologies, including the estimator from PostgreSQL (pos, 1996), and SOTA learned approaches for dynamic environments, such as DeepDB (Hilprecht et al., 2019), ALECE (Li et al., 2023b), and DDUp (Kurmanji and Triantafillou, 2023) with NeuroCard (Yang et al., 2020) being used as its base model. We also compare FLAIR with model fine-tuning outlined in (Kurmanji and Triantafillou, 2023), serving as a high-performance baseline despite being computationally intensive. For AQP, our baselines include DBest+ $^ +$ (Ma et al., 2021), which utilizes only frequency tables (FTs) for the update, DBes $+ { + } \mathrm { F T }$ , which updates both FTs and mixture density networks (MDNs), and DDUp, which uses $\mathrm { \ D B e s t { + } }$ as its base model. For in-database data analytics, we compare FLAIR with AutoML system AutoGluon (Erickson et al., 2020) and established ML algorithms, including K-nearest-neighbors (KNN), RandomForest, MLP, and popular boosting methods, XGBoost (Chen and Guestrin, 2016), LightGBM (Ke et al., 2017) and CatBoost (Prokhorenkova et al., 2018) for data classification, and AutoGluon, SVR, MLP, DecisionTree, RandomForest, and GradientBoosting for regression.
Implementation. FLAIR is implemented in Python with Pytorch 2.0.1. The baseline methods are implemented using open-source packages or source code provided by the original researchers, adhering to recommended settings. The experiments involving PostgreSQL are conducted on PostgreSQL 13.1. All the experiments are conducted on a server with a Xeon(R) Silver 4214R CPU $\textcircled { \mathscr { a } } 2 . 4 0 \mathrm { G H z }$ (12 cores), 128G memory, and a GeForce RTX 3090 with CUDA 11.8. The OS is Ubuntu 20.04 with Linux kernel 5.4.0-72.
Evaluation Metrics. We evaluate FLAIR’s effectiveness and efficiency across various tasks using targeted metrics. (1) Effectiveness Metrics: For CE tasks, we report the accuracy by the geometric mean of the Q-error (GMQ) as (Li et al., 2022; Dutt et al., 2019) along with Q-error and P-error across various quantiles, with particular emphasis on the tail performance. For AQP tasks, we use mean relative error (MRE) to evaluate the accuracy of query approximations.
Table 1: Overall performance of cardinality estimation task under concept drift. The best performances are highlighted in bold and underlined, and the second-best are bold only.
Figure 4: Overview of dynamic settings, illustrated by distribution discrepancies confirmed by Kolmogorov-Smirnov test p-values below 0.01 pre- and post-concept drift.
Additionally, we apply accuracy and F1 score for data classification and mean squared error (MSE) and the coefficient of determination $( R ^ { 2 } )$ for data regression. (2) Efficiency Metrics: We assess FLAIR’s efficiency by examining storage overhead, building time, inference time, and adaptation time.
Dynamic Settings and Data Drift. In our study, we explore a dynamic data system marked by variations in both workload and data, which is illustrated in Figure 4. To emulate a real system environment, we introduce significant data drift after training and before testing. This involves sorting each column to alter the joint distribution of attributes and then performing random sampling from this permuted dataset. The impact of these manipulations on data distribution and attribute correlations is visually depicted through histograms and heat maps in Figure 4, showcasing the data characteristics before and after experiencing data drift. This dynamic scenario comprehensively mirrors real-world database operations where frequent insert, delete, and update actions induce gradual changes in data distribution. Over time, these incremental modifications accumulate, resulting in more pronounced shifts in data structures and inter-attribute relationships. To rigorously assess the robustness of our approach, we design two scenarios based on the extent and nature of the changes.
• Mild Drift: We permute and sample the data with $50 \%$ of the dataset experiencing drift, testing the model response to moderate yet significant changes in data without additional data manipulations.
Figure 5: Comparison of model efficiency.
• Severe Drift: We escalate the challenge by not only permuting and sampling $60 \%$ of the data but also integrating $10 \%$ random data manipulations, including additions, deletions, and value replacements to assess model capability under severe data transformations.
# 4.2. Effectiveness
In Table 1, we report the overall performance comparison in CE task. The results reveal that FLAIR consistently delivers superior performance across all datasets and dynamic scenarios, often matching or even surpassing the outcomes of the fine-tune approach. Specifically, FLAIR achieves the best performance in 29 out of 32 quantile metrics. Even when including fine-tune comparisons, FLAIR surpasses nearly half of the evaluations for all metrics, underscoring its considerable precision in dynamic environments. Additionally, FLAIR significantly outperforms PostgreSQL across all datasets and settings, highlighting the limitations of PostgreSQL’s independence assumption that often results in inaccuracies with non-uniform data distributions. Furthermore, our experiments reveal that existing methods, including those using fine-tuning and knowledge distillation, struggle with rapid and complex changes in dynamic systems. In contrast, FLAIR excels by promptly adapting to current concepts during concept drift, without data recollection, offline updates, or separate drift detection processes.
# 4.3. Efficiency
We evaluate the construction efficiency and resource usage of FLAIR alongside baseline models on the JOB-light benchmark. The results in Figure 5 demonstrate that FLAIR is notably efficient in both building and adaptation phases. Remarkably, FLAIR accelerates adaptation speed by $5 . 2 \times$ while reducing the GMQ by $2 2 . 5 \%$ compared with the best baseline. To further improve FLAIR’s inference efficiency, we implement an embedding caching mechanism in FLAIR, which eliminates redundant computations by preventing recomputation on the repeated inputs. This enhancement significantly accelerates the inference process, yielding competitive inference times. Taking the overall performance into consideration, the slightly higher storage requirement imposed by FLAIR is acceptable.
Figure 6: Comparison of model robustness for long-term incremental concept drift.
# 4.4. Long-term Incremental Concept Drift
To further assess FLAIR’s adaptability, we track the performance on STATS and JOB-light, focusing on gradual drift indicated by rising Kullback-Leibler divergence $D _ { K L }$ over extended periods. Figure 6 illustrates that FLAIR effectively handles the challenging conditions of long-term incremental concept drift across both benchmarks, even on par with model fine-tuning. Furthermore, we observe that DDUp based on knowledge distillation is inferior to finetuning under long-term gradual drift. This is in line with the results in Section 4.2, highlighting the inherent limitations of knowledge distillation: it mitigates catastrophic forgetting by preserving prior learned knowledge but can inadvertently replicate past errors, whereas fine-tuning directly adjusts to new data, correcting inaccuracies and adapting to evolving distributions. Conversely, FLAIR’s innovative in-context adaptation paradigm, guided by dynamic context memory, achieves negligible error accumulation and ensures sustained adaptability without further training, distinguishing it from both knowledge distillation and fine-tuning.
# 4.5. Transferability
In data systems, system-internal tasks like CE provide immediate critical outcomes for optimization, while it is often not straightforward for user-oriented tasks. Next, we validate FLAIR’s performance in user-oriented scenarios to showcase its wide applicability, where our context memory establishes a virtuous cycle of user feedback to refine model performance and facilitate system customization.
Approximate Query Processing. The results in Figure 7, measured in MRE, consistently show that FLAIR outper
Figure 7: Performance of AQP task under concept drift.
Figure 8: Decision boundaries and model performance on data classification task under concept drift.
Input Data KNN Gaussian Process XGBoost CatBoost MLP FLAIR
Moons(Original)Moons(Drift) ? 福 Acc: 0.975 Acc: 0.942 Acc: 0.942 Acc: 0.975 Acc: 0.925 Acc: 0.975 F1: 0.975 F1: 0.942 F1: 0.942 F1: 0.975 F1: 0.925 F1: 0.975 5 Acc: 0.620 Acc: 0.602 Acc: 0.574 Acc: 0.593 Acc: 0.463 Acc: 0.759 F1: 0.621 F1: 0.602 F1: 0.574 F1: 0.593 F1: 0.456 F1: 0.756
? : 心 ? : : + 心 3 Acc: 0.717 Acc: 0.733 Acc: 0.717 Acc: 0.717 Acc: 0.750 Acc: 0.750 F1: 0.718 F1: 0.732 F1: 0.717 F1: 0.714 F1: 0.751 F1: 0.752
G : : 南 T Acc: 0.444 Acc: 0.241 Acc: 0.315 Acc: 0.259 Acc: 0.204 Acc: 0.648 F1: 0.392 F1: 0.222 F1: 0.313 F1: 0.266 F1: 0.190 F1: 0.598
forms baseline approaches. Across various relations and dynamic settings, FLAIR achieves significant error reductions, with averages up to or exceeding $1 0 \times$ with $\mathrm { \ D B e s t { + } }$ $3 \times$ with $\mathrm { D B e s t + F T }$ , and $2 \times$ with DDUp. These findings highlight the effectiveness of FLAIR in handling complex query scenarios. Most of the time, FLAIR outperforms methods that rely on fine-tuning and knowledge distillation, such as $\mathrm { D B e s t + + F T }$ and DDUp. This superiority stems from the limitations associated with only updating models during significant data drifts, which may not suffice for the accurate execution of AQP tasks in real and live system scenarios.
In-database Data Analytics. We initially conduct a qualitative evaluation on illustrative toy problems to understand the behavior of FLAIR under concept drift, comparing against standard classifiers as shown in Figure 8. We utilize moons and iris datasets from scikit-learn (Pedregosa et al., 2011). For the drift scenarios, we allocate $10 \%$ of data for model updates and the remaining $90 \%$ for evaluation. In each case, FLAIR effectively captures the decision boundary between samples, delivering well-calibrated predictions. We extend our empirical analysis to real-world tasks, applying data classification for sentiment analysis and data regression for rating prediction on IMDB. (i) Data Classification. We conduct sentiment analysis (Maas et al., 2011) on IMDB, which is a prevalent binary classification task. We allocate $50 \%$ of the original data as the training set, and following prior setups, induced data drift on the remaining data. We designate $20 \%$ of the post-drift data as the update set and
Table 2: Performance of data classification on concept drift.
Z ϱ =20 ϱ =40 B ϱ =60 日 ϱ =80 口 ϱ =100 GMQ (lower is better) ★ Inference Time (lower is better) 5.00 0.30 0.40 56.500 0.230
4.00 5.00 0.20 3.50 4.50 3.00 0.10 4.00 (a) STATS (Mild) (b) STATS (Severe) 7.50 9 0.30 10.00 0.12505
5.00 0.25 8.00 0.20 6.00 0.15 4.00 0.00 0.10 (c) Job-light (Mild) (d) Job-light (Severe)
the remaining post-drift data as the test set. For models that support incremental updates, such as XGBoost, LightGBM, CatBoost, and MLP, we incrementally update the models initially trained on the training set using the update set, while others are retrained on the update set. Finally, we evaluate all models on the test set to measure their effectiveness in adapting to data drift, as summarized in Table 2. The mean time represents the total execution time, integrating building, adaptation, and inference time averaged across two drift scenarios. Our FLAIR distinctly showcases its robustness and adaptability in handling concept drift, resulting in superior performance across both mild and severe drift scenarios. Furthermore, FLAIR achieves this high accuracy while maintaining impressive computational efficiency compared with AutoGluon, making it exceptionally suited for practical dynamic environments where both performance and speed are crucial. (ii) Data Regression. Table 3 offers a comprehensive comparison of representative regression methods in the context of concept drift, focusing on movie rating prediction (IMD, 2024), a scenario typically characterized by evolving concepts. FLAIR excels in both mild and severe drift scenarios, maintaining consistent performance across MSE and $R ^ { 2 }$ metrics while demonstrating comparable efficiency. While AutoGluon delivers the best results under mild drift conditions, its performance noticeably declines under severe drift and requires more than $4 0 \times$ computational time compared to FLAIR.
Table 3: Performance of data regression on concept drift.
ρ =100% ρ =80% ρ =60% ρ =40% ρ =20% Mixed (lower is better) User Feedback Data Only (lower is better) Suboptimal Baseline Performance 1.00 1.50
PE mm 1.00 0.50 0.50 中 0.00 0.00 (a) cast_info (Mild) (b) cast_info (Severe) 1.00 2.00 0.75
0.50 1.50 0.25 闲闲电 化 0.00 1.00 (c) movie_keyword (Mild) (d) movie_keyword (Severe)
# 4.6. Ablation Study
Effects of Queue Size in Context Memory. We further analyze the sensitivity of FLAIR to the critical hyperparameter $\varrho$ , the size of queues in context memory, across various benchmarks and dynamic scenarios, as depicted in Figure 9. The results confirm that increasing the queue size contributes to performance enhancements without escalating system latency, owing to embedding cache optimization. Initially, performance improves significantly with an increase in queue size but eventually plateaus, indicating diminishing returns. Notably, an oversized queue size may introduce information redundancy, potentially leading to a performance decline. For instance, increasing the queue size to 100 results in a minor deterioration in the STATS benchmark’s mild drift scenario. In summary, the optimal queue size $\varrho$ should be tailored based on the complexity of the data to balance performance gains against the risk of redundancy, in order to optimize the model’s efficacy in dynamic environments.
Effects of User Feedback. To delve into the adaptability of FLAIR in user-oriented tasks, we evaluate how varying proportions of user feedback data $\rho$ within queues affect model performance. We use drifted data with ground-truth outputs to simulate user-customized feedback data, assessing the model’s conformity to user-specific requirements. Specifically, the queues comprise a certain proportion of user feedback data combined with the model’s recent inputoutput pairs. We maintain the queue size at 80 and vary the proportion of user feedback data. The results in Figure 10, demonstrate that increasing the proportion $\rho$ within a fixed queue size significantly enhances model performance, confirming the model’s ability to be customized by users. To further explore the impact of integrating recent model interactions into the queue on performance, we conduct comparative experiments using only user feedback data. We observe that mixed queues outperform those containing solely user feedback. Additionally, integrating recent model data mitigates performance decline as the proportion $\rho$ of user feedback decreases. Still, we advise against setting $\rho$ too low due to the risk of introducing noise. It is noteworthy that FLAIR surpasses the suboptimal model DDUp at most times even with very low $\rho$ , underscoring FLAIR’s capability in user-oriented applications.
Figure 11: Comparison of query execution latency.
# 4.7. FLAIR in Action
Given the observation from existing research (Negi et al., 2021; Marcus et al., 2021; Li et al., 2023b) that a smaller Qerror does not necessarily reduce execution times, we extend our investigation by integrating FLAIR into PostgreSQL to assess its efficacy in a full-fledged database system. We evaluate the latency measured as execution time per query on the test set of STATS and JOB-light. As in a recent work (Li et al., 2023b), we substitute PostgreSQL’s default cardinality estimator with FLAIR. Specifically, PostgreSQL uses the cardinality estimated by FLAIR to generate the execution plan for each query in the benchmarks. The optimal baseline is established by replacing PostgreSQL’s built-in estimations with ground-truth cardinalities. As depicted in Figure 11, FLAIR achieves latency that approaches the optimal level based on ground-truth cardinality. Compared to PostgreSQL’s built-in cardinality estimator, FLAIR accelerates query execution by up to $1 . 9 \times$ . This superiority is even more significant in severe drift scenarios.
# 5. Related Work
# 5.1. Advances and Challenges of AI×DB
Database systems are increasingly embracing artificial intelligence (AI), spurring the development of AI-powered databases $( \mathbf { A I } { \times } \mathbf { D B } )$ ) (Ooi et al., 2024; Zhu et al., 2024; Li et al., 2021; McGregor, 2021). This fusion marks a new era for database systems, in which AI functionalities are incorporated to enhance the overall system performance and usability. Consequently, advanced models such as deep neural networks (DNNs) and large language models (LLMs) are increasingly being integrated into database systems and applications, which has improved database management such as database tuning (Lao et al., 2024; Huang et al., 2024; Trummer, 2022), cardinality and selectivity estimation (Lee et al., 2024; Kurmanji and Triantafillou, 2023; Li et al., 2023b; Hilprecht et al., 2019), and indexing (Zhang et al., 2024b; Li et al., 2020; 2023a; Gao et al., 2023; Sun et al., 2023; Zhang et al., 2024a). Recent work (Zeighami and Shahabi, 2024) presents a theoretical foundation for developing machine learning approaches in database systems. However, unlike the data that AI models have been designed for, online transactional processing (OLTP) data is dynamic in nature and such dynamicity affects the robustness of models. Indeed, the phenomenon of concept drift, where the underlying data distributions and relations shift, remains a critical challenge. In this study, our goal is to provide a solution for addressing concept drift in databases, ensuring both accuracy and sustainability in dynamic environments.
# 5.2. Model Adaptation in Concept Drift
Variations in data critically affect the efficacy of AI-powered database systems, also known as learned database systems. Such discrepancies between training data and those encountered post-deployment significantly degrade system performance, challenging model reliability in dynamical environments for the practical deployment (Negi et al., 2023; Zeighami and Shahabi, 2024). Recent cutting-edge machine learning paradigms such as transfer learning (Jain et al., 2023; Kurmanji and Triantafillou, 2023; Kurmanji et al., 2024; Ying et al., 2018), active learning (Ma et al., 2020; Li et al., 2022; Lampinen et al., 2024), and multi-task learning (Kollias et al., 2024; Wu et al., 2021; Hu et al., 2024) have been employed to mitigate challenges of concept drift in AI-powered database systems. Notably, Kurmanji et al. utilize knowledge distillation, guided by loss-based out-ofdistribution data detection for handling data insertions (Kurmanji and Triantafillou, 2023), and explore transfer learning for machine unlearning to address data deletions in database systems (Kurmanji et al., 2024). Additionally, reinforcement learning (RL) has been used to strategically reduce the high costs of data collection by allowing an RL agent to selectively determine which subsequent queries to execute with a more targeted fashion (Zhang et al., 2019; Hilprecht et al., 2020; Zheng et al., 2024; Wang et al., 2023a). These strategies, while aimed at improving generalization in fluctuating environments, inherently face critical issues due to their requirements for data recollection and model retraining. For instance, optimizing query performance necessitates executing numerous query plans, a process that is computationally intensive and significantly extends execution time (Wu et al., 2021; Hilprecht and Binnig, 2021; Li et al., 2022). The need for repetitive executions, whenever new concepts are detected, further compounds the operational challenges.
Inspired by large language models (LLMs), zero-shot learning has been employed to enhance model adaptability to dynamic environments and generalize across different tasks (Hilprecht and Binnig, 2021; Zhou et al., 2023; Urban et al., 2023). While this approach is theoretically promising, it faces practical challenges, as pre-training or fine-tuning large foundation models still requires substantial real-world data collection. Additionally, the quality and relevance of training data to actual workloads remain uncertain until deployment, making post-deployment performance unpredictable. Further, existing methods struggle to keep pace with real-time evolving concepts and overlook inter-query relations, which compromises their effectiveness. To fundamentally address these challenges, we propose a fresh perspective on online adaptation for database systems that supports on-the-fly in-context adaptation to evolving concepts without unnecessary data collection or retraining, ensuring unparalleled effectiveness and efficiency in operational settings. | Machine learning has demonstrated transformative potential for database
operations, such as query optimization and in-database data analytics. However,
dynamic database environments, characterized by frequent updates and evolving
data distributions, introduce concept drift, which leads to performance
degradation for learned models and limits their practical applicability.
Addressing this challenge requires efficient frameworks capable of adapting to
shifting concepts while minimizing the overhead of retraining or fine-tuning.
In this paper, we propose FLAIR, an online adaptation framework that
introduces a new paradigm called \textit{in-context adaptation} for learned
database operations. FLAIR leverages the inherent property of data systems,
i.e., immediate availability of execution results for predictions, to enable
dynamic context construction. By formalizing adaptation as $f:(\mathbf{x} \,|
\,C_t) \to \mathbf{y}$, with $C_t$ representing a dynamic context memory, FLAIR
delivers predictions aligned with the current concept, eliminating the need for
runtime parameter optimization. To achieve this, FLAIR integrates two key
modules: a Task Featurization Module for encoding task-specific features into
standardized representations, and a Dynamic Decision Engine, pre-trained via
Bayesian meta-training, to adapt seamlessly using contextual information at
runtime. Extensive experiments across key database tasks demonstrate that FLAIR
outperforms state-of-the-art baselines, achieving up to 5.2x faster adaptation
and reducing error by 22.5% for cardinality estimation. | [
"cs.DB",
"cs.AI"
] |
# I. INTRODUCTION
Fact checking refers to the process of comparing a claim with other sources of information to verify its accuracy. It has a wide range of applications, including fake news detection [1] and claim verification in scientific publications [2]. As tables are important carriers of high-density information, fact checking in tabular contexts is particularly significant. However, existing table-based fact-checking studies [3]–[5] primarily focus on instance-level verification of individual claims. In instance-level settings, each claim and its supporting table evidence are explicitly provided, allowing the system to focus
Chaoxu Pang, Yixuan Cao, and Ping Luo are with Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences. E-mail: pangchaoxu21b, caoyixuan, luop @ict.ac.cn Ganbin Zhou and Hongwei Li are with Beijing PAI Technology Ltd. E-mail: zhougb $@$ paodingai.com, hw446.ict@gmail.com
on verifying a given claim-table pair. In contrast, documentlevel fact checking requires identifying and verifying relevant claim-table pairs from the entire document, where the number of candidate instances grows combinatorially. Document-level verification remains largely underexplored despite its significant real-world impact. In this work, we address a challenging document-level fact-checking problem: verifying numerical consistency across tables in disclosure documents. This task has broad applications in table-rich domains where numerical consistency is critical [6], [7].
In high-stakes domains such as finance, scientific research, government reporting, and corporate compliance, tables serve as the principal medium for presenting key quantitative indicators. Disclosure documents frequently contain extensive tabular data, where the same numerical fact may recur across different tables. We refer to these recurring numerical mentions as semantically equivalent. Figure 1a illustrates this concept with screenshots of three tables from a corporate annual report. These tables present the indicators in a structured format with rows and columns, allowing readers to more easily comprehend and compare the underlying data. The numerical mentions highlighted with solid boxes across the three tables are semantically equivalent—they all represent the identical fact that the company’s net assets at the end of fiscal year 2024 amounted to $U S \$ 49,120$ million. According to our statistics, over $20 \%$ of numerical facts in disclosure documents are mentioned multiple times.
In practice, numerical inconsistencies among mentions of the same numerical fact can occur due to unintentional errors during document preparation. For instance, if any of the three highlighted numerical mentions in Figure 1 were to deviate from the value “49,120,” an inconsistency would arise. Such errors can negatively impact public’s perception and decisionmaking, potentially resulting in significant consequences. Several studies [8], [9] have documented cases where numerical inaccuracies caused substantial reputational damage and economic losses across various sectors. As disclosure documents often form the backbone of transparency and regulatory compliance across industries, mechanisms for identifying and resolving numerical inconsistencies are essential for ensuring data integrity and public trust.
This situation underscores the pressing need for automated tabular numerical cross-checking systems [10]. The numerical cross-checking process can be decomposed into two sequential tasks: numerical semantic matching, which identifies all semantically equivalent numerical mention pairs within a document, and numerical comparison, which determines whether two such mentions are numerically equal. Pairs that are semantically equivalent but not numerically equal indicate potential
4.2 Key performance indicators
Summary of financial measures
Year ended 30 June
US\$M 2024
Consolidated Income Statement (Financial Statements 1.1) 55,658 155.8
Consolidated Balance Sheet (Financial Statements 1.3) 102,362 49,120
10 Non-IFRS financial information Underlying return on capital employed (ROcE) 120241
Year ended 30June ·
L Profit after taxation excluding net finance costs and exceptional items 16,044 Capital employed at the beginning of the period 59,696 Multi-Faceted Numerical Semantics
iNetassetsat theedoftheperiod 49.120 9,120 The meaning of this value has three Capital employed at the end of the period 58,240 facets: Time, Unit, and 58,968 Indicator Underlying return on capital employed 93 BHP Annual Report 2024 (a)
21 Net debt 2024
US\$M Current Non-current
Interest bearing liabilties
Tota interestbearinglialties Importance of Holistic Table Context
Lesseas lialtys iteits The right table breaks down each
Les: Cash and cash equivalents item under debt into current and non-current categories. For the
Less: Tota cash and cash equivalents net asset item, the current net
es asset is zero (outlined with a red dashed box). Therefore, the value
Less: Totalderivativesincluded in netdebt of the non-current net asset must
■ bpre seqnutaeldtion otthheertottabllense.t assets
169 BHP Annual Report 2024 (b)
inconsistencies. Since numerical comparison can typically be addressed with straightforward rules, this work concentrates on the more challenging task of numerical semantic matching. This task presents two primary challenges:
C1: Scalability for Massive Candidate Spaces. While prior research has mainly investigated table-based fact checking [11] on instance-level datasets such as TabFact [3], there remains a significant gap in addressing document-level verification, where the candidate instance space within a single document grows combinatorially. In the context of numerical semantic matching, typical disclosure documents contain thousands of numerical mentions; because each mention must be compared with every other mention to assess semantic equivalence, a single document can yield millions of candidate mention pairs. This immense scale presents significant challenges for both computational efficiency and service timeliness, demanding methods that can effectively balance performance and efficiency at scale. Previous research [12] has employed heuristic-based filtering techniques, such as grouping mentions along predefined attributes (e.g., time), which may improve efficiency but significantly limit the maximum achievable recall.
C2: Multi-Faceted Numerical Semantics. Each numerical mention encapsulates multiple semantic dimensions, such as temporal aspects and subject entities. The complete semantics extend beyond the surface-level values themselves, being distributed throughout the surrounding contexts—particularly within the table where the mention appears, as illustrated in Figure 1a. Previous research [12] typically addresses the challenge by extracting simplified key contexts (e.g., row and column headers) with hand-crafted rules or shallow neural encoders. However, incorporating information from the complete table context is essential for comprehensive numerical semantic understanding. For example, Figure 1b demonstrates that the information in one cell may influence the interpretation of another cell.
Recently, large language models (LLMs) such as GPT [13] and Qwen [14] have made remarkable progress in understanding context across both textual and semi-structured data [5], [15], [16], creating exciting opportunities to tackle Challenge C2 at the instance level. However, the significant computational overhead and memory demands of LLMs [17] introduce new efficiency bottlenecks for Challenge C1, particularly when processing real-world documents at scale and under service latency constraints. Moreover, general-purpose LLMs typically lack the specialized professional knowledge [18] required to accurately interpret numerical semantics within domainspecific contexts. For example, identifying the precise meaning of a numerical indicator may necessitate specific expertise in financial accounting, which is generally lacking in generic models.
To address these challenges, we introduce an efficient and high-performing LLM-based solution at the document level. We propose a novel Coarse-to-Fine Tabular Numerical CrossChecking framework (CoFiTCheck), which operates through two sequential stages:
Embedding-based Filtering. We introduce an efficient embedding-based approach for filtering candidate numerical mention pairs. Each mention is encoded as a dense embedding, allowing us to prune potential pairs based on embedding similarity. To address the high computational cost of encoding large numbers of numerical mentions with LLMs [17], we introduce a contextualized instructional parallel encoding strategy that jointly encodes all numerical mentions within a table in a single forward pass. For training, we propose a novel decoupled InfoNCE objective tailored to the unique characteristics of numerical semantic matching, where isolated mentions (mentions without any semantic equivalent) are common and can distort the learning process. Our decoupled approach explicitly accounts for both isolated and non-isolated mentions, enabling high-recall filtering while substantially reducing candidate pairs.
Discriminative Classification. We employ a larger, specialized LLM (ClsLLM) for fine-grained classification of remaining candidate mention pairs. To equip ClsLLM with domain-specific knowledge, we introduce Cross-table Numerical Alignment Pretraining (CNAP), a new pretraining paradigm that leverages cross-table numerical equality relationships as weak supervision signals, enabling the model to learn semantic equivalence patterns without manual annotation.
Comprehensive evaluation across three diverse types of realworld disclosure documents demonstrates the effectiveness and scalability of CoFiTCheck. Using a 7B parameter ClsLLM, our approach achieves approximately $90 \%$ F1 score, surpassing previous methods by around 10 points. The framework exhibits remarkable efficiency, processing each document in just 40.8 seconds when deployed on four NVIDIA GeForce RTX 4090 GPUs. Notably, our CNAP approach delivers consistent performance gains without requiring manual annotations, highlighting its practical applicability. Overall, CoFiTCheck offers an effective solution for automated tabular numerical crosschecking in disclosure documents, delivering valuable insights for document-level fact checking in real-world applications.
# II. RELATED WORK
# A. Table-based Fact Checking
Table-based fact checking (verification) has emerged as a critical research area in machine learning and natural language processing, serving as a primary defense against misinformation. Previous studies mainly focus on statementto-table checking, which aims to determine whether natural language statements are entailed or refuted by tabular data. A significant line of research focuses on open-domain table fact checking. Datasets such as TabFact [3] and FEVEROUS [4] have catalyzed progress in this area, providing standardized benchmarks for developing and evaluating systems. Recently, Dater [11] proposed using large language models (LLMs) as versatile decomposers that break down complex statements into simpler components, combining this with a parsingexecution-filling strategy to decouple logic from numerical computation, achieving human-surpassing performance on the TabFact benchmark for the first time. These studies typically focus on the instance level, verifying statements against corresponding semi-structured tables from Wikipedia, and have achieved remarkable success in instance-level fact-checking.
Our work focuses on a distinct and challenging documentlevel table-to-table checking task: verifying the equivalence of numerical mentions in documents. This task presents two significant challenges: handling a large volume of candidate instances and understanding multi-faceted numerical semantics, as detailed in Section I. The most recent work, AutoCheck [12], addressed the first challenge by employing several grouping and deduplication rules to pre-filter candidate pairs. For the second challenge, the system first extracts key components of each numerical mention (such as row and column headers) and then encodes these components with a specialized cell embedding network. In real-world applications, AutoCheck demonstrated remarkable effectiveness, reducing auditing work hours by $5 2 \%$ . Despite its practical success, AutoCheck employed simplifications that limited its effectiveness. Specifically, it reduced complex table contexts to key parts and relied on heuristic rules to pre-filter candidate pairs. While these techniques enhanced system efficiency, they significantly compromised overall performance. To overcome these limitations, our current work introduces a coarse-to-fine approach that harnesses the power of LLMs, enabling us to preserve contextual richness without sacrificing computational efficiency.
# B. Large Language Models
Large Language Models (LLMs) [19]–[22] have emerged as a transformative force in recent years, demonstrating extensive world knowledge, strong contextual understanding, and sophisticated instruction-following capabilities. Our research intersects with two key sub-domains:
LLMs for Representation Learning. Recent research [23] has revealed LLMs’ exceptional potential as backbone encoders over small models (e.g. BERT-based [24]) for dense retrieval tasks, largely due to their massive parameter counts and comprehensive pre-training regimes [25]. Several approaches [26], [27] employ LLMs as unsupervised dense embedders—while computationally efficient, these methods often fail to fully leverage the models’ inherent capabilities. More sophisticated strategies [28], [29] explicitly pre-train or fine-tune LLMs to optimize performance on retrieval tasks. For instance, Ma et al. [28] fine-tuned LLaMA [30] models for multi-stage text retrieval, demonstrating significant improvements over smaller models and exhibiting impressive zeroshot capabilities. While previous studies focus on encoding entire contexts or individual elements (queries, passages), our work focuses on the problem of simultaneously encoding multiple fine-grained facts (numerical mentions) within a shared context. We introduce a instructional parallel encoding approach that jointly represents all numerical mentions within a single table in one forward pass, substantially improving computational efficiency. Furthermore, we fine-tune LLMs using a decoupled InfoNCE objective specifically designed for numerical semantic matching tasks.
LLMs for Table Understanding. Recent studies have demonstrated that LLMs exhibit remarkable capabilities in understanding table semantics. Several comprehensive investigations provide systematic evaluations on table understanding abilities. Zhao et al. [31] and Pang et al. [32] highlight LLMs effectiveness in information seeking from tabular data. Akhtar et al. [33] evaluate LLMs’ numerical reasoning capabilities across a hierarchical taxonomy of skills, finding that models such as GPT-3.5 [34] excel particularly in tabular natural language inference tasks, demonstrating their potential for numerical reasoning in structured contexts. Beyond evaluations, numerous research efforts focus on practical applications that leverage and enhance LLMs’ table understanding capabilities. Zhang et al. [35] show that LLMs significantly outperform smaller specialized models in table understanding tasks, with their TableLlama (fine-tuned on LLaMA 2 [30]) achieving comparable or superior performance to state-ofthe-art task-specific models across diverse table-based tasks. These findings collectively establish a strong foundation for our approach, which utilizes LLMs as powerful tools for interpreting and reasoning with tabular numerical semantics.
Numerical mention Representation Embedding-Based Matching Raw Tables a b c ad eb c a a c ✘ (Pa,ircs) Table 12 b a stPilhveealsfyeoclroenwpsirinedgseenrinuntmgtechrioecnastlexmtaeunatl cofnacschtbaoryrascostuemcriphsrtaeishc setinom-fe, c c f a e (fb, nm)) #2: Discriminative Classification Prompt Management CNAP Can(Pada,irdcsa)te Task Description itnsd icoartre tshpaotndtihnegi rtavbaleuecsosntheoxultd, wbiethcoanllsivsatleunets. mEacskhevdaluseinisg aplcacoemhpoaldnierds b…y W OPuatiprust (a, e) Input Context (a, c) (b, m) Table-2: Chapter Text Before Table Table Title Table Content (f, n) Pretrain S (f, n) Numerical Mention “[T1R8C9]” is located in Table-1 (row 8, column 9), : Output Instruction Numerical Mention “[T2R2C2]” is located in Table-2 (row 2, column 2), Please output “yes” if their values should be consistent; otherwise output “no”.
# III. METHOD
# A. Overview
Given a set of numerical mentions $\mathbb { \gamma } = \left\{ v _ { k } \right\} _ { k = 1 } ^ { | \mathcal { V } | }$ and their associated table contexts $\mathcal { C } \ = \ \{ c _ { k } \} _ { k = 1 } ^ { | \mathcal { V } | }$ wi{thin} a=1document, the numerical semantic matching task aims to identify semantically equivalent pairs of numerical mentions. The context $\scriptstyle c _ { k }$ of a numerical mention $v _ { k }$ is a string that encompasses all relevant textual information required to interpret its semantics. Specifically, this context string $c _ { k }$ comprises the table containing $\boldsymbol { v } _ { k }$ (typically linearized into markdown format [36]), the chapter title, surrounding text (limited to 500 characters) of the table, and the precise position of $v _ { k }$ within the tabular structure.
As illustrated in Figure 2, CoFiTCheck addresses the numerical semantic matching task through two consecutive stages: Embedding-based Filtering (Section III-B) and Discriminative Classification (Section III-C). Each stage is powered by a specialized large language model - EmbLLM and ClsLLM respectively. Additionally, we introduce cross-table numerical alignment pretraining in Section III-D to further enhance ClsLLM’s performance.
# B. Embedding-based Filtering
When handling a large number of candidate numerical mention pairs within a single document (Challenge C1), it is crucial to efficiently reduce the search space before performing fine-grained classification (Section III-C). This is analogous to candidate item retrieval in recommendation systems [37]– [39], where an initial subset of potentially relevant items is retrieved before applying more computationally intensive reranking methods.
To prune candidate pairs, we propose an embedding-based approach that first encodes each numerical mention as a dense embedding. These embeddings capture compact semantic representations of the numerical mentions, enabling efficient retrieval of semantically equivalent pairs. Notably, we observe that a single table often contains multiple numerical mentions that share the same table context, differing only in their positions within the table. This motivates us to obtain embeddings for all mentions simultaneously rather than processing each one separately [23], [28]. Furthermore, it is crucial to adhere to the instruction-tuning format that modern LLMs are predominantly trained on [40]. To this end, we propose a Contextualized Instructional Parallel Encoding (CIPE) strategy.
Specifically, as illustrated in Figure 2, we leverage an EmbLLM to encode all numerical mentions $\{ v _ { 1 } , . . . , v _ { n } \}$ within a given table context $c$ in a single forward pass. We construct the input by concatenating the table context with a prompt $p _ { \mathrm { e m b } }$ that instructs the LLM to encode the subsequent numerical mentions. All the numerical mentions within the table are then sequentially appended after the prompt. For each numerical mention $v _ { j }$ , we extract its representation $e _ { j }$ by taking the last hidden state of its final token:
$$
[ e _ { 1 } , . . . , e _ { n } ] = f _ { \mathrm { E m b L L M } } ( c \oplus p _ { \mathrm { e m b } } \oplus v _ { 1 } \oplus . . . \oplus v _ { n } ) ,
$$
where each component in Equation 1 is first tokenized into a sequence of tokens before being fed into the model and $\oplus$ denotes sequence concatenation. To prevent cross-contamination between numerical mentions after the prompt, we implement a specialized attention masking and positional encoding mechanism. Formally, for each numerical mention $\boldsymbol { v } _ { i }$ consisting of $N _ { i }$ tokens $\{ t _ { i , 1 } , . . . , t _ { i , N _ { i } } \}$ , we modify the attention mask $M$ such that for $i \in [ 1 , n ]$ , $m \in [ 1 , N _ { i } ]$ :
$$
M [ t _ { i , m } , t ^ { \prime } ] = \left\{ { 1 , } \atop { \mathrm { ~ o t h e r w i s e } } \right. \cup \left. { \mathrm { T } } ( p _ { \mathrm { e m b } } ) \cup \left\{ { t _ { i , 1 } , . . . , t _ { i , m } } \right\} \right.
$$
where $\operatorname { T } ( \cdot )$ denotes the token set. This ensures that, after the prompt, tokens within numerical mentions can only attend to the table context $\boldsymbol { c }$ , the prompt $p _ { \mathrm { e m b } }$ , and preceding tokens within the same numerical mention. Additionally, we reset position indices for tokens of these numerical mentions to start after the end of the prompt $p _ { \mathrm { e m b } }$ , regardless of their absolute positions in the sequence:
$$
\mathrm { P o s i t i o n } ( t _ { i , m } ) = | \mathrm { T } ( c ) \cup \mathrm { T } ( p _ { \mathrm { e m b } } ) | + m - 1 .
$$
These adjustments preserve the contextual understanding while isolating the representations of individual numerical mentions.
meAntfitoer $\mathcal { E } ~ = ~ \left\{ e _ { k } \right\} _ { k = 1 } ^ { | \mathcal { V } | }$ .haWve tehmenb pdrduineg onf adlal nupamiresr bayl retaining only those whose embeddings exhibit a similarity above a given threshold $t$ :
$$
\mathcal { P } _ { \mathrm { c a n d } } = \{ ( i , j ) | \cos ( e _ { i } , e _ { j } ) > t , ( v _ { i } , v _ { j } ) \in \mathcal { V } _ { i } \times \mathcal { V } _ { j } , i \neq j \} .
$$
To efficiently identify candidate pairs at scale, we leverage the HNSW algorithm [41] implemented in the FAISS library [42] for approximate nearest neighbor searches across embeddings. This approach significantly reduces the computational complexity of constructing $\mathcal { P } _ { \mathrm { c a n d } }$ from a naive $O ( | \mathcal { E } | ^ { 2 } )$ to approximately $O ( | \mathcal { E } | \log | \mathcal { E } | )$ , enabling efficient large-scale retrieval in practice.
To train the EmbLLM effectively, we propose a decoupled InfoNCE objective utilizing in-batch negatives [43], [44]. Each training batch consists of a collection of table contexts and their corresponding numerical mentions. For each mention $i$ in the batch, we define $\mathcal { P } ( i )$ as the set of indices of mentions that are semantically equivalent to it, while treating the remaining mentions as negatives.
Notably, numerical semantic matching differs from traditional retrieval tasks in two key aspects: (1) mentions serve as both queries and passages, and (2) most mentions are isolated without semantic equivalents. To address this, we propose a decoupled objective as follows. Let $\mathcal { N } \mathrm { n } , \mathcal { N } \mathrm { i }$ denote the set of non-isolated and isolated numerical mentions in a batch, respectively. Our training objective comprises two components:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { n } } = - \frac { 1 } { \left| \mathcal { N } _ { \mathrm { n } } \right| } \displaystyle \sum _ { i = 1 } ^ { | \mathcal { N } _ { \mathrm { n } } | } \log \frac { \sum _ { j \in \mathcal { P } ( i ) } \exp ( \sin ( e _ { i } , e _ { j } ) / \tau ) } { \sum _ { k \in \mathcal { N } _ { \mathrm { n } } } \exp ( \sin ( e _ { i } , e _ { k } ) / \tau ) } , } \\ { \mathcal { L } _ { \mathrm { i } } = - \log \frac { \epsilon } { \epsilon + \sum _ { \stackrel { \scriptstyle ( t , q ) \in \mathcal { N } _ { \mathrm { i } } \times \mathcal { N } _ { \mathrm { i } } } { t \neq q } } \exp ( \sin ( e _ { t } , e _ { q } ) / \tau ) } , } \end{array}
$$
where $\sin ( \cdot , \cdot )$ denotes the cosine similarity between two embeddings, $\tau$ is a temperature parameter, and $\epsilon$ is a small constant. The loss ${ \mathcal { L } } _ { \mathrm { n } }$ encourages semantically equivalent mentions to have similar representations while pushing apart non-equivalent pairs, whereas $\mathcal { L } _ { \mathrm { i } }$ explicitly enforces dissimilarity between isolated numerical mentions. The final training objective is a weighted combination of these two losses: ${ \mathcal { L } } = \alpha _ { 1 } { \mathcal { L } } _ { \mathrm { n } } + \alpha _ { 2 } { \mathcal { L } } _ { \mathrm { i } }$ .
# C. Discriminative Classification
Following the coarse-grained embedding-based filtering stage, we perform fine-grained classification on each candidate pair to accurately determine their semantic equivalence. While embedding models effectively represent numerical mentions across the entire document space, they may not capture the nuanced semantic relationships between specific pairs. In contrast, discriminative classification examines one pair at a time, placing the contexts of both mentions together to form a query, allowing for more fine-grained comparative analysis of their semantic features. We conduct prompt management to facilitate LLMs in comprehending the task and producing the desired outputs. As illustrated in Figure 2, the prompts are composed of the following three components:
• Task Description $\begin{array} { r } { p _ { \mathrm { c l s } } } \end{array}$ : Provides explicit instructions for the task, including explanations of the input-output format and essential definitions to clarify specialized concepts relevant to the task.
• Input Context: Supplies the complete contextual information surrounding the two numerical mentions. Additionally, all numerical mentions in the table are masked with placeholders to prevent LLMs from relying on value equality as a shortcut for determining semantic equivalence.
• Output Instruction $p _ { \mathrm { o u t } }$ : Specifies the locations (row and column) of the two target values in the table and instructs the LLM to make a binary decision on whether the two values are semantically equivalent.
Formally, for each pair $( i , j ) \in \mathcal { P } _ { \mathrm { c a n d } }$ , we prompt an LLM to perform fine-grained binary classification as follows:
$$
r _ { i , j } = f _ { \mathrm { C l s L L M } } ( p _ { \mathrm { c l s } } \oplus c _ { i } \oplus c _ { j } \oplus p _ { \mathrm { o u t } } ( v _ { i } , v _ { j } ) ) .
$$
We then parse the response $\boldsymbol { r } _ { i , j }$ to obtain the final prediction for each numerical mention pair. We train the ClsLLM using standard cross-entropy loss [45].
# D. Cross-Table Numerical Alignment Pretraining
Though large language models (LLMs) exhibit broad world knowledge, recent studies [18] have shown that they still lack knowledge in professional domains. To enhance LLMs’ understanding of semantically equivalent numerical mentions in professional documents, we propose a Cross-table Numerical Alignment Pretraining (CNAP) approach.
The key idea is that, rather than aligning numerical mentions with descriptions in natural languages, we aim to teach the model to identify patterns of semantically equivalent numerical pairs. Intuitively, we observe that the equality relationship between numerical mentions naturally provides weak supervision signals of semantic equivalence. For example, in the bottomright part of Figure 3, when LLMs are pretrained to perform next-token prediction on the second (right) table, they are required to learn to identify and duplicate correct semantically equivalent values from the first (left) table. Therefore, we aim to reorder the tables in a document to construct pretraining sequences such that tables with more equal numerical mentions are positioned closer to each other.
#1. Build Table Relevance Graph 1 #2. Construct Pretraining Sequence 1 : Document Table Relevance Graph Path: Tab 5 0.2 Tab 4 0 Tab 3 0.6 Tab 1 0.4 Tab 2 Tab 1 Tab 1 21 Net debt Tab 5 Tab 4 Tab 2 0.6 0.4 US\$M Interest bearing liabilties Curent24 on uren Net debt and gearing ratio Year ended 30 June 202 2,084 Tab 3 → Tab 3 0.3 Tab 2 interestbearing liabilities [2.084] T18,6347- --Comprtsng: 18.638 Les:easeliablityasoaedwitindex-linkdfreghtacts Less: Cash and cash equival … Tab 4 0.2 Tab 4 es Tab 5 12.51 Tab 5 Edge: Relevance Score Net debt (2 Netassets ...EE\*\*\*\* 20 : Net assets 0 Gearing . 15.7% Gearing 15.7% 169 BHP Annual Report 2024 90 BHP Annual Report 2024
We formulate the problem as the maximum traveling salesman problem [46] which aims to find the maximum weight path that traverses all nodes in a graph exactly once. Since exactly solving the problem is NP-hard, inspired by [47], we apply a greedy algorithm as an efficient approximation.
As described in Algorithm 1, CNAP begins by representing tables in a document, denoted as $T = \{ t _ { i } \} _ { i = 1 } ^ { | T | }$ , as nodes in an undirected graph. The edges between nodes are weighted using relevance scores, which are defined as the Intersection over Union (IoU) of the numerical mention lists. Mathematically, for two tables $t _ { i }$ and $t _ { j }$ with numerical mention lists $V _ { i }$ and $V _ { j }$ , the relevance score can be expressed as:
$$
R ( t _ { i } , t _ { j } ) = \frac { \mathsf { e q u a l } ( V _ { i } , V _ { j } ) } { | V _ { i } | + | V _ { j } | } ,
$$
where the equal function returns the number of equal numerical mentions.
The workflow of CNAP is depicted in Figure 3. CNAP traverses the graph by first selecting an unvisited table with the minimum degree as the starting node (Tab 5). Then the current path is iteratively extended by selecting the unvisited neighboring table with the highest weight (Tab 4). This process continues until reaching a node where all neighboring tables have been visited—a situation that could occur because the graph is not complete and only contains edges between tables sharing equal numerical mentions. In such cases, CNAP extends the graph with a zero-weight edge to a randomly selected unvisited table with the minimum degree (Tab 3) and resumes the process. The preference for selecting minimumdegree tables as starting points is because that they are most likely to have all their neighbors visited first, resulting in connections to irrelevant tables in the final path. Finally, the resulting traversal path is truncated to create fixed-sized input contexts appropriate for pretraining. We use the standard next token prediction loss [45] for pretraining.
# IV. EXPERIMENTS
# A. Experimental Setups
We first introduce basic experimental settings, including datasets, evaluation metrics, implementations, and baselines.
# Algorithm 1 CNAP
Require: A document with tables $T ~ = ~ \{ t _ { i } \} _ { i = 1 } ^ { | T | }$ and their numerical mention lists $\{ V _ { i } \} _ { i = 1 } ^ { | T | }$
Ensure: A pretraining dataset $\mathcal { P } _ { p t }$
1: Initialize graph $G = ( T , E )$ , where each table is a nod
$E \{ \}$
2: for each pair of tables $( t _ { i } , t _ { j } )$ do
3: Compute the relevance score R(ti, tj) = equVail(+Vi,Vjj)
4: if $R ( t _ { i } , t _ { j } ) > 0$ then
5: Add an edge $( t _ { i } , t _ { j } )$ to $E$ with weight $R ( t _ { i } , t _ { j } )$
6: end if
7: end for
8: Initialize path $P [ ]$
9: while $\vert T \vert > 0$ do
10: $t _ { i } \gets \operatorname* { m i n } _ { - } \mathrm { d e g } ( T )$
11: P.append $( t _ { i } )$
12: T.remove $\left( t _ { i } \right)$
13: while A $\mathbf { l j } ( t _ { i } ) \cap T \neq \emptyset$ do
14: tj ← arg maxt Adj(ti) T edge weight(ti, t)
15: ti ← tj
16: P.append(ti)
17: T.remove(ti)
18: end while
19: end while
20: Truncate $P$ to create fixed-sized input contexts $\mathcal { P } _ { p t }$
21: return $\mathcal { P } _ { p t }$
1) Datasets: We collect three sets of Chinese disclosure documents: IPO prospectuses, auditor’s reports, and annual reports. These documents are widely used in financial disclosure and contain extensive tabular data, requiring a high degree of numerical consistency. IPO prospectuses provide detailed information about a company’s financials and risks to ensure transparency and regulatory compliance during public offerings. Auditor’s reports offer independent evaluations of financial statements, verifying their accuracy and enhancing stakeholder confidence. Annual reports present a comprehensive summary of a company’s yearly performance, operations, and future outlook to inform shareholders and stakeholders.
Audit reports cover the financials of a single year, and the financial sections of annual reports are mainly based on the data from the audit reports. Each document is manually annotated using a pipeline similar to prior work [12]. The statistics of these datasets are shown in Table I. Notably, the ratios of positive to negative pairs are highly imbalanced, particularly in auditor’s reports, which exhibit a pos-neg ratio of 1:73,362. This extreme imbalance poses significant challenges to the system’s performance and efficiency. After annotation, we split each dataset into training, validation, and test sets in an 8:1:1 ratio at the document level. Additionally, we crawled 11,635 annual reports from a stock exchange website 1 for pretraining purposes.
TABLE I DATASET STATISTICS. THE ABBREVIATION ”PD.” STANDS FOR ”PER DOCUMENT”.
2) Metrics: The numerical semantic matching task aims to identify a set of semantically equivalent numerical pairs from a document. Due to the extreme imbalance in the ratios of positive to negative pairs, we adopt set-level precision, recall, and F1 as evaluation metrics, following prior work [12]. Specifically, given a set of golden pairs $\{ g _ { 1 } , . . . , g _ { n } \}$ and predicted pairs $\{ p _ { 1 } , . . . , p _ { n } \}$ of $n$ documents, we define the following metrics:
$$
{ \mathrm { P r e c i s i o n ~ ( P ) } } = { \frac { \sum _ { i = 1 } ^ { n } \left| g _ { i } \cap p _ { i } \right| } { \sum _ { i = 1 } ^ { n } \left| p _ { i } \right| } } ,
$$
$$
\mathrm { R e c a l l ~ ( R ) } = \frac { \sum _ { i = 1 } ^ { n } \left| g _ { i } \cap p _ { i } \right| } { \sum _ { i = 1 } ^ { n } \left| g _ { i } \right| } ,
$$
$$
\mathrm { F } 1 = \frac { 2 \cdot \mathbf { P } \cdot \mathbf { R } } { \mathbf { P } + \mathbf { R } } .
$$
3) Implementations: We select the Qwen2.5 series [21] as our backbone due to its exceptional Chinese language understanding capabilities. For the embedding-based filtering stage, we utilize Qwen2.5-0.5B-Instruct as the backbone for EmbLLM. For the decoupled InfoNCE loss, we set the temperature $\tau$ to 0.15, $\alpha _ { 1 }$ to 0.75, and $\alpha _ { 2 }$ to 0.25. The model is trained for 3 epochs with a learning rate of $1 \times 1 0 ^ { - 5 }$ and a batch size of 12 tables per GPU. During inference, we set the embedding similarity threshold to 0.5, which provides an effective balance between recall and efficiency (see Section IV-E for detailed analysis).
For the discriminative classification stage, we adopt the 0.5B, 1.5B, 3B, and 7B instruct versions as backbones for ClsLLM, leveraging the increased model capacity to ensure more accurate classification. These models are trained for 2 epochs with a learning rate of $2 \times 1 0 ^ { - 5 }$ and a batch size of 20 per GPU. For CNAP implementation on ClsLLM, we first pretrain the backbone for 2 epochs with a learning rate of
$2 \times 1 0 ^ { - 5 }$ and a batch size of 16 per GPU, followed by the same fine-tuning procedure.
All training procedures are conducted on a cluster of 24 H100 GPUs. We employ the Huggingface Transformers library [48] and DeepSpeed ZeRO [49] for efficient distributed training, and utilize vLLM [50] for efficient inference. A cosine learning rate scheduler with linear warmup over the first 0.02 epochs is used. For both training and inference, the input length is set to 4096.
4) Baselines: For overall performance, we compare with the most recent work, AutoCheck [12]. AutoCheck provides an end-to-end solution for the numerical semantic matching task. It first employs a cell embedding network to generate cell embeddings, followed by a cell pair classification step to determine whether each pair is semantically equivalent. To enhance efficiency, it applies heuristc-based filtering techniques, reducing the number of candidate pairs by a factor of four. We report the performance of AutoCheck as presented in the original paper [12]. We use the same training and test splits, ensuring that the results are directly comparable.
Notably, AutoCheck is the only existing baseline specifically designed for this task. Recent studies have shown that for specific tasks, such as information extraction [51] and text classification [52], cutting-edge LLMs under zero/fewshot settings are comparable to or even surpass smaller expert models specifically trained on specialized tasks. For further comparison, we also evaluate two categories of state-of-theart large language models without task-specific finetuning:
General-purpose LLMs, including GPT-4o-mini [53], GPT-4o [19], and DeepSeek-V3 [22], which demonstrate broad world knowledge and remarkable table understanding capabilites.
• Reasoning-specialized LLMs, including OpenAI-o3- mini [54], DeepSeek-R1 [55], and OpenAI-o1 [20], which are specifically optimized for advanced reasoning and excel in complex reasoning tasks.
The baselines for embedding-based filtering and CNAP are described in detail in Sections IV-E and IV-F, respectively.
# B. Overall Performance Comparison
In this section, we present the overall performance of the numerical semantic matching task on three types of documents. As shown in Table II and Figure 4, we can observe several key findings:
• Superior performance of CoFiTCheck: Our proposed CoFiTCheck significantly outperforms AutoCheck across all document types. With the 0.5B ClsLLM backbone, CoFiTCheck achieves an F1 score of $8 3 . 8 \%$ and $8 6 . 7 \%$ on auditor’s reports and IPO prospectuses, surpassing AutoCheck by 8.4 and 5.4 points, respectively. Increasing the size of ClsLLM significantly improves performance. CoFiTCheck with 7B ClsLLM achieves highest performance, reaching $8 7 . 0 \%$ , $9 0 . 3 \%$ , and $9 0 . 8 \%$ F1 scores on auditor’s reports, IPO prospectuses, and annual reports, respectively. This represents an improvement of 11.6 points over AutoCheck on auditor’s reports and 9.0 points on IPO prospectuses.
TABLE II OVERALL PERFORMANCE COMPARISON ACROSS DIFFERENT DOCUMENT TYPES
Fig. 4. F1 scores of CoFiTCheck across three document types with varying ClsLLM sizes. CoFiTCheck w. CNAP generally outperforms CoFiTCheck, with performance consistently improving as ClsLLM size increases from 0.5B to 7B parameters.
Effectiveness of CNAP: Our proposed Cross-Table Numerical Alignment Pretraining (CNAP) method demonstrates effectiveness in further boosting overall performance without manual annotations. For example, when applied to the 1.5B ClsLLM, CNAP improves the F1 scores by 0.7, 1.2, and 0.3 points across the three document types compared to the standard CoFiTCheck with the same backbone size. CNAP enables smaller ClsLLM models to achieve performance comparable to their larger counterparts without CNAP. As shown in Figure 4, the 3B model with CNAP achieves $8 6 . 7 \%$ and $9 0 . 7 \%$ F1 scores on auditor’s reports and annual reports respectively, which is comparable to the 7B model without CNAP ( $8 7 . 0 \%$ and $9 0 . 8 \%$ ). Notably, CoFiTCheck with CNAP using the 7B ClsLLM backbone achieves the best overall performance on auditor’s reports and annual reports. CNAP demonstrates more substantial performance gains on annual reports and auditor’s reports compared to IPO prospectuses. This likely stems from our pretraining corpus composition, which predominantly consists of 11,635 annual reports2. We consider expanding our pretraining to incorporate a more diverse range of document types as an important direction for future
work.
These results validate the effectiveness of the CoFiTCheck framework and the proposed pretraining method CNAP for numerical semantic matching tasks over disclosure documents, particularly when combined with larger model capacities.
# C. Performance Comparison with SOTA LLMs
In this section, we compare our ClsLLM with state-ofthe-art LLMs on the discriminative classification task. We randomly select 1k samples from the test set of discriminative classification as the test bench3. For all models, we use greedy decoding (temperature $\mathit { \Theta } = \ 0$ ) with zero-shot prompting. The results are shown in Table III.
The experimental results reveal several important findings:
• SOTA LLMs show promising performance: SOTA LLMs demonstrate strong capabilities in the discriminative classification task without specific fine-tuning, with OpenAI-o1 achieving an F1 score of $7 7 . 4 \%$ . This indicates that recent advancements in LLMs have equipped these models with great numerical understanding abilities in tables. Notably, reasoning-specialized models consistently outperform general-purpose counterparts from the same provider. This performance gap likely stems from the nature of the discriminative classification task, which requires analyzing and comparing numerical semantics in tables—a process inherently demanding reasoning capabilities.
TABLE III PERFORMANCE COMPARISON OF VARIOUS LLMS ON DISCRIMINATIVE CLASSIFICATION
Task-specific fine-tuning remains crucial: The 0.5B ClsLLM significantly outperforms the best reasoningspecialized model, OpenAI-o1. The advantage becomes even more pronounced with ClsLLM-7B w. CNAP, which achieves an F1 score improvement of 14 points. Examining the false positive rate (i.e., 1−precision) further highlights this gap: OpenAI-o1 exhibits a false positive rate of $2 2 . 6 \%$ , whereas ClsLLM-7B w. CNAP reduces this to just $8 . 6 \%$ , representing an almost threefold decrease. This considerable performance gap underscores that discriminative classification demands specialized knowledge and domain expertise that current LLMs lack, highlighting the importance of task-specific fine-tuning even in the era of powerful foundation models.
# D. Overall Efficiency Comparison
In this section, we evaluate the efficiency of CoFiTCheck using 126 test documents on 4 NVIDIA GeForce RTX 4090 GPUs. We deploy the EmbLLM and ClsLLM sequentially as 4 distributed workers across these GPUs, reporting the averaged per-document processing time for each stage. Our ablation studies examine: (1) Removing Parallel Encoding, removing the parallel encoding strategy of CIPE, which forces encoding one numerical mention per forward pass; (2) Heuristic-based Filtering, replacing stage 1 with heuristicbased filtering from AutoCheck [12]; and (3) Removing Stage 1, removing embedding-based filtering entirely, which processes all candidate pairs in stage 2. For the latter two computationally intensive scenarios, we estimate4 runtimes based on average processing times.
TABLE IV RUNTIME COMPARISON OF COFITCHECK SYSTEM ACROSS DIFFERENT CLSLLM SIZES, SHOWING AVERAGE PROCESSING TIME PER DOCUMENT.
As presented in Table IV, CoFiTCheck demonstrates remarkable efficiency. It processes a document in just 15.7 seconds with the 0.5B ClsLLM and 40.8 seconds with the 7B ClsLLM. Considering that manual verification typically requires tens of hours of expert review per document [12], CoFiTCheck’s processing speed is well-suited for practical deployment in real-world scenarios.
Our ablation study reveals several key efficiency insights:
Superior efficiency of Parallel Encoding: When the parallel encoding strategy is removed, the processing time for stage 1 increases dramatically from 12.4 seconds to 309.9 seconds—a $2 5 \times$ slowdown—highlighting the effectiveness of our parallel encoding approach. A similar acceleration is observed during training: with parallel encoding, the training process takes approximately 1 day, whereas without it, training would require about 25 days. • Necessity of embedding-based filtering: When stage 1 is removed entirely, the processing time increases to approximately 1.5 days for the 0.5B model and 12.9 days for the 7B model. Besides, CoFiTCheck with embeddingbased filtering is approximately $4 2 0 \times$ faster than using heuristic-based filtering with the 0.5B model and $1 { , } 4 0 0 \times$ faster with the 7B model.
These improvements collectively address Challenge C1 for tabular numerical cross-checking, making CoFiTCheck practical for real-world applications.
# E. Analysis of Embedding-Based Filtering
Embedding-based filtering plays a crucial role in enhancing system efficiency by pruning candidate pairs. However, this process may inadvertently exclude true positive pairs, thereby affecting the overall system recall. This section analyzes the trade-off between computational efficiency and recall across various embedding similarity thresholds.
We conduct the following comparison experiments to validate our design choices for EmbLLM: (1) standard InfoNCE. We compare decoupled InfoNCE objective with the standard InfoNCE objective [56], which is formulated as:
$$
\mathcal { L } _ { \mathrm { s t a n d a r d } } = - \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \log \frac { \sum _ { j \in \mathcal { P } ( i ) } \exp ( \sin ( e _ { i } , e _ { j } ) / \tau ) } { \sum _ { k = 1 } ^ { N } \exp ( \sin ( e _ { i } , e _ { k } ) / \tau ) } ,
$$
Fig. 5. Comparison of EmbLLM performance under different settings: (a) comparison between our decoupled InfoNCE objective and the standard InfoNCE objective; (b) comparison between our CIPE strategy and the EPE strategy; and (c) ablation study of the decoupled InfoNCE objective with and without the loss term $\mathcal { L } _ { i }$ . In all plots, the $\mathbf { x }$ -axis represents recall (higher is better), and the $\mathbf { y }$ -axis indicates the number of candidate pairs per document (lower is better).
where $N$ is the number of numerical mentions in a batch and $\mathcal { P } ( i )$ represents the set of positive numerical mentions for the $i$ -th mention. We treat each mention as a positive mention of itself, ensuring that every mention has at least one positive mention. Standard InfoNCE treats all mentions equally and double-counts isolated mention pairs, which may disproportionately focus on distinguishing isolated mentions rather than bringing positive pairs closer. (2) Extractive parallel encoding (EPE). We compare our contextualized instructional parallel encoding (CIPE) strategy with the extractive parallel encoding (EPE) strategy [29], which directly encodes the table context and uses the embedding of the last token of each numerical mention as its numerical representation, without adding additional prompt and mention tokens in our method. (3) Decoupled InfoNCE w/o. $\mathcal { L } _ { i }$ . We remove the loss term $\mathcal { L } _ { i }$ in Equation 5.
Using 126 test set documents as benchmark, we vary the embedding similarity threshold from 0.1 to 0.9, measuring both the remaining candidate pairs per document and the recall. As illustrated in Figure 5, more aggressive filtering (higher threshold) reduces candidate pairs but potentially decreases recall by filtering out true positives. A practical balance between recall and efficiency lies in the lower-right region, indicating relatively high recall while keeping the number of remaining candidate pairs low.
Our analysis yields the following key findings:
• Superior performance of decoupled InfoNCE: The decoupled InfoNCE objective consistently outperforms the standard InfoNCE objective (Figure 5a), producing fewer candidate pairs to achieve equivalent recall levels. This advantage is especially pronounced at higher recall settings, which are critical for practical applications. At a $9 5 \%$ recall level, our decoupled objective outputs only one-tenth of the candidate pairs compared to the standard objective. This substantial reduction would deliver a nearly $1 0 \times$ speedup in both training and inference for the downstream ClsLLM module, significantly enhancing overall efficiency. Additionally, removing the loss term $\mathcal { L } _ { i }$ (Figure 5c) results in a marked performance decline, underscoring the critical role of $\mathscr { L } _ { i }$ in effectively pushing apart isolated mentions.
CIPE outperforms EPE: The CIPE strategy consistently surpasses EPE across all recall levels (Figure 5b). At a $9 5 \%$ recall level, our CIPE strategy outputs only twothirds of the candidate pairs compared to the EPE strategy. This improvement is likely because our proposed CIPE strategy employs an instruction format that is more closely aligned with the training paradigms of LLMs, such as instruction tuning [40].
# F. Analysis of CNAP
As described in Sections IV-B, our proposed Cross-Table Numerical Alignment Pretraining (CNAP) method consistently boosts overall performance without requiring manual annotations. In this section, we investigate the contributions of two key components to CNAP’s effectiveness: (1) the additional pretraining process and (2) the advanced pretraining sequence construction strategy. Specifically, we compare CNAP with a Reading Order-aware PreTraining (ROPT) strategy, which is widely adopted as a robust recipe for pretraining generative language models [21], [57], [58]. For each document, ROPT constructs pretraining sequences using the same tables as CNAP but traverses them following the document’s reading order. The training recipe for ClsLLM remains identical between ROPT and CNAP. We employ the 1.5B backbone to compare these methods and report overall F1 scores.
Fig. 6. Overall F1 scores of various pretraining strategies (1.5B parameters) evaluated across three document types.
As shown in Figure 6, ROPT improves over the baseline without pretraining by 0.4, 0.8, and 0.2 F1 points on auditor’s reports, IPO prospectuses, and annual reports, respectively, demonstrating that leveraging additional training resources for pretraining on disclosure documents consistently enhances model performance. CNAP consistently outperforms ROPT across all three document types, with further F1 score improvements of 0.3, 0.4, and 0.1 points on auditor’s reports, IPO prospectuses, and annual reports, respectively. This performance gap can be attributed to the fact that tables, when processed separately following reading order, fail to provide sufficient supervision for identifying semantically equivalent numerical pairs. | Numerical consistency across tables in disclosure documents is critical for
ensuring accuracy, maintaining credibility, and avoiding reputational and
economic risks. Automated tabular numerical cross-checking presents two
significant challenges: (C1) managing the combinatorial explosion of candidate
instances at the document level and (C2) comprehending multi-faceted numerical
semantics. Previous research typically depends on heuristic-based filtering or
simplified context extraction, often struggling to balance performance and
efficiency. Recently, large language models (LLMs) have demonstrated remarkable
contextual understanding capabilities that helps address C2 at the instance
level, yet they remain hampered by computational inefficiency (C1) and limited
domain expertise. This paper introduces CoFiTCheck, a novel LLM-based
coarse-to-fine framework that addresses these challenges through two sequential
stages: embedding-based filtering and discriminative classification. The
embedding-based filtering stage introduces an instructional parallel encoding
method to efficiently represent all numerical mentions in a table with LLMs, as
well as a decoupled InfoNCE objective to mitigate the isolated mention problem.
The discriminative classification stage employs a specialized LLM for
fine-grained analysis of the remaining candidate pairs. This stage is further
enhanced by our crosstable numerical alignment pretraining paradigm, which
leverages weak supervision from cross-table numerical equality relationships to
enrich task-specific priors without requiring manual annotation. Comprehensive
evaluation across three types of real-world disclosure documents demonstrates
that CoFiTCheck significantly outperforms previous methods while maintaining
practical efficiency. | [
"cs.CL"
] |
# 1 Introduction
Deep learning has revolutionized medical imaging by delivering substantial gains in segmentation accuracy, lesion detection, and diagnostic classification across radiology and endoscopy.[1–3] Yet, translating these advances into surgical domains remains challenging due to the complex, dynamic environment of the operating room, where lighting variations, instrument occlusions, and rapid tissue deformations complicate robust model performance [4–6]. In particular, intraoperative bleeding, an event that can threaten patient safety if not detected promptly, poses a unique challenge: bleeding patterns vary unpredictably across patients, anatomical sites, and surgical techniques [7, 8]. Compounding these technical hurdles are severe data limitations [9], as acquiring and annotating high-fidelity surgical videos is resource-intensive, ethically constrained, and often restricted by patient privacy regulations [10].
To mitigate clinical challenges and data limitations [11, 12], our laboratory previously developed a multilayer silicone-based mimicking organ system capable of reproducing realistic bleeding under endoscopic imaging conditions [13]. By carefully layering silicone substrates and embedding colored fluid channels, we generated a dataset of annotated bleeding events that enabled the training of a Bleeding Alert Map (BAM) model with promising localization accuracy on ex vivo and in vivo test cases. Despite these successes, the manual fabrication process, which requires precise control of layer thickness, pigmentation, and channel geometry, demands several hours per sample, resulting in limited anatomical diversity and prohibitive scaling costs for larger datasets.
Addressing this bottleneck, we propose a structured data augmentation framework that seamlessly orchestrates generative modeling, relational positional encoding, automated label extraction, and inpainting into a unified pipeline for synthetic surgical image creation. By embedding anatomically plausible bleeding coordinates within a modified StyleGAN3 generator, extracting these coordinates via an automated detection algorithm, and applying advanced inpainting to remove residual artifacts, our approach yields high-quality, artifact-free images annotated with precise pointcoordinate labels. This scalable pipeline overcomes ethical and logistical barriers to data acquisition and enables training of localization models under severe data scarcity. Experimental validation demonstrates that models trained on our synthetic data outperform those trained using conventional augmentation techniques, highlighting the potential of our method for advancing surgical AI applications.
# 2 Current Scenario & Related Work
The integration of AI into medical imaging has revolutionized diagnostics, treatment planning, and patient monitoring [1, 14]. However, the application of AI, particularly deep learning, in medical imaging is constrained by the need for large, diverse, and accurately labeled datasets [15]. In surgical imaging, these challenges are compounded by the invasive nature of data collection, ethical restrictions, and the inherent complexity of operative scenes [16].
# 2.1 Challenges in Medical Image Data Acquisition and Labeling
Medical imaging data acquisition faces numerous challenges, primarily due to strict privacy protections, regulatory constraints, and the necessity of specialized expert annotations [17]. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) place significant barriers on data sharing practices, thus complicating dataset compilation and dissemination [18]. Additionally, medical image annotation demands specialized expertise, rendering the annotation process both expensive and time-intensive [19]. In surgical contexts, acquiring data is further complicated by dynamic intraoperative variables including variations in patient anatomy, surgical technique, lighting conditions, smoke interference, and instrument presence, each of which introduces significant variability [20]. Furthermore, obtaining precise ground truth annotations (e.g., exact bleeding locations) in real-time surgical conditions is challenging, resulting in datasets that are typically small, imbalanced, and unrepresentative of the entire scope of surgical complexity [21, 22].
# 2.2 Use of Physical Phantoms and Mimicking Organs
To mitigate data scarcity and overcome ethical concerns, researchers have developed physical phantoms or ”mimicking organs”, which replicate human tissue properties for imaging studies and surgical training applications [23, 24]. These models are often fabricated using materials such as silicone, hydrogels, or 3D-printed polymers to simulate the mechanical, optical, and acoustic properties of human tissues. For instance, [13] developed multilayer silicone-based mimicking organs to perform controlled bleeding simulations, generating images coupled with accurate ground truth annotations. While mimicking organs reduces ethical barriers and facilitates experimental reproducibility, their production remains costly and labor-intensive, requiring detailed layering and coloring techniques to achieve realistic textures [13, 25]. Furthermore, these models might not fully capture the anatomical variability and pathological complexity observed in real patients [26]. Practical constraints limit the volume and diversity of image data generated, and the lack of spontaneous biological variation can also hinder the representativeness and generalizability of resulting datasets [27].
# 2.3 Generative Adversarial Networks (GANs) in Medical Image Synthesis
Generative Adversarial Networks (GANs), introduced by Goodfellow et al. [28], have demonstrated effectiveness in generating synthetic yet realistic images through adversarial learning between generator and discriminator networks. Within medical imaging, GANs have been utilized extensively to expand datasets, rectify class imbalances, and synthesize images representing rare medical conditions [29]. For example, GAN-generated liver lesion images have significantly enhanced the performance of classification models [30], and synthetic brain MRI images produced by GANs have improved segmentation accuracy [31]. Despite these successes, applying GAN-based methods to medical imaging must ensure not only visual realism but also anatomical correctness and clinical validity, requirements that present considerable challenges [32]. Moreover, GAN training is often prone to instability and mode collapse, complicating their consistent application and requiring careful balancing between generator and discriminator [33].
# 2.4 Advancements of StyleGAN and Its Role in Medical Imaging
To address some traditional GAN limitations, StyleGAN and its subsequent iterations were developed. StyleGAN2 and StyleGAN3 introduced style-based architectures that provide enhanced fine-grained control over the synthesis process, significantly improving training stability and reducing visual artifacts [34, 35]. In medical imaging, StyleGAN has successfully generated synthetic histopathological images for augmenting cancer detection datasets [36] and improved diabetic retinopathy classification through retinal image synthesis [37]. Despite these achievements, deploying StyleGAN specifically in surgical contexts remains challenging due to the inherent variability, dynamic interactions between tissues and surgical instruments, and significant domain differences between surgical and traditional medical imaging scenarios [38, 39].
# 2.5 Synthetic Data Generation for Surgical Applications
Generating synthetic surgical images represents a critical need given ethical constraints and the practical difficulties of obtaining extensive real surgical data [40]. Existing research primarily utilizes GAN-based methods targeting specific tasks such as instrument segmentation and workflow analysis; however, these methods frequently struggle to render realistic tissue textures and accurately depict instrument-tissue interactions [41–43]. The specific task of bleeding detection, essential yet largely underexplored in synthetic data generation, still relies predominantly on handcrafted features or conventional computer vision approaches, often resulting in high false-positive rates due to variability in lighting and tissue appearances [44, 45]. These limitations underscore an urgent need for synthetic surgical images that realistically depict clinical bleeding scenarios and associated annotations [46].
Fig. 1 Generated Sample Images for Various Versions of GAN models within orGAN system
# 2.6 Limitations and Gap Analysis of Current Synthetic Data Approaches
Despite encouraging progress, current methods in synthetic data generation for surgical imaging exhibit several significant shortcomings. Firstly, most methods primarily focus on image generation without simultaneously generating corresponding ground truth annotations, limiting their applicability for supervised AI tasks, particularly those requiring precise localization, such as bleeding detection [47, 48]. Secondly, current synthetic images often lack sufficient realism and diversity to capture the full complexity and variability inherent to real surgical environments, including different organ anatomies, pathological conditions, and intraoperative dynamics involving surgical tools [49]. Another critical gap involves embedding precise positional information (e.g., bleeding locations) during the synthetic image generation process, a capability not adequately addressed by existing methodologies [48].
Additionally, standardized evaluation metrics designed specifically for the medical domain are notably lacking. Traditional image quality metrics fail to capture clinical relevance adequately, leading to insufficient validation of synthetic images’ diagnostic value [50]. Finally, ethical considerations and potential biases introduced through the generation and use of synthetic data remain under-addressed, posing challenges for fair, effective clinical deployment of AI models.
These highlighted gaps collectively point to the urgent necessity for novel synthetic image generation approaches explicitly designed for surgical applications. Such approaches must simultaneously produce high-quality synthetic images with accurate, embedded ground truth annotations, fully capture the intricate complexity of surgical scenes, and employ robust, domain-specific validation metrics. Addressing these critical limitations is paramount to developing AI models capable of robust performance and generalization in real-world surgical scenarios.
# 3 Contributions of This Research
In response to previously discussed challenges, we introduce orGAN, a multi-stage GAN-based framework specifically developed to generate synthetic surgical images annotated precisely for bleeding detection. Our core contributions include:
1. Novel GAN Framework with Embedded Positional Labeling: We integrate Relational Positional Learning (RPL) into a modified StyleGAN3 architecture, embedding accurate bleeding coordinates into synthetic images. These annotations are reliably extracted using our proposed Surgical Label Detection Algorithm (SLDA), creating ready-to-use annotated data for training localization models.
2. Artifact-Free Image Generation through Optimized Inpainting: Leveraging advanced LaMa-based image inpainting, we effectively remove embedded labels post-extraction, ensuring realistic, artifact-free images suitable for diverse surgical applications beyond bleeding detection.
3. Empirical Validation and Ethical Scalability: Extensive experimentation demonstrates significant performance improvements, achieving approximately $9 0 \%$ accuracy when synthetic orGAN data is combined with real data. This strategy addresses ethical challenges by reducing reliance on patient data, animal experiments, and manual annotations, establishing a scalable benchmark for surgical image synthesis.
4. Ethical Scalability and Benchmarking Synthetic Surgical Imaging: By substantially reducing reliance on real patient data and animal experimentation, orGAN provides an ethically sound and scalable solution to data scarcity. Moreover, the framework establishes new benchmarks in synthetic medical image generation, facilitating further advances in surgical AI applications.
# 4 Methodology
We employ the mimicking organ dataset [13] as our primary source for predicting bleeding locations in real intraoperative bleeding scenarios. Our proposed framework, referred to as $o r G A N$ , combines the strengths of Generative Adversarial Networks (GANs), Relational Positional Learning (RPL), a Surgical Label Detection Algorithm (SLDA), and advanced inpainting techniques to expand and refine the dataset.
# 4.1 orGAN System
The term ‘orGAN’ reflects our core objective of generating organ-like synthetic images for medical applications, while drawing attention to the indispensable role of GANs in our pipeline. Traditional approaches often require labor-intensive physical setups or purely manual generation of synthetic data. By contrast, orGAN automates this process with a multi-stage GAN pipeline, effectively discovering and recreating the complex variability hidden within the underlying dataset [51, 52].
Through the orGAN pipeline, we substantially enlarge the scope of synthetic data generation, producing organ-like images annotated with clinically relevant features. This approach not only conserves resources but also paves the way for robust, large-scale medical image datasets necessary for developing AI-based surgical support systems.
Fig. 2 Overview of the proposed orGAN framework illustrating synthetic data generation, label embedding, extraction via SLDA, and subsequent inpainting to create realistic surgical training data.
# 4.2 Generating Synthetic Dataset: The Mimicking Organ Setup
High-quality medical datasets are notoriously difficult to obtain due to various logistical, ethical, and privacy-related constraints. To address these challenges, prior studies have demonstrated that artificial organs made of layered silicone can closely replicate real tissues, including the realistic appearance of hemorrhagic events [13]. These ”mimicking organs” are meticulously crafted, layer by layer, with accurate textures and colors, ensuring fidelity to true surgical scenes under an endoscope.
By leveraging these mimicking organs, we can systematically induce bleeding events under controlled conditions, facilitating the acquisition of images that mimic authentic intraoperative environments. As a result, we collect a diverse range of bleeding patterns without ethical hurdles or limitations on patient availability. This cost-effective strategy guarantees consistent image quality and standardized labeling, supporting robust AI training. The final evaluation of our system still targets real intraoperative bleeding scenarios, but the labeled training data primarily comes from these realistic, ethically sourced organ replicas.
# 4.3 GAN over Mimic Data
Although data from mimicking organs are remarkably close to real surgical scenes, their finite production can restrict both coverage and variability. Generative Adversarial Networks (GANs) offer a powerful solution for augmenting and diversifying such datasets. In particular, structure-aware variants (SA-GAN) better maintain the geometry and arrangement of irregular anatomical features [9], thus helping the generator produce images consistent with real-world organ complexity.
# 4.3.1 Training StyleGAN Models
We performed initial experiments by training StyleGAN2 and StyleGAN3 on our mimic-organ-derived image set. StyleGAN2 is reputed for its ability to generate highfidelity images while preserving a well-structured latent space [34, 38, 53]. Despite promising early trials, it occasionally showed inconsistencies that required vigilant monitoring, especially regarding temporal coherence.
To address these issues, we adopted a two-phase training approach, in which the first phase (PI) focused on establishing stable and consistent image generation, while the second phase (PII) aimed to improve robustness and ensure better generalization of the model.
Subsequent experiments using StyleGAN3 demonstrated enhanced temporal consistency and fewer aliasing artifacts, which are particularly important in medical imaging applications. Based on these observations, we selected the StyleGAN3 Phase II (SG3 PII) model for downstream components in our pipeline due to its visual stability and domain adaptability.
# 4.4 Inception Score
The Inception Score [54] evaluates both image quality and output diversity. It utilizes a pre-trained Inception network to classify generated images, assigning a high score when the per-image conditional class distribution is sharp (indicating realistic content) and when the overall marginal distribution is broad (reflecting diversity). Formally,
$$
\mathrm { I S } = \exp \bigl ( \mathbb { E } _ { x \sim p _ { g } } \left[ D _ { \mathrm { K L } } ( p ( y | x ) \| p ( y ) ) \right] \bigr )
$$
where $\mathbf { x }$ denotes a generated sample, $p ( y \mid \mathbf { x } )$ is the conditional class distribution for $\mathbf { x }$ , $p ( y )$ is the marginal class distribution, and $D _ { \mathrm { K L } }$ is the Kullback–Leibler divergence.
# 4.5 RPL – Relational Positional Learning
# 4.5.1 Concept of RPL
Relational Positional Learning (RPL) directly embeds label coordinates (e.g., bleeding points) into the generated images, providing explicit spatial cues. This step is particularly valuable for medical applications that demand accurate localization of clinically relevant features such as hemorrhage sites.
Fig. 3 Labeled organ images generated by the orGAN system. White arrows highlight color spreading outside the designated label boundaries.
# 4.5.2 Implementation of RPL
To implement RPL, each synthetic image is accompanied by coordinates marking important features (e.g., “X” marks for bleeding labels). Let $I ( x , y )$ represent the grayscale or RGB intensity at pixel $( x , y )$ . We augment each pixel by appending $( x , y )$ , forming:
$$
I ^ { \prime } ( x , y ) = ( I ( x , y ) , x , y )
$$
The GAN’s generator $G$ thus learns to produce both high-fidelity textures and corresponding spatial relationships, while the discriminator $D$ evaluates not only realism but also positional correctness.
# 4.5.3 Modified Loss Function
Let ${ \mathcal { L } } _ { \mathrm { G A N 1 } }$ be the adversarial loss and $\mathcal { L } _ { \mathrm { G A N 2 } }$ the positional-consistency loss introduced by RPL. The total loss becomes:
$$
\mathcal { L } = \mathcal { L } _ { \mathrm { G A N 1 } } + \lambda \mathcal { L } _ { \mathrm { G A N 2 } }
$$
where $\lambda$ adjusts the importance of spatial alignment versus image realism.
# 4.5.4 Transfer Learning and Phase Training
To expedite convergence and stabilize training, we initialize the RPL network using weights from the best-performing StyleGAN3 Phase II (SG3 PII) model. This transfer learning setup allows the network to focus on relational encoding, building on a generator already proficient in anatomical structure synthesis. This is analogous to the final tuning stage of radial basis function (RBF) networks [55], where structural parameters are fixed and only final layers are refined.
The resulting model shows substantial improvement in spatial label fidelity, particularly for bleeding localization tasks like generating Bleeding Alert Maps (BAMs) [13].
# 4.5.5 Challenges and Optimization
Although RPL supports embedding multiple spatial markers, its performance degrades with dense labeling. We observed color leakage, particularly when using red markers, due to overlap with tissue tones. Switching to black (RGB: 0,0,0) significantly improved label contrast and reduced pixel interference. As evidenced in Figure 3, using fewer, high-contrast markers yielded better performance in SLDA and downstream analyses.
# 4.6 Surgical Label Detection Algorithm (SLDA)
# 4.6.1 Purpose
The Surgical Label Detection Algorithm (SLDA) is designed to extract bleeding point coordinates from GAN-generated images. These points are embedded during RPL using “X” markers, and SLDA provides an automated, accurate retrieval mechanism. Segment Map Generator (SMG) is a part of SLDA that processes and extracts the markers.
# 4.6.2 Mathematical Formulation
Let $I$ be an input image, and $\mathcal { T }$ the set of all images. SLDA is a mapping $B : \mathcal { T } \to \mathcal { P } ( \mathbb { R } ^ { 2 } )$ where $\mathcal { P } ( \mathbb { R } ^ { 2 } )$ is the power set of 2D coordinate space, and the output is a set of bleeding label locations $\mathcal { C } \subset \mathbb { R } ^ { 2 }$ .
# 4.6.3 Algorithm Steps
1. Image Filtering: Images with missing, poorly visible, or low-quality labels are discarded using automated heuristics, yielding $>$ 99% accuracy. This was validated via manual inspection [56].
2. Thresholding: Convert $I$ to a binary image $B$ using threshold $T$ :
$$
B ( x , y ) = { \left\{ \begin{array} { l l } { 1 } & { { \mathrm { i f ~ } } I ( x , y ) \geq T } \\ { 0 } & { { \mathrm { o t h e r w i s e } } } \end{array} \right. }
$$
3. Morphological Operations: Apply dilation $\mathcal { D }$ followed by erosion $\boldsymbol { \xi }$ to clean up noise: $B ^ { \prime } = \mathcal { E } ( \mathcal { D } ( B ) )$
4. Contour Detection: Identify contours in the processed binary mask: $\begin{array} { r l } { \mathcal { C } } & { { } = } \end{array}$ Contours( $B ^ { \prime }$ )
5. Centroid Calculation: For each contour $c \in { \mathcal { C } }$ with $N$ points $( x _ { i } , y _ { i } )$ , compute its centroid:
$$
( x _ { c } , y _ { c } ) = \left( \frac { 1 } { N } \sum _ { i = 1 } ^ { N } x _ { i } , \frac { 1 } { N } \sum _ { i = 1 } ^ { N } y _ { i } \right)
$$
6. Output: Return the set of centroids as the label locations: ${ \mathcal C } = \{ ( x _ { c } , y _ { c } ) ~ | ~ c \in$ Contours( $B ^ { \prime }$ )}
# 4.7 Image Inpainting
# 4.7.1 Purpose
After SLDA extracts the coordinates of bleeding point labels (e.g., “X” marks), we remove these visible artifacts from the images to obtain realistic, label-free synthetic images. This is essential for training segmentation or classification models that require clean visual input without overlaid annotations.
# 4.7.2 LaMa Inpainting Architecture
We employ LaMa, a state-of-the-art image inpainting architecture that combines convolutional encoders with fast Fourier convolution (FFC) layers [57, 58]. LaMa is known for its ability to seamlessly fill large masked regions while preserving structural and textural consistency. We fine-tune the pre-trained LaMa model on our mimicking organ dataset to optimize label removal under surgical domain constraints.
# 4.7.3 Inpainting Process
# Mask Generation:
First, a binary mask $M$ is created over the label regions detected by SLDA:
$$
M ( x , y ) = { \left\{ \begin{array} { l l } { 1 } & { { \mathrm { i f ~ } } ( x , y ) \in { \mathrm { l a b e l ~ r e g i o n } } } \\ { 0 } & { { \mathrm { o t h e r w i s e } } } \end{array} \right. }
$$
# Image Restoration:
The masked image $I$ , along with the binary mask $M$ , is passed into the LaMa model to produce a clean inpainted image: ${ \cal I } _ { \mathrm { c l e a n } } = \mathrm { L a M a } ( I , M )$
This process eliminates embedded markers while preserving natural textures, ensuring that the resulting images are visually indistinguishable from unannotated real surgical images. These inpainted outputs are then suitable for training downstream medical AI models.
# 5 Experimental Results
This section details the experimental evaluation of the proposed orGAN system, focusing on the quality of synthetic image generation, the precision of the integrated Relational Positional Learning (RPL) and Surgical Label Detection Algorithm (SLDA) pipeline, the effectiveness of the LaMa-based image inpainting, and the overall system’s impact on the downstream task of Bleeding Alert Map (BAM) generation. Performance is further validated using actual surgical video datasets.
# 5.1 Performance & Analysis of GAN Models
The performance of StyleGAN2 (SG2) and StyleGAN3 (SG3) models, trained on the mimicking organ dataset across two training phases (PI and PII), was compared using Inception Score (IS) and model size.
Among the models tested, SG2 PI and SG2 PII achieved Inception Scores (IS) of 1.64 and 2.22, with model sizes of 7.8 GB each. SG3 PI and SG3 PII yielded IS scores of 2.21 and 2.24, respectively, with model sizes of 8.5 GB. IS acts as a quantitative comparison between the generated images, when compared with the base Inception model. ImageNet-trained Inception-v3 is out-of-domain for endoscopic scenes, hence IS should be read relatively, not on the 1-to-10 natural-image scale. Based on these results, SG3 PII was selected for use in subsequent components of our pipeline due to its combination of the highest IS and stable generative performance, particularly in terms of visual consistency and temporal coherence, which are crucial for medical imaging applications.
# 5.2 Evaluation of Marker Color Configurations in RPL-based Label Extraction
To determine the optimal configuration for extracting clean, unlabeled images from those generated with embedded markers, we evaluated the performance of the system under multiple label color conditions. Specifically, we assessed how the choice of marker color affected both the image generation quality and training efficiency.
We tested four marker color configurations—green, red, dotted green, and black. The Inception Scores (IS) for each were as follows: green achieved 2.16, red 1.93, dotted green 2.02, and black 2.27. In terms of training time, the black marker required only 41 hours, which was substantially less than dotted green (312 hours) and green (175 hours), while being moderately slower than the red marker (21 hours), given its superior quality. The difference in training time can be attributed to the model’s difference in ease of learning the features within a single GPU for each marker type, which was noted to differ vastly based on the color of the marker. The models were trained until satisfactory visuals were noted, along with consistent positive feedback from the metrics.
Based on these results, the black marker configuration was selected for use in subsequent experiments, as it demonstrated the best trade-off between generation quality and computational efficiency.
# 5.3 Image Inpainting Results
By utilizing the customized LaMa model for image inpainting, we effectively removed the embedded labels from the images without introducing noticeable artifacts.
We measured the Structural Similarity Index Measure (SSIM) between the inpainted images and the original images (without labels). The average SSIM score was 0.98, indicating a high degree of similarity and confirming the effectiveness of the inpainting process.
# 5.4 Evaluation of properties within orGAN Images
To evaluate the effectiveness of the orGAN model, we randomly selected 5,000 images from both the orGAN-generated dataset and the original mimicking organ dataset. For each image, key statistical properties were computed and subsequently visualized in the following figures 4 and 5.
Fig. 4 Variance vs. Mean Graph Comparison
Figure 4 displays the relationship between image brightness and contrast by plotting the mean pixel intensity against the variance of pixel intensities. The similar central tendencies of the orGAN-generated images and original images indicate that the overall brightness levels are well preserved, while the observed variance confirms that the synthetic images exhibit an acceptable degree of contrast variation, suggesting that orGAN is capable of generating novel images within the same domain.
Figure 5 presents a kernel density estimation (KDE) of the variance distribution, providing a smooth representation of contrast dispersion. The original dataset exhibits a bimodal distribution, which may reflect inherent subpopulations in image contrast, whereas the orGAN-generated images display a more uniform density, indicating that the GAN model is learning and synthesizing images with a consistent range of contrast values.
Collectively, these evaluations demonstrate that the orGAN-generated images not only adhere to the statistical properties of the original mimicking organ images but also enrich the dataset with additional spatial detail, which is crucial for improved bleeding detection in surgical applications.
# 5.5 Bleeding Alert Map (BAM) Generation and Evaluation
In this study, we evaluated the efficacy of the orGAN system in estimating bleeding locations during endoscopic surgeries, a task complicated by the difficulty of obtaining labeled datasets.
Fig. 5 Kernel Density Estimation (KDE) of Variance.
# 5.5.1 BAM Generation Using Different Training Datasets
We trained three BAM models using NVIDIA’s Pix2PixHD architecture [59] on datasets with varying ratios:
• Original $1 0 0 \%$ : Only mimicking organ images.
• orGAN $1 0 0 \%$ : Only images generated by orGAN.
• $5 0 \% { : } 5 0 \%$ Blend: An equal mix of the two datasets.
Each model was trained for around 100 epochs. Figure 6A showcases the BAM outputs on two randomly selected test images not used in the training process.
# 5.5.2 Analysis of BAM Results
As shown in Figure 6A, the model trained on the 50%:50% blended dataset produced the most accurate BAMs, effectively identifying bleeding sources. The model trained exclusively on the original dataset performed adequately but less consistently, while the model trained solely on the orGAN dataset failed to produce precise BAMs.
# 5.5.3 Quantitative Evaluation Using SSIM
Figure 6B presents the average SSIM scores for each model. The 50%:50% blended model achieved the highest average SSIM score of 0.912, outperforming the other models.
Fig. 6 (A) Results of the BAM generated when training with varying ratios of datasets produced by orGAN (orGAN dataset) and primary datasets derived from mimicking organs (original). As input images, data from mimicking organs not used in the training process were utilized. (B) The average SSIM score, a measure of accuracy for the generated BAM, is shown. The error bars represent the standard error.
# 5.6 Validation of Dataset Performance Using Actual Surgical Videos
To evaluate the efficacy of BAM in detecting bleeding in real surgical scenarios, we employed a subset of two publicly available datasets.
# 5.6.1 Surgical Scene Datasets
The Hamlyn 1 Dataset [60] consists of dissection recordings of swine diaphragms. These videos were captured at a resolution of 640 by 480 pixels and recorded at 30 frames per second. For our evaluation, we selected 500 consecutive frames with minimal smoke interference.
The Hamlyn 2 Dataset [61] comprises recordings of totally endoscopic coronary artery bypass graft (TECAB) procedures performed on human hearts using roboticassisted endoscopy. In this dataset, the videos have a resolution of 348 by 284 pixels, and the initial 500 frames were chosen for examination.
A surgeon with expertise in surgery identified the precise locations of bleeding which serve as the ground truth to evaluate the model’s accuracy. When generating BAM, we used a criterion where features larger than 20 pixels were taken into account as this level of pixel accuracy could guarantee visual support. A 20-pixel blob corresponds to approximately 6–10 mm in our videos—roughly a small bleed a surgeon can act on. A true positive was defined as the presence of BAM at locations where bleeding was apparent. A true negative was defined as the absence of BAM in images where no visible bleeding was observed, even if bleeding points were covered by devices like forceps.
Figure 7 shows the BAM generated using actual surgical scenes as input videos.
Fig. 7 Results of BAM generated using actual surgical scenes as input videos: The Hamlyn 1 dataset consists of dissection scenes of a porcine diaphragm, while the Hamlyn 2 dataset includes scenes from a robotic-assisted totally endoscopic coronary artery bypass graft (TECAB) surgery on a human. The datasets used were as follows: ‘Original $1 0 0 \%$ ’ comprising only mimicking organ images, ‘orGAN $1 0 0 \%$ ’ consisting solely of images generated by orGAN, and $5 0 \% { : } 5 0 \%$ ’ featuring an equal mix of the two datasets.
# 5.6.2 Analysis of Results
The BAM generator trained exclusively on the orGAN dataset failed to produce precise BAMs in actual surgical scenes. When exclusively utilizing the original dataset, BAM could be produced, albeit not for all frames, and the precision notably diminished when the dataset was altered. In contrast, using generated data from orGAN blended with the original dataset from mimicking organs, BAM was successfully generated for almost all frames.
In the orGAN $1 0 0 \%$ group, the accuracy rate of BAM generation was extremely low (0.001 for Hamlyn 1, 0.156 for Hamlyn 2), and relying solely on the original data led to substantial fluctuations in accuracy (0.586 for Hamlyn 1, 0.924 for Hamlyn 2). The group with a 50%:50% blend of orGAN and original dataset data achieved significantly higher performance, achieving accuracy rates of 0.858 (Hamlyn 1) and 0.999 (Hamlyn 2). This highlights the efficacy of blending orGAN-generated data with the original dataset.
Tables 1 and 2 provide a detailed comparison of the training outcomes for various surgical label detection algorithms using different datasets. These tables illustrate the effectiveness of each model configuration across several metrics.
These tables highlight the comparative performance of different configurations and are crucial for understanding the impact of dataset composition on the accuracy and efficiency of the bleeding detection models. It is clear that the BAM model trained with 50:50 orGAN: Original dataset outperforms all other versions, implying the benefit of orGAN-generated datasets in improving a wide array of AI models for medical purposes.
Table 1 Performance for Hamlyn 1 (swine dissection).
Table 2 Performance for Hamlyn 2 (human TECAB).
# 6 Discussion: Limitations and Future Work
Despite the significant advancements presented, several limitations warrant discussion. The generalization of the orGAN system to diverse surgical environments remains uncertain, as the current dataset may not encompass all real-world variations, such as differences in lighting conditions, organ textures, and surgical artifacts. Furthermore, the practical applicability is constrained by the specific type of synthetic organ used, necessitating the development of additional datasets to improve training diversity. The accuracy of labels generated by the relational positional learning (RPL) mechanism and SLDA may also be influenced by noise and annotation inaccuracies, thereby affecting the downstream AI performance.
To address these limitations, future work will focus on (i) expanding dataset diversity to better represent a wide range of surgical scenarios, (ii) improving computational efficiency to reduce overhead and increasing training speed without compromising output quality, and (iii) mitigating synthetic data bias by closely approximating the complexity of real-world clinical data.
Since generated data may ultimately be used in medical training or even clinical applications, ensuring its accuracy and reliability is critical. Errors in synthetic labels or features could lead to downstream misinterpretation or incorrect clinical decisionmaking.
In this study, we investigated the influence of marker color on image generation quality. To further enhance accuracy, future work will explore the impact of additional factors such as the shape and size of markers on SLDA and RPL performance. A systematic evaluation of these attributes is expected to contribute to the generation of more precise labeled data.
In addition to the conventional Inception Score (IS), we plan to incorporate alternative evaluation metrics for more comprehensive assessment. For instance, Conditional Fr´echet Inception Distance (CFID) measures how well generated images conform to conditional input classes. Kernel Inception Distance (KID) offers a more stable alternative to FID, particularly for small datasets. Furthermore, CLIP-based Maximum Mean Discrepancy (CMMD) leverages semantic embeddings and has demonstrated strong correlation with human visual judgment. The integration of these metrics will allow a more nuanced evaluation of generative quality and label reliability.
Enhancements in RPL and SLDA will also be pursued to improve spatial precision and robustness of label extraction. Moreover, establishing ethical guidelines and best practices for the development and dissemination of synthetic and mimicking-organbased data will be essential to ensure transparency, reproducibility, and responsible use. Source code and datasets will be made available upon reasonable request. | Deep learning in medical imaging faces obstacles: limited data diversity,
ethical issues, high acquisition costs, and the need for precise annotations.
Bleeding detection and localization during surgery is especially challenging
due to the scarcity of high-quality datasets that reflect real surgical
scenarios. We propose orGAN, a GAN-based system for generating high-fidelity,
annotated surgical images of bleeding. By leveraging small "mimicking organ"
datasets, synthetic models that replicate tissue properties and bleeding, our
approach reduces ethical concerns and data-collection costs. orGAN builds on
StyleGAN with Relational Positional Learning to simulate bleeding events
realistically and mark bleeding coordinates. A LaMa-based inpainting module
then restores clean, pre-bleed visuals, enabling precise pixel-level
annotations. In evaluations, a balanced dataset of orGAN and mimicking-organ
images achieved 90% detection accuracy in surgical settings and up to 99%
frame-level accuracy. While our development data lack diverse organ
morphologies and contain intraoperative artifacts, orGAN markedly advances
ethical, efficient, and cost-effective creation of realistic annotated bleeding
datasets, supporting broader integration of AI in surgical practice. | [
"eess.IV",
"cs.AI",
"cs.CV"
] |
# 1 Introduction
Egocentric videos, which capture human daily lives from a first-person perspective, are inherently long - often spanning hours to days or even weeks [75]. Understanding these videos is crucial for supporting practical tasks such as memory recall, multi-step activity tracking, and goal monitoring [5, 23, 40]. But the ensuing problem poses significant challenges due to the video length, multi-modality, and the need for long-horizon reasoning across diverse temporal contexts and dependencies.
Recent advances in multimodal long-context modeling have led to promising progress, extending video understanding capabilities from minutes to hours [7, 29, 81, 82]. However, these models still face significant computational challenges and scale poorly when applied to videos of extended durations, such as those spanning a day or longer. To this end, prior works have proposed token compression [25, 52–54, 68] or sampling-based strategies that reframe video understanding as a temporal retrieval task [48, 79]. Nevertheless, these approaches risk missing key events due to the lossy representations or incomplete temporal localization. Another line of works, commonly referred to as video agents, leverages external language models as high-level control and reasoning entities to call specialized vision modules/tools for video reasoning [63, 79, 84]. While allowing more flexible and more granular perception, these approaches still rely on predefined reasoning pipelines or fixed-order tool invocations, limiting the video lengths they can handle, i.e., up to hour-long.
To address these limitations, we propose Ego-R1, a novel framework that leverages fine-tuned large language models (LLMs) and reinforcement learning (RL) for dynamic tool-driven reasoning of ultra-long (i.e., in days and weeks) egocentric videos. The key distinction from prior video agents [63, 79, 84] designed for long-form video understanding is the dynamic tool calling of our Ego-R1 Agent, which iteratively processes both visual information and contexts to select and execute specialized perception tools on demand, based solely on previously observed content and thought to preceding sub-questions. We call such a video understanding paradigm Chain-of-Tool-Thought (CoTT) reasoning. Furthermore, unlike traditional methods that either feed the entire video to the model or select a subset of the frames, Ego-R1 utilizes a structured toolkit for perception which consist of three core modules designed specifically to facilitate efficient temporal retrieval and detailed visual comprehension. For retrieval, Hierarchical Retrieval-Augmented Generation (H-RAG) extracts timestamped, question-relevant information in the language space. For visual analysis, a specialized Video-LLM interprets localized visual contexts, while a general-purpose Vision-Language Model (VLM) extracts fine-grained visual details. Coordinated by an orchestrating LLM trained through RL, Ego-R1 enables scalable, step-by-step compositional reasoning over ultra-long videos. The modular design of our framework enables easy integration with a wide range of state-of-the-art visual understanding models, allowing the visual perception components, i.e., the Video-LLM and VLM, to seamlessly integrate into our framework.
To facilitate the training of Ego-R1, which consists of a supervised fine-tuning (SFT) stage and an RL stage, we construct Ego-R1 Data, a comprehensive hybrid-source dataset consists of 25K CoTT reasoning traces and 4.4K annotated question-answer (QA) instances to support SFT of a pretrained LLM and RL training of our Ego-R1 agent, respectively. Each task within the dataset requires reasoning over substantial temporal spans, with an average of 7.42 tool-calling steps per task. Additionally, we introduce Ego-R1 Bench, a carefully curated evaluation framework consisting of week-long egocentric videos that combine human-annotated and post-verified synthetic data, designed specifically to assess long-horizon reasoning capabilities in the egocentric setting.
Extensive experiments across diverse long-video benchmarks demonstrate that the dynamic, toolaugmented chain-of-thought reasoning by our Ego-R1 Agent can effectively tackle the unique challenges of understanding ultra-long egocentric videos, significantly extending the time coverage from few hours to a week. We also perform ablation studies to replace the visual modules in Ego-R1 to showcase that our framework is customized to integrate current MLLMs scope, validating our method’s robustness and generalization. At last, while we focus on egocentric long videos in this work, we show that our framework generalizes well in the exocentric setting as well.
# 2 Related Work
Egocentric long video understanding. Existing large-scale egocentric datasets such as Ego4D [22], EgoExo4D [23], Epic-Kitchens [10], and HD-Epic [45] have established comprehensive benchmarks [6, 8, 40] focused on temporal understanding of daily activities, object interactions, and episodic memory tasks [12, 19, 31, 49, 55, 57]. While these benchmarks typically span only minutes, recent extensions have reached hours [4, 79] but multi-personal interactions and cross-day behavioral patterns remain unexplored. Recently, EgoLife [75] provides a week-long egocentric dataset; however, its question-answering tasks remain vanilla, lacking requirements for deep visual reasoning. Our benchmark addresses these limitations with more challenging tasks requiring sophisticated reasoning about visual details across diverse scenarios.
Table 1: Comparison between Ego-R1 and other frameworks. Ego-R1 develops an agentic tool-calling schema that enables interpretable reasoning over ultra-long videos while preserving critical temporal information.
While egocentric datasets and benchmarks continue to expand in temporal scope, methods specifically designed for egocentric long video understanding remain absent. As shown in Table 1, existing approaches face critical limitations: proprietary models [1, 58] and some MLLMs [3, 28] usually process videos as unified inputs, which becomes prohibitively token-intensive for hour-long videos; general frame sampling approaches [34, 36, 64, 81, 82] cannot guarantee question-relevant frames selection; and sophisticated video agents [54, 63, 65, 66, 79] analyze frames in isolation, missing narrative structure and temporal dynamics. Though RAG shows promising direction for long video understanding [37, 72], existing approaches often lack contextual specificity for multi-day egocentric videos, where personal routines and social dynamics evolve over time. To address this challenge, our Ego-R1 implements multi-step reasoning upon a hierarchical RAG paradigm, enabling comprehensive understanding of evolving contexts beyond previous single thinking step approach Video-R1 [16]. A detailed qualitative results comparison is shown in Fig. 5.
Multimodal agentic tool-use. Agentic systems with Tool-Integrated Reasoning (TIR) effectively enhance LLMs’ complex problem-solving and reasoning capabilities [44, 78], particularly in mathematical domains [21, 61, 73, 85] through search engines [26, 83] and code interpreters [32, 74, 77]. For training paradigms in tool-integrated learning, RL has emerged as a promising approach offering more scalable and generalizable tool utilization strategies [15, 30, 46, 60], compared to traditional SFT [47, 50]. Recent research has extended tool-augmented foundation models to multimodal domains, exploring the integration of diverse tool-use for visual reasoning tasks [11, 27, 38, 39, 56, 84]. These initial efforts leverage specialized visual perception modules [14, 63], to enhance grounded and context-aware reasoning in complex visual environments [7, 33]. Coinciding with OpenAI’s o3 [43], Ego-R1 Agent employs dynamic tool-calling mechanisms, enabling multi-step reasoning and contextual tool selection, determining the appropriate tool for optimal problem-solving.
CoT reasoning. Chain-of-Thought (CoT) reasoning [67] has emerged as a fundamental mechanism to enhance the reasoning capabilities of both LLM and VLM [35, 59, 62, 70, 71]. RL-based reasoning approaches further require high-quality CoT samples to advance multimodal reasoning capabilities [13, 24, 76, 80]. However, existing datasets lack adequate, high-quality CoT annotations for long video understanding tasks. To fill this gap, we introduce Ego-CoTT-25K, featuring CoT reasoning with dynamic tool-calling capabilities.
# 3 Egocentric Long Video Reasoning via Dynamic Tool-Calling
The egocentric long-video reasoning task represents a crucial frontier beyond understanding, as firstperson perspectives capture complex, temporally interdependent human behaviors over ultra-long durations. Actions that occur many hours or even days apart may be guided by consistent personal strategies and habits; thus, correctly answering a query often relies on recognizing enduring human traits and linking them to cues dispersed across the entire timeline. This requires the models to therefore maintain long-range temporal dependencies, identify subtle evidence in earlier segments, and reason about the actor’s underlying preferences to generate dynamic context-aware solutions.
Although recent MLLMs demonstrate promising performance in general video understanding, they still struggle in answering questions in truly long-context videos with extended temporal relationships. This underscores the importance of egocentric long-video reasoning as a fundamental challenge for multimodal systems. In this section, we introduce Ego-R1, a novel framework that unifies visual content comprehension and contextual reasoning by combining chain-of-thought prompting with dynamic tool calling. We provide a formal task definition in Section 3.1, followed by a comprehensive presentation of our specialized toolkit architecture designed for dynamic tool call in Section 3.2.
# 3.1 Egocentric Long Video Reasoning Tasks
Compared to general exocentric videos, egocentric videos offer continuous, context-rich recordings from a first-person perspective, naturally documenting extensive temporal experiences including daily routines, social interactions, and object manipulations. This unique viewpoint requires sophisticated high-order inference to interpret actions, intentions, and contexts across substantial temporal spans, making it require reasoning models with strong temporal understanding and contextual integration capabilities. This necessitates a flexible reasoning framework that dynamically processes both visual information and contextual details through an intelligent tool-calling mechanism, determining which analytical approaches are most relevant for comprehending complex temporal narratives spanning multiple days of recorded experience.
In our task, we provide egocentric video spanning several days alongside questions posed at a specific query time. The system analyzes all preceding video content to generate accurate responses, simulating human temporal reasoning in real-life scenarios. This tool-based approach enables multimodal reasoning by leveraging contextual information across extended periods, requiring the system to choose optimal tools during the thinking process to effectively integrate perception, memory, and action when generating responses based solely on previously observed content.
# 3.2 Dynamic Tool-Calling
Current MLLMs struggle with extended egocentric content due to limited context windows, inadequate temporal understanding, and insufficient structured reasoning capabilities, preventing effective analysis of long-duration egocentric videos containing sparse events that require multi-step, contextaware interpretation. To address the inherent difficulty posed by the overly long context of long-form egocentric video reasoning, we adopt a dynamic tool-calling framework that empowers the LLM rather than an MLLM to invoke specialized perception tools on demand. Our approach enables the LLM to actively decompose complex queries, selectively retrieve relevant segments, and iteratively perform stepwise reasoning grounded in video observations. This modular design overcomes the context-length bottleneck of MLLMs while enabling the fine-grained, multi-turn reasoning essential for practical egocentric video understanding. Our framework leverages three complementary tools - one text-based and two visual-based - each addressing distinct temporal and perceptual dimensions of egocentric understanding. The text-based hierarchical RAG system handles longer temporal information retrieval, while the visual-based tools (Video-LLM and VLM) perform detailed visual analysis at different visual granularities.
h-rag $\gimel$ : Our hierarchical system efficiently localizes relevant temporal information from the memory bank. Videos are first segmented into 30-second clips, each summarized via a video captioning model and temporally aligned with the ASR results as clip logs. These clip logs are hierarchically aggregated through a bottom-up generation process into multi-level granularity, creating comprehensive temporal summaries. The hierarchical structure facilitates effective top-down inference to locate and retrieve logs of relevant video segments, thus reducing computational load while preserving accuracy and temporal coherence across long egocentric videos spanning days. The system accepts specific search parameters, including temporal granularity, keywords, and time ranges for retrieval, returning the most relevant observations that match the query constraints.
video-llm : Our video-llm is a short-horizon visual-perception module that operates on local temporal windows ranging from a few seconds up to ten minutes. We sample each clip within the proposed time range at 1 FPS, keeping the input size compatible with modern multimodal language models and thus maintaining broad architectural flexibility. Given a question and its corresponding video segment, the tool correlates visual content with temporal context to produce
Raw QA Data Collection CoTT Generation EgoLife 7 Raw Videos Question: Why is it so important to monitor the power D (A1-A6 views) banks and keep their lights on? Gemini Video to Text <timestamp>DAY3_11175412</timestamp> 30s segments of A1 – A6’s log S "view": "A1","date": "DAY4","time": "17240000-17243000", "text": "I saw Shure standing nearby, dressed in a blue hoodie, while Nicous <think> <thoionlk>> THo iaenrsarwcehritchails_qRuAeGs(t"iowne,eI kn"e,ed[t"op…o w<e/trhinbka>nks"], was sitting on a chair playing the guitar, … Her expression was serious, as if ["DAY1_00000000", "DAY3_11175412"]) </tool> she was facing a tricky problem." OBSERVATION: "DAY1": … Technical preparations like <tool> power bank distribution, device checks were addressed … Gemini E Reasoning Chains S
滤 MCRQaPwairs human Annotated <think> <thoionlk>> BaHsierdaornchtihceaol_bRsAerGv(a"tdioany,"I, ne[e"dptow…e r</thibnakn>ks"], 国 ↓\* Claude verificatio MCQ Pairs ["DAY1_00000000", "DAY1_24000000"]) </tool> AI Optimized T human <tool> tOhBeSpEoRwVeAr TbIaOnNk :h"aDsAtYe1c_h1n2ic1a0l0i0ss0u0"e:s…inTcheatregianmg f…ound that
generated MCQ Pairs × 6 annotated √ : ×𝒏 / /Gemini
"Query Time": ["DAY3", "11175412"] final MCQ pairs <think> Based on the above observations, the power bank "Question": "Why is it so important to monitor the power banks and keep their <think> was not plugged in correctly before, results in … Final
lights on?” Answer: B) Prevent data loss and re-recording. </think> A) Avoid losing work time waiting ✅B) Prevent data loss and re-recording S re-generate C) Maintain light proper functioning D) Save battery life for later use "Target Time": ["DAY1", "12193000 - 12200000"] "Reason": "...Shure said, 'That one isn't plugged in… 'Oh no, it's out of 🔠 battery. ' He replied, 'It’s dead already. … ' You need to record it again.'" <answer> Verification
detailed observations that capture dynamic interactions and sequential events, and, when possible, directly answers the query for the specified time range.
vlm : This general-purpose vlm operates at the finest temporal granularity, analyzing individual frames to extract high-resolution details like text on packaging, object attributes or specific visual elements missed in broader video analysis. It augments the temporal reasoning of video-llm with precise visual evidence for comprehensive egocentric understanding.
# 4 Ego-R1 Data: Chain-of-Tool-Thought (CoTT) for Video Reasoning
To unleash the reasoning capabilities of LLM under the CoT prompting paradigm and to enable dynamic tool selection conditioned on current observations and past actions, we introduce Ego-R1 Data, a dataset designed to enable agentic tool-use with Chain-of-Tool-Thought (CoTT) reasoning chains. Figure 2 illustrates the data generation pipeline of the Ego-R1 Data, including raw QA data collection and CoTT generation. In this section, we define the structure of CoTT in Section 4.1, and provide details of Ego-R1 Data generation in Section 4.2.
# 4.1 Chain-of-Tool-Thought (CoTT)
Our goal is to generate synthetic CoTT data and use it to train multi-turn tool-use language models. We define a CoTT data $C$ as a sequence of steps $S _ { i }$ , where each step consists of a thought $\bar { T } _ { i } ^ { \mathrm { t h } }$ , a tool $T _ { i } ^ { \mathrm { t o } }$ , and an observation $o _ { i }$ . A CoTT trajectory is defined as follows:
$$
C = ( S ^ { 0 } , S ^ { 1 } , \ldots , S ^ { n } ) , \quad S ^ { i } = \big ( T _ { i } ^ { \mathrm { t h } } , T _ { i } ^ { \mathrm { t o } } , o _ { i } \big )
$$
where $C$ is a sequence of $n$ reasoning steps. At each step $i$ , the agent will generate a thought $T _ { i } ^ { \mathrm { t h } }$ and a tool call $T _ { i } ^ { \mathrm { t o } }$ based on all the previous steps’ observations $\left\{ o _ { 0 } , o _ { 1 } , \dotsc , o _ { i - 1 } \right\}$ and the query $q$ .
To formalize this reasoning process, we define two essential components that characterize how the agent operates: the action space, which specifies the available tools the agent can utilize, and the observation space, which captures the structured outputs returned from tool executions.
Action space. We define the action space $A = F _ { j }$ as a union of available tools to be used during reasoning. We use the three fundamental tools defined in Section 3.2: 1) h-rag for text-based long-range temporal retrieval, 2) video-llm for short-range video understanding, and 3) vlm for framewise image understanding, plus an auxiliary terminate tool for data generation only. The h-rag tool retrieves relevant information from the current-view knowledge base by querying specified keywords within a target time window. By projecting long videos into a semantically and temporally structured language space, it rapidly pinpoints the approximate temporal interval of an event while summarizing sparse visual cues into a concise textual summary. The video-llm tool analyses short video segments specified by a query and an associated time window, providing detailed interpretations of local visual–temporal content. The vlm tool performs image-level analysis on a single frame selected by timestamp and query, providing precise, frame-specific visual details.
Observation space. At each reasoning step $i$ , the agent receives an observation $\begin{array} { r l } { o _ { i } } & { { } = } \end{array}$ $\left( o _ { i } ^ { \mathrm { r a g } } , o _ { i } ^ { \mathrm { v i d } } , o _ { i } ^ { \mathrm { v l m } } \right) ^ { \mathrm { - } } \in \mathcal { O }$ , where each component $o _ { i } ^ { \mathrm { r a g } } , o _ { i } ^ { \mathrm { v i d } } ,$ $o _ { i } ^ { \mathrm { v l m } }$ represents the output of corresponding tool rag, video-llm, and vlm. The observation space ${ \cal O } = \{ { \cal O } ^ { 0 } , { \cal O } ^ { 1 } , . . . , { \cal O } ^ { n } \}$ encompasses the collection of all tool outputs. Each tool call executes via the parsed arguments, producing observations that guide subsequent reasoning steps.
# 4.2 Data Generation
We carefully curate Ego-R1 Data, comprising 4.4K annotated question-answer pairs sourced from over 500 hours of egocentric videos recorded across six distinct first-person perspectives. We select $2 . 9 \mathsf { K }$ high-quality questions for CoTT generation. For each selected QA pair, we construct a CoTT trace that decomposes the reasoning process into interpretable steps, yielding an average of 7.42 tool calls per task. In total, 25K CoTT traces are generated, and subsequently used during the SFT stage to train our multi-turn tool-use language model.
Ego-QA-4.4K. Long-form egocentric videos are hard to collect in nature. Following the dataset construction pipeline of EgoLifeQA [75], we collected 2.9K high-quality human-annotated data from 6 videos with distinct viewpoints. To expand the dataset scale, we employ proprietary models to analyze Automatic Speech Recognition (ASR) transcripts with video captioning outputs from the 30-second segments. These textual logs were combined and examined across various temporal granularities, spanning single or multiple days, to generate candidate questions with answers. Human annotators subsequently selected and cross-validated those QA pairs using Fleiss’ kappa [17], refining each query and its ground-truth answer according to a unified criteria of rationale coherence, importance, relevance, and difficulty level. In total, Ego-R1 Data comprises 4.4K question-answer pairs from both human-labeled and synthetic data sources.
Ego-CoTT-25K. We develop a systematic CoTT generation system to automatically generate CoTT data based on the selected question-answer pairs. By leveraging proprietary LLMs with longer context windows and stronger instruction-following capabilities, we enable the automatic generation of comprehensive reasoning chains that would otherwise be challenging to produce manually. In the CoTT generation system, each tool is exposed to the model as an executable function whose signature and semantics are implicitly embedded in the system. This design, paired with a textual system prompt (Table 5), prevents parsing errors during execution. The prompt also encodes the current viewpoint identity and enumerates the available tools. Given an input question $q$ , the model iteratively generates reasoning steps $S _ { i } = ( T _ { i } ^ { t h } , T _ { i } ^ { t o } )$ , where $T _ { i } ^ { t h }$ denotes the thought and $T _ { i } ^ { t o }$ denotes the corresponding tool call with fully specified arguments (e.g., time ranges, keywords, sub-questions). All the proposed arguments are validated by a pre-verification module to ensure syntactic correctness. Once a call is emitted, its name and arguments are extracted via special tokens and dispatched to an external server for execution. The returned observation is then fed back to the model, guiding the next step and enabling dynamic, multi-turn tool use for egocentric long-video reasoning.
# 5 Ego-R1 Agent: Towards Tools Integrated Video Understanding Agent
Our goal is to train a language model capable of performing long-form video reasoning via a structured long-chain reasoning schema that automatically invokes multi-turn tool calls to collaboratively solve the problem. Inspired by the recent post-training techniques [9], we design our training framework with a two-stage strategy, with an illustration in Fig. 3.
# 5.1 Stage 1: Supervised fine-tuning (SFT)
In the first stage, we perform SFT on a pretrained language model using the synthetic CoTT dataset. This "cold-start" initialization equips the model with the foundational ability to produce correctly formatted tool calls as prescribed by the CoTT reasoning schema. The CoTT data, presented in a structured, multi-turn conversational format, simulates realistic stepwise tool interactions, explicitly combining natural language reasoning with structured tool invocation. Each step in the reasoning trajectory consists of a thought enclosed within the special token <think>...</think>, followed by either a proposed tool call, enclosed within $< \mathrm { t o o l } > . . . < / \mathrm { t o o l } >$ , or an answer, enclosed with in <answer>...</answer>. The tool call is automatically parsed and executed by an external environment, which then returns an observation. This observation is formatted and fed back into the model as part of the input for the next reasoning step. After fine-tuning, the resulting Ego-R1-SFT model reliably produces well-formed tool calls and coherent step-by-step reasoning, laying the groundwork for subsequent reinforcement learning stage.
Figure 3: Overview of the two-stage training strategies in Ego-R1. Ego-R1 employs a two-stage training approach: Stage 1 utilizes supervised fine-tuning with CoTT data to establish structured tool-calling capabilities, while Stage 2 applies multi-turn reinforcement learning with rule-based rewards to optimize iterative reasoning and tool execution across diverse question types.
# 5.2 Stage 2: Reinforcement learning (RL)
To further improve the multi-turn tool-calling capabilities of our fine-tuned Ego-R1-SFT model, we adopt Gradient-Regularized Policy Optimization (GRPO) [51] to train the model. GRPO optimizes the model to maximize the expected final task reward while regularizing the variance of policy gradients across reasoning steps to encourage stable and coherent decision-making. Specifically, we define the GRPO objective as follows:
$$
\begin{array} { r l r } & { } & { \mathcal { I } _ { \mathrm { G R P O } } ( \theta ) = \mathbb { E } _ { [ q \sim P ( Q ) , \{ \boldsymbol { \sigma } _ { t } \} _ { i = 1 } ^ { G } \sim \pi _ { \theta _ { \mathrm { d d } } } ( \boldsymbol { O } | q ) ] } \biggl [ \displaystyle \frac { 1 } { G } \sum _ { i = 1 } ^ { G } \sum _ { y = 1 } ^ { T } \frac { \lvert { S } _ { i } ^ { y } \rvert } { \lvert { S } _ { i } ^ { y } \rvert } \sum _ { t = 1 } ^ { \lvert { S } _ { i } ^ { y } \rvert } \Bigl \{ \operatorname* { m i n } \left[ \frac { \pi _ { \theta } \left( S _ { i , t } \lvert q , I _ { y } , S _ { i , < t } \right) } { \pi _ { \theta _ { \mathrm { e l d } } } \left( S _ { i , t } \lvert q , I _ { y } , S _ { i , < t } \right) } \hat { A } _ { i , t } ^ { y } , \right. } \\ & { } & { \left. \mathrm { c l i p } ( \frac { \pi _ { \theta } \left( S _ { i , t } \lvert q , I _ { y } , S _ { i , < t } \right) } { \pi _ { \theta _ { \mathrm { d d } } } \left( S _ { i , t } \lvert q , I _ { y } , S _ { i , < t } \right) } , 1 - \varepsilon , 1 + \varepsilon ) \hat { A } _ { i , t } ^ { y } - \beta \mathbb { D } _ { \mathbf { K L } } [ \pi _ { \theta } \lvert \pi _ { 0 } \rvert ] \Bigr \} \right] } \end{array}
$$
In this equation, $\pi _ { \boldsymbol { \theta } }$ represents the policy model that generates reasoning tokens $S _ { i } ^ { y }$ sequentially at turn $y$ , where $i$ denotes the token position. The generation is conditioned on the preceding sequence $S ^ { y } * < i$ , the observation $I _ { y }$ at turn $y$ , and the question $q$ . The final reward $R * { \bar { \mathrm { f i n a l } } } ( C , q )$ evaluates the correctness of the answer at the end of the reasoning chain $C$ . The reference policy $\pi _ { 0 }$ denotes the original model, and the KL divergence term $\mathrm { K L } ( \pi _ { \boldsymbol { \theta } } | \bar { \pi _ { 0 } } )$ regularizes the policy to prevent excessive drift from the initial parameters. The advantage estimates $\hat { A } _ { i , t } ^ { y }$ are computed by standardizing rewards within each group $G$ , subtracting the group mean and dividing by the group standard deviation.
During training, we generate rollout trajectories by sequentially executing tools based on the model’s reasoning outputs, providing realistic stepwise observations that inform subsequent reasoning steps. Each rollout terminates when either a valid final answer is produced or the maximum step limit $N$ is reached. This training procedure enables the model to effectively generalize multi-turn tool usage, reflecting the iterative nature of egocentric long-video reasoning tasks. The resulting model after second-stage reinforcement learning training constitutes our final system, termed the Ego-R1 Agent.
Table 2: Quantitative results on video question-answering benchmarks. The proposed Ego-R1 model demonstrates superior performance across multiple metrics. Bold indicates best performance, underscored values show second best. The results from the 72B version of the model or using less frames are marked in gray. As some of the QA pairs in EgoLifeQA were used for CoTT generation and training, we excluded these from evaluation and retained only a clean subset for fair testing.
# 6 Experiments
# 6.1 Experiment Setup
To evaluate the effectiveness of the CoTT reasoning traces in answering the ultra-long video understanding question, we utilize Qwen-2.5-3B-Instruct as our base model. To mitigate the hallucination problem caused by the increasing CoTT length, we introduce an additional summary model with a longer context window length to help conclude the reasoning trace to answer the question.
Benchmarks. We evaluate the performance of Ego-R1 Agent on three existing long video understanding benchmarks covering both exocentric and egocentric views: Video-MME (long w/o subtitle) [18], EgoSchema [40], EgoLifeQA [75]. Among them, Vide-MME has a third-person view, and the rest have the first-person view. We follow the same paradigm as h-rag to generate the knowledge base for each video in these benchmarks. The hierarchy depth of each memory bank varies by datasets: only EgoLifeQA contains videos long enough to necessitate day-level summaries, while others extend to 10-minute-level or hour-level summaries at most. To further evaluate the capability of Ego-R1 Agent in handling multi-perspective and long temporal reasoning question answering tasks, we establish Ego-R1 Bench, a reasoning based benchmark for ultra-long egocentric video understanding. Distinct from Ego-R1 Data, Ego-R1 Bench comprises 300 QAs evenly distributed across six first-person perspectives. For each perspective, Ego-R1 Bench includes a balanced mixture of human-labeled and human verified QAs.
Comparison Methods. We benchmark Ego-R1 Agent against recent representative approaches, including MLLM-based video understanding methods [28, 58, 64, 81, 82], RAG-based method [37], reasoning model [16] and video agents [63, 79]. For each question, we restrict the input to video content occurring before the query timestamp, ensuring causal consistency in all comparisons. To ensure fair comparison across methods with different architectural constraints, we adopt an adaptive frame-sampling protocol: 1) Standard frame-based MLLMs [16, 81, 82] and LLaVA-OneVision [28] receive 64 uniformly sampled frames per query; 2) Video-RAG [37] uses its native setting of 64 frames; 3) Higher-capacity models such as InternVideo2.5 [64] and Gemini 1.5 Pro [58] are provided with 512 uniformly sampled frames; 4) Agent-based methods that rely on caption-guided key-frame selection [63, 79] are supplied with 1 024 uniformly sampled frames, recomposed into 1 FPS videos. This protocol equalizes input budgets while respecting each model’s architectural constraints.
# 6.2 Results
Table 2 presents a quantitative comparison of Ego-R1 with state-of-the-art video understanding models on both exocentric and egocentric benchmarks. Ego-R1 achieves the best or second-best score on three of the four datasets, despite using far fewer parameters than most competitors.
Exocentric setting. On VideoMME (long), whose clips average $4 1 ~ \mathrm { m i n }$ , Ego-R1 achieves $6 4 . 9 \%$ accuracy, which is the highest score among open-weight models and second overall, falling behind only the proprietary Gemini-1.5-Pro $( 6 7 . 4 \% )$ . It surpasses other public MLLMs, such as LLaVAVideo $( 6 1 . 5 \% )$ and InternVideo2.5 $( 5 3 . 4 \% )$ , while using less than half their parameter count. These results indicate that, although Ego-R1 is trained in an egocentric regime, it generalizes effectively to exocentric settings.
Egocentric settings. Ego-R1 achieves the highest accuracy on the proposed egocentric long video reasoning benchmark - Ego-R1 Bench (with an average time of $4 4 . 3 \mathrm { h }$ ), achieves $4 6 . 0 \%$ accuracy. This result exceeds Gemini-1.5-Pro by $7 . 7 \%$ and surpasses the strongest open baseline, LLaVA-Video, by $1 7 . 0 \%$ , underscoring the benefit of hierarchical retrieval and multi-turn tool calling for reasoning tasks with sparsely distributed events. On EgoSchema (3 min clips), Ego-R1 records $6 8 . 2 \%$ , second only to Gemini $( 7 2 . 2 \% )$ ; on EgoLifeQA we obtain $3 6 . 0 \%$ after removing any training overlap, comparable with LLaVA-Video $( 3 6 . 4 \% )$ and approaching Gemini $( 3 6 . 9 \% )$ .
Analysis. Both frame-based MLLMs and RAG variants exhibit marked performance drops on Ego-R1 Bench, and agent-based approaches remain in the $3 2 \cdot 3 6 \%$ range, well below the $46 \%$ achieved by Ego-R1. These findings indicate that agent-based approaches provide a more effective solution for long-video reasoning tasks, while our CoTT style dynamic tool calling, enables even a compact 3B model to conduct reliable, long-horizon reasoning over hours-long egocentric video.
# 6.3 Ablation Study
To better understand the contribution of different training components in Ego-R1, we conduct ablation studies using identical base models under varying training regimes. Specifically, we compare models trained with: (1) SFT only, (2) RL only, and (3) a combination of both. Quantitative results are reported in Table 3.
Table 3: Ablation study on different training regimes. We use Qwen-2.5-3B-Instruct as our base model to validate the effectiveness of the two training components.
The zero-shot base model achieves
only $1 . 4 \%$ task accuracy on Ego-R1 Bench and $4 . 3 \%$ format accuracy for intermediate tool calls. Interestingly, after applying vanilla RL training using GRPO without any intermediate CoTT supervision, the task accuracy drops to $0 \%$ , while tool-call format accuracy improves by $9 \%$ . This indicates that although the model can learn the structural format of tool calls during RL from their emergent capabilities, the absence of reasoning trace supervision leads to unstable or ungrounded predictions, ultimately harming task performance.
In contrast, applying SFT with CoTT data, even in limited epochs (e.g., 3 epochs), significantly improves both task and format accuracy. This highlights the importance of structured reasoning demonstrations during pretraining: they not only teach the model to produce correctly formatted tool calls, but also establish a foundation for multi-step reasoning in long-horizon tasks. | We introduce Ego-R1, a novel framework for reasoning over ultra-long (i.e.,
in days and weeks) egocentric videos, which leverages a structured
Chain-of-Tool-Thought (CoTT) process, orchestrated by an Ego-R1 Agent trained
via reinforcement learning (RL). Inspired by human problem-solving strategies,
CoTT decomposes complex reasoning into modular steps, with the RL agent
invoking specific tools, one per step, to iteratively and collaboratively
answer sub-questions tackling such tasks as temporal retrieval and multi-modal
understanding. We design a two-stage training paradigm involving supervised
finetuning (SFT) of a pretrained language model using CoTT data and RL to
enable our agent to dynamically propose step-by-step tools for long-range
reasoning. To facilitate training, we construct a dataset called Ego-R1 Data,
which consists of Ego-CoTT-25K for SFT and Ego-QA-4.4K for RL. Furthermore, our
Ego-R1 agent is evaluated on a newly curated week-long video QA benchmark,
Ego-R1 Bench, which contains human-verified QA pairs from hybrid sources.
Extensive results demonstrate that the dynamic, tool-augmented chain-of-thought
reasoning by our Ego-R1 Agent can effectively tackle the unique challenges of
understanding ultra-long egocentric videos, significantly extending the time
coverage from few hours to a week. | [
"cs.CV",
"cs.AI"
] |
# I. INTRODUCTION
Data governance has become increasingly crucial as data is becoming larger and more complex in enterprise data warehouses. For example, in an organization’s data pipeline, data flows from upstream artifacts to downstream services, which may be built by various teams that know little about other teams’ work and often introduce challenges when anyone wants to change their data. In this case, lineage [9], [10], especially finer-grained column-level lineage, is often needed for simplifying the impact analysis of such a change, i.e., how a change in the upstream would affect the downstream. In another real-world scenario, column-level lineage can help identify how sensitive data flows throughout the entire pipeline, thereby improving the overall data quality and validating data compliance with regulations, such as GDPR and HIPAA [7].
While capturing lineage information in DBMS has been studied extensively in the database community [1], [2], [11], the need remains to curate the lineage information from static analysis of queries (without executing the queries). On the one hand, existing systems or tools would introduce large overheads by either modifying the database internals [1], [2] or rewriting the queries to store the lineage information [11], [12]. On the other hand, different data warehouse users may need to disaggregate the lineage extraction workflow from query execution to simplify their collaboration, as shown in the following example.
Fig. 1. Lineage extraction from query logs without a database connection.
Example 1: An online shop uses a data warehouse to store and analyze its customer and transaction data. There is a view, webinfo, which keeps track of user activities, and another view, info, connects the users’ website activities (stored in view webact) to their orders, which may be used for recommendation purposes. However, the online shop owner decides to edit the page column of the web table and requests an impact analysis from the data warehouse provider.
1 $Q _ { 1 } =$ CREATE VIEW info AS
2 SELECT c.name, c.age, o.oid, $\textrm { w } . \star$
3 FROM customers c JOIN orders o ON c.cid $\mathrm { ~ ~ { ~ \ b ~ = ~ \ b ~ \circ ~ } ~ }$ .cid
4 JOIN webact w ON c.cid $\mathbf { \Omega } = \mathbf { \Omega } _ { \mathsf { W } }$ .wcid;
5 $Q _ { 2 } =$ CREATE VIEW webact AS
6 SELECT w.wcid, w.wdate, w.wpage, w.wreg
7 FROM webinfo w
8 INTERSECT
9 SELECT w1.cid, w1.date, w1.page, w1.reg
10 FROM web w1;
11 $Q _ { 3 } =$ CREATE VIEW webinfo AS
12 SELECT c.cid AS wcid, w.date AS wdate,
13 w.page AS wpage, w.reg AS wreg
14 FROM customers c JOIN web w ON c.cid $= 1$ w.cid
15 WHERE EXTRACT(YEAR from w.date) $= \ 2 0 2 2$ ;
Due to access control and privacy regulations, the engineer from the data warehouse provider can only access the log of database queries instead of the DBMS. The task is prone to being time-consuming and may involve tracing unnecessary columns without a comprehensive data flow overview. To address this, the engineer considers using tools like SQLLineage [6] to extract and visualize the lineage graph.
Although it can generate a lineage graph as shown in Figure 2, there are a few issues with the column lineage. One is that the node of webact erroneously includes four extra columns, highlighted in a solid red rectangle. Another error arises for view info due to the SELECT $\star$ operation, which makes it unable to match the output columns to columns in webact. Instead, it would return an erroneous entry of webact. $\star$ to info. $\star$ (in solid red rectangle) while omitting the four correct columns from webact. It would also return fewer columns for the view info (in dashed red rectangle) and completely ignore the edges connecting webact to it (the yellow dashed arrows). If the engineer used the information from this lineage graph, then not only an erroneous column (webact.page) is provided, but the results also miss actual impacted columns from the webact and info table. As we will demonstrate, our approach is able to handle statements like SELECT w. $\star$ and capture all columns and their dependencies missed by prior tools.
Fig. 2. The lineage graph for Example 1. Existing tools like SQLLineage [6] would miss columns in the dashed red rectangle and return wrong entries in the solid red rectangle, while the yellow is the correct lineage
Curating lineage information from query logs is also advantageous for debugging data quality issues, enhancing data governance, refactoring data, and providing impact analysis. However, existing tools [5], [6] often fail to accurately infer column lineage due to the absence of metadata. To support developers and analysts in extracting lineage without the overhead of running queries in DBMS, we develop a lightweight Python library, LINEAGEX, which constructs a column-level lineage graph from the set of query definitions and provides concise visualizations of how data flows in the DBMS.
Challenges. LINEAGEX achieves accurate column-level lineage extraction by addressing the following two challenges. First is the variety of SQL features, especially for features that involve intermediate results or introduce column ambiguity. For example, Common Table Expressions (CTEs) and subqueries generate intermediate results that the output columns depend on, while the desired lineage should only reveal the source tables and columns. Set operations may introduce column ambiguity, primarily due to the lack of table prefixes. Second, when there is an absence of metadata from the DBMS on each table’s columns, e.g., when the query uses SELECT $\star$ or refers to a column without its table prefix, it may introduce ambiguities. Thus, prior works fail to trace the output columns when the $\star$ symbol exists and cannot identify their original table without an explicit table prefix.
Our contributions. For the first challenge, LINEAGEX uses a SQL parser to obtain the queries’ abstract syntax trees (AST) and perform an intelligently designed traversal on the AST with a comprehensive set of rules to identify column dependencies. LINEAGEX addresses the second challenge by dynamically adjusting the processing order for queries when it identifies ambiguities in the source of tables or columns. Moreover, to accommodate the majority of data practitioners, we integrate
Fig. 3. An illustration of LINEAGEX.
LINEAGEX with the popular Python data science ecosystem by providing a simple API that directly takes the SQL statements and outputs the lineage graph. Besides the API, we provide a UI that visualizes the column lineage for users to examine.
In this demonstration, we will showcase the impact analysis scenario and illustrate how LINEAGEX provides accurate column-level lineage to further help users monitor their data flow. The user can compare the lineage extraction results by LINEAGEX with prior tools. Since pre-trained large language models (LLMs) have shown impressive performance in understanding code, we will also demonstrate using state-of-the-art LLMs like GPT-4o for impact analysis and how to augment their results with the column-level lineage from LINEAGEX.
# II. BACKGROUND AND RELATED WORK
Data lineage tracks how data flows between each step in a data processing pipeline. Consider each processing step as a query $Q$ , the table-level lineage $\intercal$ of $Q$ encodes which input tables contribute to its output; and the column-level lineage C is a mapping from $Q$ ’s output columns $\mathcal { C } ^ { o u t p u t }$ to $Q$ ’s input columns $C ^ { s o u r c e }$ , which encodes for each output column which specific columns in the input tables it relies on. More specifically, for an output column $c ^ { o u t } \ \in \ C ^ { o u t p u t }$ of $Q$ , an input column $C ^ { s r c } \in \mathcal { C } ^ { s o u r c e }$ is included in $\mathsf { C } ( c ^ { o u t } )$ if any changes to $C ^ { s r c }$ will lead to a potential change in the values in $c ^ { o u t }$ — we may not only include the input columns directly contributing to the output value but also take any column referred in the query into consideration.
Then, consider a set of queries $\mathcal { Q } = \{ Q _ { i } \}$ , lineage extraction is to find the pair $( \mathsf { T } _ { i } , \mathsf { C } _ { i } )$ for each $Q _ { i }$ . Note that queries in $\mathcal { Q }$ may be table/view creation queries, hence $\mathbb { T } _ { i }$ and $\mathsf { C } _ { i }$ may map the outputs of $Q _ { i }$ to the outputs of other queries. In practice, to make the lineage graph easy to read, we can combine these two graphs and group all columns’ output by the same query to visualize this graph.
Related work. Data lineage [9], [10] has been studied extensively in the database research community. To track finegrained lineage information down to the tuple level or cell level, people have extended relational database engines like in ProvSQL [1] and PERM [2] or built middlewares that rewrite queries [11], [12], which are often ”overkill” for column-level lineage. Various industry-leading tools, including Linkedin’s Datahub [8], Microsoft’s Purview [4], and Apache Atlas [3], are more than capable of handling data pipelines and relational databases, but they may incur high operational and
TABLE I KEYWORD RULES.
$c _ { c o n }$ for each column: wcid= {c.cid} wdate $\mathbf { \Sigma } = \mathbf { \Sigma }$ {w.date} ⑤ 元 wpage $\ c =$ {w.page} wreg $\mathbf { \Psi } = \mathbf { \Psi }$ {w.reg} c.cid,w.date, Cref={c.cid,w.cid, w.page,w.reg w.date} 4 0 Cref ={c.cid,w.cid}③ X EXTRACT(YEAR from w.date)= 2022
T={customers}
Cpos={c.cid} ①p c.cid = w.cid PTps= cdwae w.cid,w.reg,w.page} C customers W web
maintenance costs. Vamsa [13] annotates columns used to train machine learning models for Python scripts. There are also Python libraries like SQLGlot [5] and SQLLineage [6] that parse SQL queries statically; however, they focus on lineage for individual files, lacking the ability to find the dependency across queries, especially when there are ambiguities in table or column names. None of the methods above provides lightweight and accurate lineage extraction at the column level, like what LINEAGEX offers, without running the database queries; LINEAGEX can also visualize related tables and the data flow between columns in an interactive graph.
# III. SYSTEM AND IMPLEMENTATION
The overview of the LINEAGEX system is shown in Figure 3. LINEAGEX allows users to input a list of SQL statements or query logs. Below are details of each module.
SQL Preprocessing Module. The first step is to scan each query and record the mappings from the query’s identifier to its query body. For CREATE statements, we use the created table/view’s name as the query identifier, while for SELECT statementonly queries, we use a randomly generated id1. Then, each identifier is mapped to the body of the SELECT statement, forming a key-value pair. For instance, for $Q _ { 3 }$ in Example 1, our module would have webinfo as the key and the SELECT statement . . . (line 12 to 15) as the value. These key-value pairs are stored in a Query Dictionary (QD), which will be further used to facilitate the inference between queries and identify the query dependencies.
SQL Transformation Module. Then, the Transformation Module reads each entry in the dictionary QD from the Preprocessing Module, generating an abstract syntax tree (AST) using a SQL parser (in the implementation, we used SQLGlot). The SQL AST captures all keywords and expressions in the query in a tree-like format, where the leaf nodes represent the initial scanning of source tables or the parameters of each operator, the root represents the final step, and intermediate nodes represent relational operators in the query.
SQL Lineage Information Extraction Module. The final module takes each query AST as input and builds the mappings from the result view/table to its lineage $\mathrm { ~ T ~ }$ and the mapping from output columns $C ^ { o u t p u t }$ to input columns $C ^ { s o u r c e }$ . We consider three types of columns in the lineage: $1 ) C ^ { c o n }$ : columns that directly contribute to $C ^ { o u t p u t }$ ; 2) $C ^ { r e f }$ : columns referenced in the query, e.g., columns used in the join predicate or the WHERE clause; and 3) $C ^ { b o t h }$ : columns in both $C ^ { c o n }$ and $C ^ { r e f }$ . The extraction process involves traversing the AST with a post-order Depth-First Search (DFS), for which we create some temporary variables: $M _ { C T E }$ is a mapping for the table and column lineage information from WITH/subquery, $C ^ { p o s }$ denotes column candidates, and $\mathcal { P }$ denotes the resulting columns of the most recent projection. When encountering different keywords, the lineage information and temporary variables will be updated according to the rules in Table I.
An example for traversing the AST of $Q _ { 3 }$ is shown in Figure 4. $\textcircled{1}$ : The traversal starts with the leaf node, scanning of customers, so it follows the FROM Rule by adding it to $\mathrm { ~ T ~ }$ and its columns to $C ^ { p o s }$ . $\textcircled{2}$ : The next node is scanning of web, so it is added to $\mathrm { ~ T ~ }$ and its columns to $C ^ { p o s }$ . $\textcircled{3}$ : The next node is a JOIN, following the Other keywords Rule: customers.cid and web.cid are added to to $C ^ { r e f }$ . $\textcircled{4}$ : For the WHERE node $( \sigma )$ , same rule applies, hence adding web.date to $C ^ { r e f }$ . $\textcircled{5}$ : The last node is the SELECT $( \pi )$ , applying the SELECT Rule. Each output column’s $C ^ { c o n }$ only has one column, e.g., wcid has $C ^ { c o n }$ of customers.cid. Table/View Auto-Inference. In the Lineage Information Extraction module, the system gives priority to SQL statements identified by keys in QD from the Preprocessing Module. This procedure leverages a stack to reorder the query ASTs to traverse, where current traversal is temporarily deferred and placed onto the stack. That is, in cases where the tables or views encountered during the traversal have not been processed yet, they are pushed to the stack. Once the lineage information of missing tables is extracted, the deferred operation is popped from the stack following a Last-In-FirstOut protocol and resumes. This strategic approach plays a pivotal role in handling SELECT $\star$ statements and resolving ambiguities related to columns without a prefixed table name.
When the database connection is available. While primarily focusing on static lineage extraction from query logs, LIN
Fig. 5. The User Interface of LINEAGEX.
EAGEX can also incorporate the extraction with a database connection. We extended LINEAGEX using PostgreSQL’s EXPLAIN command to obtain the physical query plan instead of the AST from the parser, which provides accurate metadata to deal with table and column reference ambiguities. Similar to the absent views or tables in the static extraction, an error may occur due to missing dependencies when running the EXPLAIN command. This requires the stack mechanism and performing an additional step to create the views first to ensure the presence of the necessary dependencies.
# IV. DEMONSTRATION
We will walk through the audience with the use cases like Example 1, employing multiple datasets, such as the MIMIC dataset 2 in the healthcare domain. The MIMIC dataset has a reasonably complex schema with more than 300 columns in 26 base tables and 700 columns in 70 view definitions. We demonstrate in detail each step of using LINEAGEX for our running example in the environment of a Jupyter Notebook.
Step 1: Get started. Users have the flexibility to store their SQL queries in either files or a Python list. In this example, all SQL queries are stored in the file customer.sql. Then the function call is straightforward, as outlined in Figure 5 $\textcircled{1}$ , the users simply install and import the library, then call the LINEAGEX function. The result will be returned in a JSON file (lineage information) and an HTML file (lineage graph).
Step 2: Locating the table. Next, users can visualize the graph using the show function in the notebook or the show_tab to open a webpage. Moreover, users can select the table of interest through a dropdown menu, as shown in Figure 5 $\textcircled{2}$ . Subsequently, the target table web and its corresponding columns are displayed.
Step 3: Navigating column dependency. Users can click the explore button on the top right of the table to reveal the table’s upstream and downstream tables, presenting the initial table lineage. The data flows from left to right on the visualization — tables on the right are dependent on tables on the left. Since we are doing an impact analysis, that is to find all the downstream columns and their downstream columns and so on. The first explore action would only show webinfo and webact tables, since they are the only ones that are directly dependent on the web table. The next explore action would reveal the info table, and there would be no more downstreams for info. With the lineage graph, hovering over the page column highlights all of its downstream columns, as shown in Figure 5 $\textcircled{3}$ .
Step 4: Solving the case. The page column directly contributes to wpage from webinfo(shown in red), so it is definitely impacted. The webact table is a result of a set operation from web and webinfo, therefore all of the webact’s columns will reference the page column and thus all get impacted(shown in blue and orange when it is both referenced and contributed). Since the wcid column is impacted, it is also in the JOIN operation for the info table, then all of the columns would reference the wcid column and potentially get impacted. Therefore, the end result for the impact analysis would be webinfo.wpage and all of the columns from the webact and info tables.
Comparison with existing methods. In our demonstration, users can compare results from LINEAGEX with those from SQLLineage [6]. SQLLineage returns incorrect columns for info and lacks lineage information for columns derived from webinfo, as shown in Figure 2. The users can also see how state-of-the-art LLMs respond to their questions about impact analysis: for example, GPT-4o is able to correctly identify all contributing columns impacted by changes to page—specifically, the wpage columns in webinfo, webact, and info tables (highlighted in red or orange), but it is not able to reveal the columns that are referenced (not directly contributing to) in the SQL (such as the webact.wcid in the JOIN condition).
# REFERENCES
[1] P. Senellart et al., “ProvSQL: Provenance and Probability Management in PostgreSQL,” PVLDB, vol. 11, no. 12, pp. 2034–2037, 2018.
[2] B. Glavic and G. Alonso, “Perm: Processing Provenance and Data on the Same Data Model through Query Rewriting,” in Proc. ICDE, Shanghai, China, Mar. 29 - Apr. 2, 2009, pp. 174–185.
[3] M. Tang et al., “SAC: A System for Big Data Lineage Tracking,” in Proc. ICDE, Macao, China, Apr. 8-11, 2019, pp. 1964–1967.
[4] S. Ahmad et al., “Microsoft Purview: A System for Central Governance of Data,” PVLDB, vol. 16, no. 12, pp. 3624–3635, 2023.
[5] T. Mao, “sqlglot,” GitHub repository, 2024, [Online]. Available: https: //github.com/tobymao/sqlglot.
[6] J. Hu, “sqllineage,” GitHub repository, 2024, [Online]. Available: https: //github.com/reata/sqllineage.
[7] C. Dai et al., “An Approach to Evaluate Data Trustworthiness Based on Data Provenance,” in Secure Data Management, Berlin, Heidelberg, 2008, pp. 82–98.
[8] A. P. Bhardwaj et al., “DataHub: Collaborative Data Science & Dataset Version Management at Scale,” in CIDR, 2015.
[9] P. Buneman et al., “Why and where: A characterization of data provenance,” in ICDT, London, UK, Jan. 4–6, 2001, pp. 316–330.
[10] Y. Cui and J. Widom, “Lineage tracing for general data warehouse transformations,” The VLDB Journal, vol. 12, no. 1, pp. 41–58, 2003.
[11] B. S. Arab et al., “GProM—a swiss army knife for your provenance needs,” IEEE Data Eng. Bull., vol. 41, no. 1, 2018.
[12] D. Hern´andez et al., “Computing how-provenance for SPARQL queries via query rewriting,” PVLDB, vol. 14, no. 13, pp. 3389–3401, 2021.
[13] M. H. Namaki et al., “Vamsa: Automated provenance tracking in data science scripts,” in Proc. KDD, 2020. | As enterprise data grows in size and complexity, column-level data lineage,
which records the creation, transformation, and reference of each column in the
warehouse, has been the key to effective data governance that assists tasks
like data quality monitoring, storage refactoring, and workflow migration.
Unfortunately, existing systems introduce overheads by integration with query
execution or fail to achieve satisfying accuracy for column lineage. In this
paper, we demonstrate LINEAGEX, a lightweight Python library that infers column
level lineage from SQL queries and visualizes it through an interactive
interface. LINEAGEX achieves high coverage and accuracy for column lineage
extraction by intelligently traversing query parse trees and handling
ambiguities. The demonstration walks through use cases of building lineage
graphs and troubleshooting data quality issues. LINEAGEX is open sourced at
https://github.com/sfu-db/lineagex and our video demonstration is at
https://youtu.be/5LaBBDDitlw | [
"cs.DB"
] |
# 1 Introduction
In the era of rapidly advancing large language models (LLMs), the widespread dissemination of misinformation, combined with the increasing presence of AI-generated content, has made it significantly harder for individuals to assess the reliability of information. Consequently, claim verification, leveraging advanced Natural Language Processing (NLP) techniques to automatically determine the veracity of claims, has emerged as a critical research topic (Guo et al., 2022; Dmonte et al., 2024).
Figure 1: Conceptual analysis of previous works and VeGraph: a) Traditional approaches use IR to retrieve evidence and then verify sub-claims; b) Advanced approaches use IR to resolve ambiguous entities and then verify sub-claims; c) Our approach represents claims with graph triplets, then iteratively interacts with IR for entity disambiguation and sub-claims verification.
Traditional approaches typically begin by decomposing a given claim (e.g., at the sentence or passage level) into sub-claims, often using methods such as chain-of-thought (CoT) prompting (Wei et al., 2022). Subsequently, each sub-claim is evaluated by prompting an LLM, incorporating knowledge sources (e.g., information retrieval systems) to determine the truthfulness of the overall claim (Krishna et al., 2022; Zhang and Gao, 2023), as shown in Figure 1(a). Multi-step reasoning in LLMs is the process of addressing complex tasks by breaking them into sequential inference steps, where each step builds on the previous one, enabling the model to integrate intermediate results and draw conclusions. Recently, more advanced methods have enhanced claim verification task by incorporating multi-step reasoning to resolve ambiguous entities before verifying sub-claims (Wang and Shu, 2023; Pan et al., 2023; Zhao et al., 2024), as illustrated in Figure 1(b). These improvements have made such methods more promising for explainable and interpretable claim verification systems.
However, despite the advancements achieved by multi-step reasoning mechanisms, several critical challenges persist: i) Ambiguous Entity Interactions: Ambiguities in entity relationships remain a significant hurdle for fact verification systems (Sedova et al., 2024). This challenge is amplified in multi-step reasoning, where entity disambiguation must span the entire verification process. Unlike previous approaches that employ external tools for resolving ambiguities in individual subclaims, effective resolution here requires seamless integration throughout the reasoning pipeline; ii) Limitations of LLM-Based Multi-Step Reasoning Agents: Many existing approaches rely on static, single-plan veracity prediction (Pan et al., 2023; Wang and Shu, 2023). If a failure occurs at any intermediate step, the entire reasoning process may collapse, thereby underutilizing the adaptive potential of LLM-based agents to recover and refine reasoning paths dynamically.
In response to these challenges, this study introduces an agent-based framework, named Verify-inthe-Graph (VeGraph), for automatic fact verification. Our approach, illustrated in Figure 1(c), consists of three interconnected stages: an LLM agent first constructs a graph-based representation by decomposing the input claim into sub-claim triplets. The agent then interacts with a knowledge base to resolve ambiguous entities in triplets, iteratively updating the graph state. Finally, the agent verifies triplets, completing the process. Overall, the primary contributions of this work are as follows:
(1) We propose a novel multi-step reasoning approach for claim verification using an LLM agent framework with interactive graph representation (VeGraph). To the best of our knowledge, this is the first study to leverage multi-step reasoning in conjunction with an interactive entity disambiguation process to enhance claim verification performance.
(2) The proposed method, by integrating interactive graph representations with LLM agent frameworks, enhances explainability and interpretability by exploiting both structured and unstructured information — the key elements for advancing multistep reasoning tasks.
(3) We evaluate and show the effectiveness of our approach on two widely recognized benchmark datasets in this research field: HoVer (Jiang et al., 2020) and FEVEROUS (Aly et al., 2021).
# 2 Related Work
Claim verification is a long-standing and challenging task that seeks to determine the veracity of a claim by retrieving relevant documents, selecting the most salient evidence, and making a veracity prediction. In the era of large language models (LLMs), LLM-based claim verification has evolved to generate subclaims from input claims using the chain-of-thought (CoT) approach, and to retrieve evidence by augmenting the LLM with external knowledge sources for verification (Guo et al., 2022). ProgramFC (Pan et al., 2023) improves this process by leveraging in-context learning along with the CoT method, decomposing the original claim into program-like functions to guide the verification steps. Similarly, FOLK (Wang and Shu, 2023) translates the claim into First-OrderLogic (FOL) clauses, where each predicate corresponds to a subclaim that requires verification. FOLK then performs FOL-guided reasoning over a set of knowledge-grounded question-answer pairs to predict veracity and generate explanations, justifying its decision-making process. Furthermore, PACAR (Zhao et al., 2024) leverages LLM Agent concept, which incorporates self-reflection technique and global planning to enhance performance.
Despite the advancement of these methods, which exploit LLM reasoning capabilities to interact with external knowledge bases, they are limited to a single interaction with the knowledge base for an ambiguous entity. If the knowledge base fails to identify the requested entity in the query, the entire verification process may collapse. In light of these limitations, our proposed method similarly leverages LLM reasoning in conjunction with external knowledge retrieval systems. However, we extend this by incorporating agent-based LLM, enabling iterative interactions with the knowledge base to resolve ambiguous entities and execute multi-step reasoning for more robust and in-depth claim verification.
Figure 2: Three key components of VeGraph: (i) Graph Representation, which decomposes the complex input claim into graph triplets; (ii) Entity Disambiguation, ambiguous entities are resolved through iterative interactions with the knowledge base (KB); and (iii) Sub-claim Verification, which evaluates each triplet by delegating the verification process to the sub-claim verification function. The logging module records the whole process.
Figure 3: Prompt to make LLM construct the Graph Representation
# 3 Methodology
The main objective of this study is to predict the veracity of a complex input claim $C$ through automated reasoning using an interpretable LLM Agent, incorporating both structured and unstructured information through graph representation. Figure 2 shows the architecture of our proposed framework. Specifically, VeGraph consists of three stages: (i) the agent represents the claim $C$ with graph triplets, each corresponding to a sub-claim; (ii) the agent interacts with an external knowledge base to resolve ambiguous entities; and (iii) once all ambiguities are addressed, the agent verifies sub-claims corresponding to the remaining triplets. The veracity of the input claim is determined by the veracity of all graph triplets, if all the graph triplets are verified with the information in the knowledge base then the claim $C$ is Supported, if one of the triplets cannot be verified then the claim $C$ is Refuted. During processing through stages, the logging module records the activities of the agent for explainability.
# 3.1 Graph Representation
Input claims often contain complex sentence structures that challenge LLMs to grasp their semantic meaning. To address this, we transform each claim into a graph representation composed of triplets, with each triplet capturing a subclaim within the original claim (illustrated in Figure 3). This semantic graph construction is grounded in techniques from the field of Information Extraction, utilizing a joint approach for entity and relation extraction (Li
### Task: Construct a graph that captures entities and
relationships from a given claim, including hidden, ambiguous or implicit entities. Only use information from the claim, do NOT repeat similar triplets in the graph and return the graph in the following format:
### Examples:
<input_claim> Radha started her career in a 1964 Kannada film. The film was based on the life of the creator of the music form Geetam, who was born in 1484.
<guidance_for_graph_construction>
The claim mentions $" a 1 9 6 4 ~ \mathrm { f i l m " }$ without specific information, so it will be marked as an ambiguous entity $X _ { 0 }$
The claim also mentions "the creator of the music form
Geetam" without specific information so it will be marked as an ambiguous entity $X _ { 1 }$
<graph>
Radha $\parallel$ started career in ||
$X _ { 0 } |$ |is a||1964 Kannada film Graph
$X _ { 0 } |$ is based on the life of| $| X _ { 1 }$ Representation
$X _ { 1 } |$ createda music form||Geetam
||was born in||1484
### Actual claim
<input_claim> {{claim}}
et al., 2013; Miwa and Bansal, 2016) in an end-toend fashion. Entities (nodes) are defined as spans of text that represent objects, events, or concepts mentioned in the claim. Unlike traditional Named Entity Recognition (NER) systems, which rely on fixed categories, this approach accommodates a more diverse set of entity types. For relation extraction (edges), we apply methods from Open Information Extraction (OpenIE) (Fader et al., 2011) leveraging LLMs’ semantic comprehension. Instead of restricting relations to predefined categories (e.g.,
OWNERSHIP, LOCATED), this method extracts relations expressed in natural language, capturing detailed document-level interactions. For instance, in a semantic graph, a relation like “is based on the life of” (in Figure 3) accurately represents the relationship between two entities within the claim.
Formally, in VeGraph, the graph construction process leverages in-context learning (Wei et al., 2022) to prompt the LLM to generate graph $G = \{ T _ { 1 } , T _ { 2 } , . . . , T _ { N } \}$ consisting of N triplets, each triplet $T _ { i } = ( E _ { 1 i } , R _ { i } , E _ { 2 i } )$ corresponds to a subclaim extracted from the original claim $C$ . Here, $E _ { 1 i }$ and $E _ { 2 i }$ denote the head and tail entities, respectively, while $R _ { i }$ captures the semantic relation between them. Complex claims often contain implicit or ambiguous entities that need to be resolved to facilitate claim verification. For example, in the claim shown in Figure 3, the entity “a 1964 Kannada film” is not explicitly named, necessitating a disambiguation process. To address this, we categorize entities into two types: explicitly stated entities are marked as standard entity nodes, while ambiguous entities are tagged as $X _ { i }$ to signal the need for further clarification. This disambiguation process of these entities, detailed in Section 3.3, ensures a comprehensive representation of claim semantics. With this graph-based representation, the LLM can more effectively capture the semantic intricacies of the claim, thereby enhancing its reasoning capabilities and supporting improved performance of claim verification. (Refer to Figure 12 in Appendix for the detailed prompt)
# 3.2 Knowledge Base Interaction Functions
To facilitate interaction with the knowledge base in the open-book setting, we implement two core functions: Entity Identification and Claim Verification. Both functions utilize Information Retrieval techniques to retrieve relevant documents enabling context-aware decision-making. During execution, all the retrieved documents are recorded for thoroughness and explainability.
Entity Identification. This function acts as a question-answering module that extracts a specific entity. Formally, for a given question $Q$ , a set of top- $k$ relevant documents $D$ are retrieved from the knowledge base using an information retrieval system. The question $Q$ and the retrieved documents $D$ are processed jointly by the LLM to identify the target entity requested in the question. This allows the system to leverage external knowledge to resolve ambiguities and produce informed answers.
(Refer to Figure 11 in Appendix for the prompt) Sub-claim Verification. The Sub-claim Verification function is designed to assess the truthfulness of a given claim $C$ . Upon receiving a claim as input, the system retrieves a set of top- $\mathbf { \nabla } \cdot k$ documents $D$ relevant to $C$ from the knowledge base. These documents are then processed alongside the claim by the LLM, which determines whether the information supports or refutes the claim. The output is a binary decision—either True or False—that indicates the veracity of the sub-claim (Refer to Figure 10 in Appendix for the detailed prompt).
# 3.3 Entity Disambiguation Process
Following the transformation of the claim into a graph representation, the next step is identifying and resolving ambiguous entities. The disambiguation process is described in Algorithm 1 and illustrated step-by-step in Figure 4.
Triplet Grouping. To effectively address entity ambiguities, we organize the extracted triplets from the graph $G$ into distinct groups based on shared ambiguous entities. Each group consists of triplets containing the same ambiguous entity. For instance, in Figure 4, the triplets are grouped according to two ambiguous entities, $X _ { 1 }$ and $X _ { 2 }$ . This method isolates each ambiguous entity along with relevant information, facilitating a more focused resolution. Interaction with Knowledge Base. Once the triplets are grouped, the LLM interacts with each group to generate clarifying questions for the ambiguous entities. A major challenge arises when entity-related information in the knowledge base is often fragmented across multiple documents or sections, leading to that if we combine all the information or aspects related to an entity to find it from a specific partition of the knowledge base can be difficult. To address this, we adopt an iterative question refinement approach where the LLM uses the triplet information to narrow down ambiguities. Specifically, in each iteration, the LLM processes a group $g$ of triplets, producing the following outputs: i) a rationale $r$ , which outlines the reasoning for selecting specific triplet information to construct the question; ii) a set of triplet identifiers ids, denoting the triplets used in formulating the question; and iii) a targeted question $q$ , designed to clarify the ambiguous entity. The rationale $r$ guides the LLM in filtering relevant triplets $( i d s )$ for constructing a precise question $q$ . This dynamic and self-controlled process enables the LLM to consider various aspects of the triplet group, ensuring
Input Claim Radhmatigereria964Kaadil 會 music formGeetam,whowasbornin1484. Agent with memory Interaction Graph Graph Representation and
Construction Ambiguous Triplet Group Question Generation Identify Entity with IR Rationale: thought for iteration 1 t Answer: Navakoti 2) x X0 o。 riple: i6 Uprayan Ety: $X _ { 0 }$ 3) Xo | based on life ofl[ Xi filminwhichRadhastartedhercareer? Verified Triplets:1, 2
Iteration 1 )ie X 8 Rripe:tghtfritrain Apsaer:Ninyformation ustio: Wieattam Verified Triplets:None )a Rrettfriratin 5 Answer: Purandara Dasa
Iteration 2 (3) Navakoti Narayana | based on life of | X1 8 Question:Who is the creatorof Geetam Update Entity: X1 :4) X1 I created a music form || Geetam X musicformwhoselifewasadaptedinto Verified Triplets: 3, 4 5) X1 ll was born in |l 1484 filmNavakotiNarayana? Iterating until all ambiguous entities are resolved or Max iteration k is reached 1) Radha lts tata car $\parallel$ i | lyakti aaim V Supported Triplet (sub-claim) Output 3) Navakoti Narayana $\parallel$ based on life of $\parallel$ Purandara Dasa √ Unverified Triplet (sub-claim) Graph 4) Purandara Dasa |l created a music form || Geetam ? Synchronize agent's memory 5) Purandara Dasa || was born in || 1484 !?
comprehensive coverage of the information. The question $q$ is then processed by the function Entity Identification to resolve the ambiguous entity.
In addition, in the case when the question $q$ fails to resolve the ambiguity, this question along with its rationale are fed back into the LLM at the next iteration to generate a refined question $q ^ { \prime }$ that incorporates alternative triplet aspects. As the process iterates, after each iteration, if an ambiguous entity $X$ in a group is clarified, the graph $G$ is updated accordingly by replacing $X$ with the actual entity founded. Other groups that have triplets related to $X$ benefit from this update, improving question refinement for those groups in subsequent iterations. For example, in Figure 4, after the first iteration entity $X _ { 0 }$ is identified as "Navakoti Nrayana", this information is then used to update other triplets (e.g. triplet with id 3). At the next iteration, this resolved entity adds more information related to the $X _ { 1 }$ group. The iteration continues until either: i) all ambiguous entities are resolved; or ii) a maximum iteration limit $k$ is reached. The iterative refinement provides opportunities for the system to interact with the knowledge base and resolve the required ambiguous entity under a limited computing budget (Refer to Figures 13 and 14 for the prompts).
Verified Information and Outcome. When a question resolves an entity’s ambiguity, the corresponding triplets (with ids) are marked as containing verified information. The disambiguation process concludes when all ambiguous entities are resolved. If an entity remains ambiguous after $k$ iterations, the entire claim associated with that entity is classified as "REFUTES", indicating insufficient information for verification. Once all ambiguities are resolved, the disambiguation process outputs an updated graph with: i) Verified triplets: Triplets that contributed to the process of resolving ambiguities; and ii) Remaining triplets: Triplets that did not participate in the disambiguation process.
# 3.4 Verification of Remaining Sub-claims
After entity disambiguation, some triplets remain unverified, while others were not initially grouped for the disambiguation process. These remaining triplets require further verification. To achieve this, we employ a large language model (LLM) to generate full-text sub-claim questions based on the unverified triplets. For example, consider the triplet from Figure 4: "Purandara Dasa || was born in || $I 4 8 4 ^ { \prime \prime }$ . The LLM transforms this triplet into a full-text subclaim, such as "Purandara Dasa is the person who was born in $I 4 8 4 ^ { \prime \prime }$ . This subclaim is then used in conjunction with the knowledge base for verification, facilitated by the Subclaim Verification function. Once all remaining sub-claims are verified, the original claim $C$ is classified. If all sub-claims are supported, $C$ is categorized as Supported; otherwise, if any sub-claim is refuted,
$C$ is categorized as Refuted.
Algorithm 1: Entity Disambiguation
Input :Claim $C$ , Input graph $G$ , Max iteration $k$
Output :Clarified graph $G$ , Verified triplets V T riplets
Initialize: Agent Attempt Logs: $l o g s = \emptyset$ ; Verified Triplets: V T riplets ${ \bf \Phi } = \emptyset { \bf \Phi }$ ;
Function Main $( C , G , k )$ : // Logic of the disambiguation process for $i = 1$ to $k$ do groups $\ c =$ GroupTriplets $( G )$ ; foreach $( a e , g )$ in groups do GenQuesAndResEntity $( a e , g )$ if Clarified( $\mathbf { \nabla } _ { \cdot } G )$ then // check if all ambiguous entities is identified return "Successful"; return "Failed";
Function GenQuesAndResEntity $( a e , g )$ : // Agent try to generate question q to identify the ambiguous entity ae of the group $g$ $\boldsymbol { r } , q , i d s = \mathsf { G e n Q u e s } ( C , g , l o g [ a e ] )$ ; $e = { \mathsf { Q A } } ( q )$ ; if $e \neq$ None then V T riplets.add(ids); UpdateState $( G , a e , e )$ ; // Update verified triplets and the state of the graph when identified a new entity else logs[ae]. $a d d ( ( r , q ) )$ ; // Log the rationale and the question when the agent failed
Function GroupTriplets $( G )$ : // Group triplets for ambiguous entities $g r o u p s = \emptyset$ ; entities $\ l =$ AmbiguousEntities $( G )$ ; foreach ae in entities do group = ∅; foreach triplet in $G$ do if $a e \in$ triplet then group.add(triplet); groups.add((ae, group)); return groups;
# 4 Experiments
# 4.1 Datasets and Evaluation Metric
Dataset. We conduct our experiments using an open-book setting, simulating a real-world scenario where the system has to interact with an external knowledge base to verify claims. We evaluate the proposed VeGraph on two widely-used benchmark datasets for complex claim verification: HoVer and FEVEROUS. Both datasets contain intricate claims that require multi-hop reasoning and evidence gathering from various information sources. Due to the unavailability of public test sets, we rely on validation sets for evaluation. The HoVer dataset (Jiang et al., 2020) is a multi-hop fact verification benchmark designed to validate claims using evidence across multiple sources, including 2-hop, 3-hop, and 4-hop paths. It is based on the introductory sections of the October 2017 Wikipedia dump. The multi-hop nature of HoVer challenges the system to retrieve and aggregate information from several interrelated documents. The FEVEROUS dataset (Aly et al., 2021) addresses complex claim verification using both structured and unstructured data. Each claim is annotated with evidence derived from either sentences or table cells within Wikipedia articles of the December 2020 dump. For consistency with prior work (Aly et al., 2021), we evaluate FEVEROUS claims on three key partitions: Multi-hop Reasoning, Entity Disambiguation, and Numerical Reasoning. As our research focuses on textual fact-checking, we exclusively select claims that require sentence-based evidence, discarding those involving table cells or other structured data. To manage computational costs, specifically for the HoVer dataset, we sample 200 claims from each partition while ensuring balanced label distributions.
Metrics. Following practices in the field, we use the Macro-F1 as the primary evaluation metric.
# 4.2 Baselines
For the comparison, we selected recent modern methods using LLM for multi-step reasoning veracity prediction, which are related to our work, as the baselines. Specifically, the baselines are sequentially described as follows:
CoT-Decomposing CoT reasoning (Wei et al., 2022) is a popular prompting approach that includes chains of inference steps produced by LLMs. Accordingly, for the claim verification task, the input claim is directly decomposed into subclaims using an LLM. These subclaims are then verified sequentially by prompting the LLM with facts grounded on external knowledge sources via the information retrieval systems.
ProgramFC (Pan et al., 2023) is one of the first claim verification models in the era of LLMs with the explainable capability for multi-step reasoning of veracity prediction. Specifically, the model decomposes complex claims into simpler sub-tasks and then solves the sub-tasks by using specialized functions with program-guided reasoning.
FOLK (Wang and Shu, 2023) improve the explainable claim verification by introducing the firstorder-logic (FOL) clause as the guided claim decomposition to make veracity predictions and generate explanations to justify step-by-step the verification decision-making process.
Table 1: Report results of Macro-F1 score on HoVer and FEVEROUS datasets. \* are taken from respective papers. Both texts indicate the best score for the same experimental setup.
# 4.3 Experimental Setups
Configurations: Since the original baselines have different configurations including input data, information retrieval systems, and underlying LLM in their respective papers, therefore, we try to reproduce the baseline with the unified configuration, following their available source codes23. To account for computational constraints, we limit the number of iterations $k$ in our proposed method, VeGraph, to 5. For a fair comparison, we also report the ensembled performance of ProgramFC over 5 runs, consistent with the original implementation (Pan et al., 2023).
Backbone LLM and Prompting Strategy: In our experiments, we employ Meta-Llama-3-70BInstruct4 as the underlying LLM. To construct graph representations, we leverage in-context learning by providing the model with human-crafted examples to guide the LLM to perform the required tasks. For other tasks, we use zero-shot prompting leveraging existing LLM reasoning capability.
Retrieval System: Focusing on open-book settings, we utilize the corresponding Wikipedia corpora constructed specifically for the HOVER and FEVEROUS as knowledge sources. To simulate real-world systems, we implement a two-layer retrieval system. The first layer employs BM25 (Robertson et al., 1994) as the sparse retrieval algorithm. The second layer combines a Bi-Encoder model (bge-m3) with a Reranker (bge-reranker-v2- m3) (Chen et al., 2024), refining the search results by filtering out irrelevant documents. When interacting with the two functions described in Section 3.2, we set a constraint of a maximum of 15 retrieved documents or a maximum of 6000 tokens, adhering to the model’s maximum input length.
# 4.4 Main Results
The overall performance of VeGraph and the baselines are presented in Table 1. The results are organized into two sections. The first section reports the performance of the baseline models as documented in their works, highlighting their diverse configurations, such as variations in the number of examples used for inference, the underlying backbone models and the retrieval systems employed. These models employ varying configurations, including differences in the number of examples used for inference and the retrieval systems implemented. The second section presents the results of our proposed VeGraph model, alongside the reproduced baselines, which are evaluated under identical configurations. From these experiments, we derive several key insights:
VeGraph can effectively verify complex claims: VeGraph consistently outperforms most previous models across various test cases. Notably, on the HoVer dataset—where input claims exhibit substantial complexity—VeGraph demonstrates significant improvements, particularly in multi-hop reasoning tasks. Specifically, it achieves a notable 5-point gain in performance on four-hop claims, highlighting its effectiveness in handling complex claim verification. In contrast to the five-run ensemble strategy employed in ProgramFC, VeGraph utilizes an iterative interaction approach, wherein each iteration builds upon the previous one. This step-by-step reasoning mechanism ensures that the output of one iteration serves as the input for the next, rather than merely aggregating multiple independent predictions. Consequently, the final result is derived from a refined, sequential reasoning process. These findings emphasize the crucial role of interactive disambiguation in our approach, underscoring VeGraph’s suitability for verifying intricate claims that require advanced reasoning capabilities. Enhanced entity disambiguation leads to gaining in performance: Through the integration of interactive graph representations and the agent-based LLM framework, VeGraph achieves substantial performance gains across multiple benchmark datasets. For instance, in the FEVEROUS dataset, VeGraph surpassed baselines by 2 points in the Disambiguation category and 5 points in the Numerical category. However, VeGraph showed slightly lower performance in the Multi-hop category of FEVEROUS. This performance drop compared to ProgramFC is attributed to its use of specialized in-context examples tailored specifically to the FEVEROUS dataset (Pan et al., 2023). In fact, unlike complex datasets such as Hover, which require multi-hop entity disambiguation, the multi-hop subset of FEVEROUS only necessitates combining evidence from multiple articles without extensive entity resolution (Aly et al., 2021). In contrast, VeGraph employs a generalized reasoning pipeline that consistently integrates entity disambiguation across tasks. While this generalization improves adaptability, it introduces a trade-off in performance (e.g., the Multihop partition of FEVEROUS) where task-specific optimization might yield better results.
# 4.5 Ablation Study
To evaluate the contribution of each component in the proposed VeGraph framework, we conducted an ablation study on the HoVer dataset. Specifically, we analyzed the impact of graph representation for disambiguating entity interactions and the role of multi-step reasoning in decision-making within the LLM-agent framework. We begin by removing the interactive graph component, and then gradually increase the maximum number of disambiguation steps $k$ allowed. The results are presented in Table 2. The results demonstrate that removing graph representation severely degrades performance, especially on more complex claims (e.g., 3-hop and 4-hop). This highlights the importance of graph-based reasoning in VeGraph. Additionally, increasing the number of reasoning steps improves performance, indicating that multi-step decision-making is crucial for verifying complex claims.
Table 2: Ablation studies on the maximum number of disambiguation steps and the effectiveness of graph representation on Hover dataset.
# 4.6 Interpretability and Error Analysis
Our proposed VeGraph framework not only enhances the performance of claim verification systems but also offers a high degree of interpretability, which is essential for human comprehension and trust. Examples of these generated reasoning traces are provided in Figure 7 of Appendix B. To evaluate the quality of the reasoning processes and the generated graphs, we conducted a human analysis on 50 failed predictions for each partition (2-hop, 3-hop, 4-hop) of the HOVER dataset, focusing on instances where VeGraph incorrectly predicted the claim’s veracity. Human annotators categorized the errors into three primary types, corresponding to different stages of the framework: i) Graph Representation Errors: These occur when VeGraph fails to accurately capture the semantic structure of the claim, resulting in flawed graph representations; ii) Entity Resolution Errors: These arise when the system either fails to disambiguate entities or struggles to correctly identify the entities relevant to the claim; iii) Subclaim Errors: These involve incorrect predictions at the level of individual subclaims leading to erroneous final verdicts.
Table 3: Proportions of incorrectly predicted examples across partitions on the HOVER dataset.
As shown in Table 3, the error distribution varies across the 2-hop, 3-hop, and 4-hop partitions of the HOVER dataset. Despite few-shot in-context learning strategies being employed, the LLM occasionally encounters challenges in constructing accurate graph representations, particularly when processing complex claims. The increasing complexity of multi-hop claims (e.g., 3-hop and 4-hop) further exacerbates issues in entity disambiguation, as a larger number of ambiguous entities complicates the retrieval of relevant documents. Even after multiple interaction cycles, entity disambiguation may remain incomplete, affecting the overall reasoning process. These limitations in both graph construction and entity resolution propagate through the framework, leading to reduced accuracy in the final verdicts, particularly in multi-hop scenarios. Additionally, another source of error comes from failed interactions with the knowledge base, where unresolved triplets mislead the retrieval system, underscoring the critical importance of retrieval performance. | Claim verification is a long-standing and challenging task that demands not
only high accuracy but also explainability of the verification process. This
task becomes an emerging research issue in the era of large language models
(LLMs) since real-world claims are often complex, featuring intricate semantic
structures or obfuscated entities. Traditional approaches typically address
this by decomposing claims into sub-claims and querying a knowledge base to
resolve hidden or ambiguous entities. However, the absence of effective
disambiguation strategies for these entities can compromise the entire
verification process. To address these challenges, we propose
Verify-in-the-Graph (VeGraph), a novel framework leveraging the reasoning and
comprehension abilities of LLM agents. VeGraph operates in three phases: (1)
Graph Representation - an input claim is decomposed into structured triplets,
forming a graph-based representation that integrates both structured and
unstructured information; (2) Entity Disambiguation -VeGraph iteratively
interacts with the knowledge base to resolve ambiguous entities within the
graph for deeper sub-claim verification; and (3) Verification - remaining
triplets are verified to complete the fact-checking process. Experiments using
Meta-Llama-3-70B (instruct version) show that VeGraph achieves competitive
performance compared to baselines on two benchmarks HoVer and FEVEROUS,
effectively addressing claim verification challenges. Our source code and data
are available for further exploitation. | [
"cs.CL",
"cs.AI",
"cs.DB",
"cs.IR"
] |
# I. INTRODUCTION
The publication of the Bitcoin white paper in 2008 and the subsequent launch of the Bitcoin blockchain in 2009 have catalyzed extensive interest and research into blockchain technology. This technology has attracted widespread attention from businesses, researchers, and the software industry due to its compelling attributes, such as trust, immutability, availability, and transparency. However, as with any emerging technology, blockchains and their associated smart contracts present new challenges, particularly in areas such as blockchain infrastructure and smart contract development.
Ongoing research is actively addressing several critical issues, including blockchain scalability, transaction throughput, and the high costs associated with consensus algorithms. Additionally, smart contract development faces unique difficulties, such as limited stack space, the oracle problem, data privacy concerns, and cross-blockchain interoperability. These topics have been explored in-depth, with numerous comprehensive literature reviews available [e.g., 1, 2].
The constraints imposed by blockchain technology increase the complexity of smart contract development, which is well documented in various literature surveys, such as [3, 4]. To address these difficult challenges and simplify smart contract development, researchers such as López-Pintado et al. (2019) [5, 6], Tran et al. (2018) [7], Mendling et al. (2018) [8], and Loukil et al. (2021) [9] have proposed using Business
Process Model and Notation (BPMN) models that can be transformed into smart contracts.
We also use BPMN modeling to represent the application requirements, but we use a different approach to transform BPMN models to smart contracts. Instead of transforming the BPMN models directly to smart contract methods, we exploit multi-modal modeling to represent the flow of computation of the business logic in a blockchain-independent manner. To show the proof of concept, we developed a tool, Transforming Automatically BPMN model into Smart contracts, called TABS, to generate smart contract from BPMN models while also supporting side-chain processing [10].
In [11] we extended the TABS tool and its underlying concepts into a tool TABS $^ +$ that allows representing multi-step activities of actors using nested trade transactions while also providing, in automated fashion, supporting mechanisms to enforce the transactional properties [11] of the nested multistep transactions.
Most recently, we further extended the underlying concepts and the tool to support upgrade/repair of smart contract, which is necessary (i) to repair bugs in smart contracts and/or (ii) to amend the smart contracts to model new functionalities or features in business processes as they continually evolve [12].
One of our main objectives is to automate generation of smart contracts from BPMN models such that the transformation process can be managed by a BPMN modeler without (much) intervention by IT support with expertise on blockchains smart contracts. Although our approach has brought us closer to that objective, services of a software developer are still required to write some well-defined methods for the BPMN task elements.
# A. Objectives and Contributions
We have two objectives achieving of which also form the paper’s contributions. Our first objective is to show that, for certain types of blockchain applications, our approach can generate smart contracts in automated fashion from BPMN models without assistance of a software developer. Although this limits the type of applications that can be supported, the benefit that is gained is the generation and deployment of smart contracts directly from BPMN models that can be exploited by organizations without the usual support of developers of smart contracts.
Our second objective is to show that our approach can be used to support generation of smart contracts from BPMN models under various scenarios ranging from use by SMEs to use by large companies with sophisticated IT infrastructure that also utilizes blockchains to support its internal activities as well as collaborations with partner organizations.
# B. Outline
The second section provides background. The third section describes how we are augmenting our approach and the tool to support generation of smart contracts without the need of a software developer, albeit for a subset of BPMN models that satisfy certain conditions. The fourth section describes how our approach is suitable for use by SMEs as well as by large companies. The fifth section provides related work, while the last section provides summary and conclusions.
# II. BACKGROUND
We overview BPMN modeling first, then the use of Hierarchical State Machines (HSMs) and multi-modal modeling in system analyses, and then our approach to generating smart contracts from BPMN models.
# A. Business Process Management Notation (BPMN)
Business Process Model and Notation (BPMN), developed by the Object Management Group (OMG) [13-16], is a standard that was designed to be accessible to a diverse range of business users, including analysts, technical developers, and managers. The widespread practical adoption of BPMN is evidenced by the variety of software platforms that facilitate the modeling of business processes with the aim of automatically generating executable applications from BPMN models. For instance, the Camunda platform converts BPMN models into Java applications [17], while Oracle Corporation translates BPMN models into executable process blueprints using the Business Process Execution Language (BPEL) [18].
BPMN models are characterized by several key features, including flow elements that represent the computational flows between different BPMN components. A task within a BPMN model signifies computation that is executed when the flow reaches the task element. Other elements in BPMN manage the conditional branching and merging of computational flows, with Boolean expressions (guards) used to control the flow of computation. Furthermore, BPMN also models various events that may arise and how these events are caught and processed. Additionally, data elements within BPMN models describe the data or objects that move along with the computations, serving as inputs for decision-making in guards or computation tasks.
# B. FSMs, Hierarchical State Machines (HSMs), and Multimodal Modeling
Finite State Machine (FSM) modeling has been extensively utilized in software design and implementation, often enhanced with features such as guards on FSM transitions. In the late 1980s, FSMs evolved into Hierarchical State Machines (HSMs), in which a state in an FSM can be represented by an FSM itself. Although HSMs do not increase expressiveness of FSMs, they lead to hierarchical FSM structures to facilitate the reuse of patterns by allowing states to contain nested FSMs [19].
Girault et al. (1999) [20] explored the combination of HSM modeling with concurrency semantics derived from models like Communicating Sequential Processes [21] and Discrete Event systems [22]. They demonstrated that a system state could be represented by an HSM, where a specific concurrency model is applied exclusively to that state. This approach enables multi-modal modeling, allowing different hierarchical states to employ the most appropriate concurrency models for the concurrent activities within those states. We exploit multimodal modeling to express the flow of computation within a BPMN model in a blockchain-agnostic way by using DE modeling to represent concurrency while concurrent FSMs are used to express functionality.
# C. BPMN Model Transformation to Smart Contract Methods and TABS+R Tool
In [10], we presented a methodology for transforming BPMN models into smart contracts. The transformation process involves several key steps:
1. Transformation to DE-HSM Model: The BPMN model is first transformed into a graph representation and then into a DE-HSM model.
2. Analysis and multi-step trade-transaction specification: The model’s computation flow is analyzed to identify localized sub-graphs that are then used to define nested, multi-step trade transactions.
3. Transformation to DE-FSM Model: The DE-HSM model is elaborated by recursive replacement of each DE-HSM model with its elaborated DE-FSM model and thus flattening the entire DE-HSM model into an interconnected network of DE-FSM (Discrete Event Finite State Machine) sub-models.
# 4. Transformation to Smart Contracts: The interconnected DE-FSM models are transformed into smart contract code.
It should be noted that the flow of computation in the smart contracts is represented by DE modeling combined with functionality represented by concurrent FSMs – and these are blockchain independent. As long as the target blockchain has a smart contract deployed containing the TABS monitor, any smart contract generated by the transformation process can be deployed and executed on that target blockchain. The monitor smart contract provides the execution environment for the DE modeling and concurrent FSMs. In short, the monitor has a detailed view of the business logic flow, including the corresponding data flowing along with the flow of computation, wherein the business logic is expressed in an abstract manner, using DE modeling techniques and concurrent FSMs, and is thus blockchain independent.
# III. ATTESTATION FOR AUTOMATED GENERATION
One of our objectives is to achieve generation of smart contracts that are blockchain agnostic. We made progress towards this objective by representing the flow of collaboration logic in a blockchain-independent as described above. However, currently, the scripts for the BPMN task elements need to be coded/written by a software developer in a specific computer language executable by the target blockchain.
To overcome this issue of the dependence on coding of task elements, in this section we describe how we adapted a twolayer approach taken by the Plasma project, described in [23], to generate smart contracts without writing scripts for the BPMN task elements. The Plasma project approach to improve scalability uses two chain layers, wherein the sub-servient chain performs the transaction detailed work, while the main chain records the certifications of the results of work performed by the subservient chain, such as a sidechain. This approach was used for scalability by the Ethereum public blockchain [24] in that the main Beacon Chain simply records coordination activities in managing the consensus and approvals of blocks appended to shards and in storing results of attestation of shard blocks.
We utilize a similar approach in that the scripts of the BPMN task elements are executed off the mainchain, while the smart contract executed on the mainchain simply guides the collaborations and obtains certifications about the results of the tasks executed off chain.
# A. Motivation for Certifications of Work of Task Elements
The BPMN task element represents computation, within a swimlane (BPMN terminology) of one actor, on data flowing into the task element. The task uses the data flowing into the computation and the content of state variables to produces data flowing out of the computation while also updating state variables. For some applications, the task element examines the details of a document flowing in and makes decisions based on the data contained within that document. By having such computation performed by a smart contract, trust is achieved as all parties can examine details.
and instead we use prepared interactions for certified exchange of documents.
We examined sample use cases appearing in the literature, use cases detailing transformation of BPMN models into smart contracts, with examples being Order-Supply [28], Supply Chain [29], Parts Order [30], Sales and Shipment [31], and Ordering Medications [32]. In all of them, besides transferring documents amongst actors, the document creation, review, or amendment are performed off-chain by a single actor. In fact, for some use cases, such as the case of the supply chain management [32], data exchanged between the actors only consists of exchanging QR codes identifying documents that are exchanged – the smart contract interaction between the partners is in terms of documents being exchanged.
Thus, if the task method execution can be performed offchain, then the code for the task script element does not need to be provided as long as the generation of the smart contracts from BPMN model facilitates certified exchange of documents between the on-chain and off-chain computation.
For exposition purposes, we are going to use a simple BPMN model, shown in Fig. 1, for a sale of a large product, such as a combine harvester. The model shows that an agreement on the sale of the product is reached first, which is followed by arrangements for the transport of the product. Transport arrangements include finding the requirements for the transport of the product, such as or safety requirements in case of dangerous products in transport. Once the transport requirements are determined, the insurance and transport are arranged, and the product is shipped/transported. Following the transport, the product is received, and payments are completed.
# B. Certification of Exchanged Documents
Recall that as part of BPMN modeling, the modeler is asked to
Fig. 1. PMN Model for a Sale of Product and Its Delivery
However, such computation also causes difficulties due to amendments required for either repairing bugs or for new features that need to be introduced, as it is likely that the required amendments would be within the task elements that are executed as a part of a smart contract. And repairing/upgrading smart contracts is not easy [25-27].
Many applications include simpler interactions amongst partners/actors, interactions that consist of exchange of documents rather than performing computations on such documents. In such situations, task elements need not be used, use data association elements to describe the purpose of the task and describe the data/information flowing along with the flow of computation, and hence also flowing in and out of the task element. This information is also passed to the off-chain component together with a document that is input (flows) into the task element. Once the task is completed, output from the task element is a document that is passed along the flow of computation.
As is the usual practice for blockchains, a document is stored off-chain, while it is the digitally signed hash-code of the document that is stored on the blockchain, wherein the signed hash code is used to confirm the document authenticity, where the authenticity includes confirmation of (i) authorship and (ii) that the document has not been modified.
For storage of documents, we currently utilize the Inter Planetary File System (IPFS) [33]. When a document is created, uploading it to IPFS generates a new contentaddressed hash code identifier (CID), which is signed and stored by the smart contract. This allows the on-chain components associated with BPMN data elements to interact with the off-chain document without needing to directly handle its content.
For example, the first task receives a purchase offer document from an external source. An accepted purchase offer results in a sales agreement that is used in subsequent processing. The sales agreement is represented by an association data element, SalesAgr. The dotted arrow from RecAgr to the association data element SalesAgr signifies the creation of the SalesAgr by the RecAgr task. The dotted arrow from the SalesAgr by the GetTrReq task element signifies that the SalesAgr is delivered for further processing to the GetTrReq task.
The GetTrReq task determines the transport requirements for the product and stores them in a newly created IPFS document TrRequirements. The CID of the document is forwarded to next step in processing. The transport requirements are forwarded to the GetIns and GetTransp task elements that can be executed concurrently as shown by the fork gate represented by a diamond with a plus sign in it. The getIns task produces the insurance contract, called Insurance, while the getTransp creates a Transport document that is a contract for the transport of the product. Once the insurance and the transport contracts are obtained and provided to the transporter, the product be delivered to the destination, which is represented by the task DoTrasp. Completed delivery is documented in the document called Delivery that is forwarded to the final task, RecAndFin, to indicate reception of the product by the purchaser and finalization of the contract.
It should be noted that for brevity only a simplified model was presented that ignores many details, such as not accepting the purchase offer, deposits or final payments.
This standard interaction model for storing documents offchain is used to prevent the blockchain from being overburdened, while still allowing transactions to be secure and complex multi-step processes to be executed. Additionally, any update or modification to a document generates its new CIDs, effectively handling version control and verification throughout the smart contract’s lifecycle.
Thus, for applications that include collaborations that involve exchange of documents, the computations associated with the task elements can be off-loaded to off-chain resulting in facilitation of generation of smart contracts without requiring scripts for the BPMN task elements. Under such circumstances, our approach and tool to generation of smart contracts from BPMN models can be automated without intervention of a software developer and can be under the control of a Business Analyst (BA) who develops the BPMN model and asks the tool (i) to transform it into smart contract for the target blockchain and that (ii) to deploy the smart contract on the target blockchain.
In short, when the work of a task element can be executed off-chain and the interaction between the on-chain and offchain components can be modeled simply by a certified exchange of documents, then the transformation of the BPMN model into a smart contract is used to support such a certified exchange of documents and thus avoid coding of the task elements. Consequently, a BPMN model can be transformed into a smart contract in automated fashion and deployed on the target blockchain under the control of the BA without assistance of a software developer.
Currently, we support certified information exchange between the on-chain and off-chain components using HTTP web services. As an example, consider the communication between the seller company and the insurance company. First, the on-chain component generates a request to the insurance company to get the insurance while providing it with a document containing the product description and transport requirements. The insurance company responds either with a negative response or with a positive response providing the seller with the insurance contract. As smart contracts are not able to access external resources, the smart contract raises an event that is captured and results in servicing the event by invoking the web service requesting insurance. The web service will produce the insurance document and will provide it to the mainchain smart contract by a call to a smart contract method to receive the insurance document/contract, wherein such a certified exchange is generated by the transformation process of the BPMN model into smart contract.
# IV. SMART CONTRACTS FOR SMES AND LARGE COMPANIES
To show the flexibility of our approach, we are going to utilize the example use case, shown in Fig. 1, under two different scenarios, one in the context of a small SME, while the other one in the context of a large organization with sophisticated IT department that has expertise on writing smart contracts.
# A. Use Case in the Context of an SME
An SME would like to use a smart contract to ensure secure computation and obtain certified documentation on the trade activity. An SME’s Business Analyst (BA), who is familiar with BPMN modeling, uses the $\mathrm { T A B S ^ { + } R }$ tool to create a BPMN model shown in Fig. 1. The BA creates the BPMN model and specifies that the task elements are executed offchain and that the system should facilitate exchange of documents between the smart contract and the off-chain computation.
For an SME, off-chain computation may simply be manual by, perhaps, BA performing the off-chain work. For instance, for the GetTrReq task, the BA may contact a registry and find the transport requirements and store them in a newly created IPFS document TrRequirements. The CID of the document is forwarded to next step in processing. The transport requirements are forwarded to the GetIns and GetTransp task elements that can be executed concurrently as shown by the fork gate represented by a diamond with a plus in it. The BA may communicate with the insurance company for an insurance contract represented by the Insurance document that is stored on IPFS. Similarly, BA may negotiate a contract for the transport of the product, wherein the transport contract is stored in the Transport document on the IPFS. Once the insurance and the transport contract are obtained, they are forwarded to the DoTransport task. Once the product is delivered, the transporter returns a document, called Delivery that contains information on the delivery of the product. The Delivery document is forwarded to the RecAndFin to receive the Delivery document and finalize the trade activity.
For the DoTransport task, the insurance and the transport agreement would be input into the off-chain task, wherein the transporter would perform the transport and at the completion of the task would provide a document with confirmation of the product’s arrival at the destination. The smart contract records the activities performed by the BA while storing the documents on IPFS with their CIDs stored on the blockchain smart contract.
There is some initial setup required before an SME can create smart contracts from BPMN models. The SME’s target blockchain would need to be identified so that the generated smart contract can be deployed on the target blockchain. Furthermore, initially, the smart contract containing the $\mathrm { T A B S ^ { + } R }$ monitor would need to be deployed on the blockchain. However, this is only a one-time initial overhead that is amortized over all smart contracts generated by the approach for that target blockchain. Furthermore, this task is also automated as it simply involves deploying the $\mathrm { T A B S ^ { + } R }$ monitor smart contract on the target blockchain. Currently, we provide the TABS $+ \mathbf { R }$ monitor smart contracts for Hyperledger Fabric (HLF) and for blockchains based on Ethereum Virtual Machine (EVM).
# B. Use Case in the Context of a Large Company
Assume now that a similar application is being developed in the context of a large company with sophisticated IT systems. The company now has two departments, one for sales and one for the product shipment, and uses cutting-edge technologies, such as blockchains for collaborations and AI for automation. Once the sales agreement has been reached by the sales department, the sales agreement, which includes the product description and the purchaser information, needs to be communicated to a shipment department that uses its own internal processes to facilitate the product shipment to the purchaser.
A BPMN model that may represent the application is shown in Fig. 2. However, as showing the creation and exchange of documents would clutter the figure, we do not show exchange of such documents explicitly.
In comparison to Fig. 1, Fig. 2 has significant differences as information is flowing across departments and external actors that include the buyer, insurance company, transport company, and the registry of transport requirements. In BPMN, actor activities are contained in a swimlane that is represented by a rectangle. Information flow between actors is represented by lines that cross swimlanes. Thus, instead of a single swimlane as shown in Fig. 1, there are multiple swimlanes in Fig 2. There is a swimlane for each of the company’s sales and shipping departments, denoted as SalesDep and ShipDep, respectively; and a swimlane for each of the external actors that include the buyer, the transport-requirements registry (ReqRegistry), insurance company (InsComp), and the transporter (Trnasp).
Once the sales agreement, which includes the product description and the purchaser information, has been approved by the sales department, it needs to be communicated to a shipment department that uses its own internal processes to facilitate the product shipment to the purchaser.
After the shipping department receives the sales agreement, it interacts with the transport-requirements registry to find the product transport requirements, and then it communicates concurrently with the insurance company to obtain insurance, and with the transporter to arrange the transport contract.
Insurance is obtained by invoking a smart contract method of the insurance company, while providing it with information on sales agreement that includes information on the product to be shipped, shipment destination, manner of transport, etc. Obtaining a transporter is achieved in a similar manner by invoking a smart contract method. The transporter responds by providing the contract for transport of the product.
Following this, the transporter performs the transport and when finished, the confirmation of delivery is provided by the transporter. Finally, once the product is delivered, payments are finalized.
If all interactions amongst the actors can be achieved by certified exchange of documents, the transformation of the BPMN model into the methods of a smart contract(s) can be achieved without requiring coding of task element scripts.
# V. RELATED WORK
Closest to our research is the work on transforming BPMN models to smart contracts. The Lorikeet project [7] employs a two-phase methodology for converting BPMN models into smart contracts. First, the BPMN model is analyzed and transformed into smart contract methods, which are subsequently deployed and executed on a blockchain platform, specifically Ethereum. An off-chain component handles communication with the decentralized application (DApp), ensuring that actors exchange messages according to the BPMN model. The project also supports asset control, including both fungible and non-fungible tokens, and provides a registry and management methods for assets, such as transfers.
Caterpillar [5, 6] adopts a different approach by focusing on BPMN models confined within a single pool (a BPMN construct) where all business processes are recorded on the blockchain. Its architecture consists of three layers: Web Portal, Off-chain Runtime, and On-chain Runtime. The Onchain Runtime layer includes smart contracts for workflow control, interaction management, configuration, and process management, with Ethereum as the preferred blockchain platform.
Loukil et al. (2021) [9] proposed CoBuP, a collaborative business process execution architecture on blockchain. Unlike other methodologies, CoBuP does not directly compile BPMN models into smart contracts. Instead, it deploys a generic smart contract that invokes predefined functions. CoBuP’s threelayer architecture, comprising Conceptual, Data, and Flow layers, transforms BPMN models into a JSON Workflow model that governs the execution of process instances, which in turn interact with data structures on the blockchain.
Similar to CoBuP, Bagozi et al. [34] employ a three-layer approach, albeit in a simpler form. In the first layer, a business analyst represents the collaborative process in BPMN. In the second layer, a business expert annotates the BPMN model to identify trust-demanding objects, after which Abstract Smart Contracts, independent of any specific blockchain technology, are created. Finally, Concrete Smart Contracts are generated and deployed on a specific blockchain platform. | Research on blockchains addresses multiple issues, with one being writing
smart contracts. In our previous research we described methodology and a tool
to generate, in automated fashion, smart contracts from BPMN models. The
generated smart contracts provide support for multi-step transactions that
facilitate repair/upgrade of smart contracts. In this paper we show how the
approach is used to support collaborations via smart contracts for companies
ranging from SMEs with little IT capabilities to companies with IT using
blockchain smart contracts. Furthermore, we also show how the approach is used
for certain applications to generate smart contracts by a BPMN modeler who does
not need any knowledge of blockchain technology or smart contract development -
thus we are hoping to facilitate democratization of smart contracts and
blockchain technology. | [
"cs.SE",
"cs.CR",
"cs.DC"
] |
# I. INTRODUCTION
M IbLroLaIdMerETchEaRn-nweal be (nmdwmidWtahvs ) ofrmepqaurend yto atrnadsi iofnfearl sub-6 GHz bands, enabling higher capacity for communication systems. However, their high propagation loss introduces new challenges for system design. While high-gain narrow beams formed by large antenna arrays can effectively compensate for the high path loss, the resulting strong directional characteristics make the beams highly sensitive to user mobility and environmental obstructions [1]. This sensitivity can lead to a significant decline in the stability of mmWave connectivity. In this context, beam alignment (BA) becomes a crucial technology for determining the optimal/sub-optimal beamforming direction, playing a key role in establishing reliable communication links.
In vehicle-to-everything (V2X) scenarios, the high mobility of user equipment (UE) imposes even more stringent requirements on BA mechanisms. Compared to general mobile scenarios, the rapid topology changes unique to mmWave V2X significantly increase the complexity of real-time alignment for beams. Frequent beam misalignment can lead not only to communication link interruptions but also to severe degradation of communication quality. Existing BA solutions rely on repeated beam training processes to determine the optimal beam direction, but the resource-intensive nature of this process can encroach upon time slot resources allocated for data transmission, resulting in reduced system throughput. Therefore, developing intelligent BA mechanisms that offer efficiency and robustness has become a key research direction for ensuring the reliability of mmWave V2X communications.
C. Zheng and C. G. Kang are with the Department of Electrical and Computer Engineering, Korea University, Seoul 02841, South Korea (e-mail: zc331 @korea.ac.kr, ccgkang@korea.ac.kr). J. He and Z. Yu are with the School of Computing and Information Technology, Great Bay University, Dongguan Key Laboratory for Intelligence and Information Technology, and Great Bay Institute for Advanced Study (GBIAS), Dongguan 523000, China (e-mail: jiguang.he@gbu.edu.cn, zitong.yu@ieee.org). G. Cai is with School of Information Engineering, Guangdong University of Technology, Guangzhou, China (e-mail: caiguofa $2 0 0 6 @$ gdut.edu.cn). M. Debbah is with the Center for 6G Technology, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates (e-mail: merouane.debbah@ku.ac.ae).
The recent advancements in integrated sensing and communication (ISAC) have brought significant attention to sensingempowered beam prediction. The optimal beam pair between the base station (BS) and UE is determined by their instantaneous spatial positions and the surrounding environment. Proactively utilizing UE positioning data alongside sensing data measured by the BS for beam prediction is a viable strategy to eliminate beam training overhead. Beam prediction based on historical sensing data essentially involves analyzing the complex interplay between the temporal dimension (historical state evolution) and the spatial dimension (environmental physical characteristics) of sensing data. This analysis is aimed at constructing mathematical models that reveal hidden dynamic patterns and establish a dynamic mapping between sensing data and future beam states. This task is particularly well-suited for deep learning (DL) methods, which can effectively capture and model such intricate relationships [2]. Prior studies have explored the use of various communication and sensing modalities—such as sub-6 GHz channel state information (CSI) [3], RGB images [4], radar [5], LiDAR [6], and GPS [7]—in conjunction with DL architectures for beam prediction.
The rapid development of large language models (LLMs), such as ChatGPT [8] and DeepSeek [9], introduces new possibilities for physical-layer applications that go beyond the limitations of conventional methods. LLMs, with their billions of parameters, exhibit exceptional capabilities in natural language understanding and logical reasoning. Owing to extensive pretraining on diverse and large-scale datasets, LLMs can achieve strong performance with minimal task-specific finetuning. Compared to training ransformer models from scratch, LLMs require significantly less labeled data, making them particularly advantageous in real-world applications where collecting and annotating large-scale supervised datasets is often impractical. Recent efforts have explored leveraging LLMs for physical-layer tasks, such as mmWave beam prediction. A representative example is BeamLLM [10], which utilizes LLMs to predict beam directions in vehicle-to-infrastructure (V2I) scenarios. While BeamLLM demonstrates strong prediction performance, its reliance on a single sensing modality-namely RGB images-limits its adaptability in complex and dynamic environments where visual information alone may be insufficient due to occlusions, lighting changes, or adverse weather conditions. To address this limitation, a more recent study [11] has attempted to integrate GPS positioning data and RGB images for multimodal beam prediction. This work highlights the potential of cross-modal fusion in enhancing robustness. However, it suffers from two limitations. First, it relies on large-scale GPT-4 models whose inference latency renders the approach unsuitable for real-time applications. For practical deployment, a more feasible solution would leverage historical data to predict future beam, aligning better with real-world requirements. Second, still only a limited number of modalities and lacking adaptability to environmental variations.
Beyond this, some prior works have investigated more generalized multimodal fusion frameworks to improve prediction robustness. For example, Shi et al. [12] explored combining multimodal sensing data, yet the fusion was static and based on naive concatenation. In contrast, Zhang et al. [13] introduced a more dynamic mixture-of-experts (MoE) framework, in which a gating network assigned weights to each modality. In addition, Cui et al. [14] established the relationship between different modalities through cross-attention.
In this paper, we introduce $\mathbf { M } ^ { 2 }$ BeamLLM, a multimodal sensing empowered beam prediction framework built upon LLMs. Our major contributions are summarized as follows:
1) Our $\mathbf { M } ^ { 2 }$ BeamLLM framework integrates multiple data sources—images, radar, LiDAR, and GPS. This multimodal approach enables a more comprehensive understanding of the environment, significantly enhancing the accuracy and robustness of beam prediction, particularly in dynamic scenarios.
2) We pioneer the application of LLMs, specifically GPT2, to the beam prediction task, which has traditionally been addressed using DL architectures, e.g., recurrent neural networks (RNNs) [4]. By leveraging the superior generalization and reasoning capabilities of LLMs, we effectively model complex relationships within multimodal data, an approach that remains largely unexplored in existing studies.
3) Through supervised fine-tuning (SFT) of a pretrained LLM [15], we enable the model to adapt to the beam prediction task with minimal training data and computational cost. This contrasts sharply with the resourceintensive approaches of training models from scratch, offering a more efficient solution for practical deployment.
4) Experimental results demonstrate that $\mathbf { M } ^ { 2 }$ BeamLLM outperforms state-of-the-art DL models in both standard and few-shot prediction scenarios. Our framework achieves a Top-1 accuracy of $6 8 . 9 \%$ , surpassing the nextbest method by $1 3 . 9 \%$ , and maintains superior performance even under challenging few-shot conditions.
5) Through ablation studies, we provide empirical evidence on the impact of different sensing modality combinations on prediction performance. This analysis reveals that modality diversity generally enhances prediction accuracy, offering new insights into the value of multimodal fusion in beam prediction and advancing the understanding of multi-source data contributions in this domain.
The rest of this paper is organized as follows: Section II establishes the system model and problem formulation of V2I mmWave beam prediction. The proposed $\mathbf { M } ^ { 2 }$ BeamLLM framework is presented in Section III. Section IV presents extensive simulation results, including performance comparisons with other methods, along with some ablation studies. Finally, we conclude our work in Section V.
Notations: Bold lowercase letters denote vectors (e.g., x), and bold uppercase letters denote matrices (e.g., X). The superscripts $( \cdot ) ^ { \mathsf { T } }$ and $( \cdot ) ^ { \mathsf { H } }$ represent the transpose and Hermitian (conjugate transpose) operations, respectively. The symbol $0$ denotes element-wise division. The operator $\mathbb { E } [ \cdot ]$ denotes the statistical expectation, while $| \cdot | _ { 2 }$ denotes the Euclidean norm of a vector, and $| \cdot |$ returns the magnitude of a complex number. The functions $\operatorname* { m i n } ( \cdot )$ and $\operatorname* { m a x } ( \cdot )$ return the minimum and maximum element, respectively. The indicator function $\mathbb { 1 } \{ \cdot \}$ equals 1 if the condition inside the braces is true, and 0 otherwise. Unless otherwise specified, $\mathbb { R }$ and $\mathbb { C }$ denote the sets of real and complex numbers, respectively.
Fig. 1: Illustration of the V2I system model: The BS is equipped with a camera, radar, and LiDAR, while a GPS-RTK system provides the UE’s localization information to the BS.
# II. SYSTEM MODEL AND PROBLEM FORMULATION
# A. System Model
As shown in Fig. 1, without loss of generality, we consider a wireless communication system with $N$ -antenna BS and a single-antenna UE, employing a pre-defined beamforming codebook denoted as $\mathcal { F } ~ = ~ \{ \mathbf { f } _ { m } \} _ { m = 1 } ^ { M }$ , where $\mathbf { f } _ { m } \in \mathbb { C } ^ { N \times \bar { 1 } }$ represents the $m$ -th beamforming vector in the codebook. At time instance $t$ , the transmitter selects a beam $\mathbf { f } _ { m [ t ] } \in \mathcal { F }$ for signal transmission, where $m [ t ]$ denotes the index of the beamforming vector at time instant $t$ . The received signal is modeled as
Fig. 2: The model framework of $\mathbf { M } ^ { 2 }$ BeamLLM. In the sensing data encoding part, it encodes and aligns multimodal sensing data from camera, radar, LiDAR, and GPS. It then fuses these diverse inputs using a multi-head attention mechanism. Subsequently, the fused data undergoes input projection before being fed into an LLM backbone, which includes both frozen and unfrozen pre-trained components. Finally, an output projection and inverse normalization are applied to predict future beams.
$$
y [ t ] = \mathbf { h } ^ { \mathsf { H } } [ t ] \mathbf { f } _ { m [ t ] } s + n [ t ] .
$$
where $\mathbf { h } [ t ]$ is the mmWave channel vector, $s$ is the unitpower transmitted symbol, and $n [ t ] \sim \mathcal { C } \mathcal { N } ( 0 , \sigma _ { n } ^ { 2 } )$ is complex additive Gaussian noise. The optimal beam selection strategy maximizes the effective channel gain as follows:
$$
\mathbf { f } _ { m ^ { * } [ t ] } = \arg \operatorname* { m a x } _ { m \in \{ 1 , \cdots , M \} } \left| { \bf h } ^ { \mathsf { H } } [ t ] \mathbf { f } _ { m [ t ] } \right| ^ { 2 } ,
$$
where $m ^ { * } [ t ]$ denotes the index of the optimal beamforming vector at time instant $t$ . This selection criterion ensures the highest possible received signal-to-noise ratio (SNR) under the given codebook constraints.
# B. Problem Formulation
We aim to design a beam prediction strategy that maximizes the probability of correctly identifying the optimal beam index $m ^ { * } [ t ]$ over a future horizon $t = 1 , 2 , \cdots , T$ , leveraging historical observations from time slots $\tau = - H + 1 , - H + 2 \cdot \cdot \cdot , 0$ . The performance metric is defined as the expected accuracy over $T$ :
$$
\mathcal { P } = \mathbb { E } \left[ \frac { 1 } { T } \sum _ { t = 1 } ^ { T } \mathbb { 1 } \{ \hat { m } [ t ] = m ^ { * } [ t ] \} \right] ,
$$
where the expectation is taken over the joint distribution of channel realizations $\{ { \bf h } [ t ] \} _ { t = 1 } ^ { T }$ and historical observations $\{ { \bf h } [ t ] \} _ { \tau = - H + 1 } ^ { 0 }$ . While this formulation relies on temporal correlations in channel realizations, such methods face the following fundamental limitations:
• Channel dynamics dependency: Traditional approaches assume predictable temporal patterns (e.g., slow fading, Markovian transitions), which collapse in nonstationary scenarios (e.g., sudden blockage by vehicles, UAV swarms maneuvering).
• Reactive adaptation: Channel measurements only provide posterior information, delaying beam adjustments until after link degradation occurs.
In contrast, sensing data enables proactive prediction by directly observing the physical causes of channel variations. This paradigm shift replaces statistical channel extrapolation with geometric-environmental reasoning, achieving robustness against abrupt channel dynamics through environment-aware prediction.
# III. $\mathbf { M } ^ { 2 }$ BEAMLLM FRAMEWORK
In this section, we introduce $\mathbf { M } ^ { 2 }$ BeamLLM to tackle the multimodal sensing data empowered beam prediction task outlined in Section II. The general $\mathbf { M } ^ { 2 }$ BeamLLM framework is shown in Fig. 2, which consists of the following key components: sensing data encoding, multimodal data fusion, and future beam prediction.
# A. Multimodal Feature Encoding
The BS is equipped with a set of sensing modalities denoted by $\Omega = \{ \mathrm { I m a g e }$ , Radar, LiDAR, GPS}. For each modality $\omega \in$ $\Omega$ , the raw sensing data at time $t$ is represented as $\mathbf { X } _ { \omega } [ t ]$ , with the following specifications:
• Image Data: $\mathbf { X } _ { \mathrm { I m a g e } } [ t ] \in \mathbb { R } ^ { W _ { I } \times H _ { I } \times C _ { I } }$ , where $W _ { I } , \ H _ { I }$ and $C _ { I }$ represent the spatial width, height, and RGB/IR spectral channels, respectively.
• Radar Data: $\mathbf { X } _ { \mathrm { R a d a r } } [ t ] \ \in \ \mathbb { C } ^ { M _ { R } \times S _ { R } \times A _ { R } }$ , where the dimensions correspond to antenna, sampling, and chirp, providing information on the target’s angle, distance, and velocity.
• LiDAR Data: $\mathbf { X } _ { \mathrm { L i D A R } } [ t ] \in \mathbb { R } ^ { N _ { L } [ t ] \times 3 }$ , with $N _ { L } [ t ]$ being the time-varying number of 3D points $( x , y , z )$ in the LiDAR point cloud.
• GPS Data: $\mathbf { X } _ { \mathrm { G P S } } [ t ] \in \mathbb { R } ^ { 2 \times 1 }$ , containing the latitude and longitude coordinates of the UE. Although GPS data incurs uplink latency from UE to the BS, its computational
simplicity compared to other latency-intensive modalities enables synchronization via buffer delay or some other schemes.
We formalize the feature extraction process for each sensing modality, where $\Psi _ { \omega } ( \cdot )$ denotes the modality-specific feature encoder for $\boldsymbol \omega \in \Omega$ . Their specific architecture is shown in Fig. 3. A detailed description is given below:
1) Image Data Feature Extraction: The image feature extraction pipeline processes raw input $\mathbf { X } _ { \mathrm { I m a g e } } [ t ]$ through three stages: standardization, backbone processing, and dimension compression. First, each image is reshaped to $2 2 4 \times 2 2 4 \times 3$ and normalized using ImageNet parameters [16]:
$$
\begin{array} { r } { \mathbf { x } _ { \mathrm { I m a g e } } ^ { \mathrm { N o r m } } = ( \mathrm { R e s h a p e } ( \mathbf { X } _ { \mathrm { I m a g e } } [ t ] ) - \pmb { \mu } _ { \mathrm { I m a g e } } ) \oslash \pmb { \sigma } _ { \mathrm { I m a g e } } , } \end{array}
$$
where Reshape $( \cdot )$ represents image reshaping operation, $\mu _ { \mathrm { I m a g e } }$ and ${ \pmb \sigma } _ { \mathrm { I m a g e } }$ are the mean and standard deviation of the ImageNet dataset, respectively.
The normalized tensor is then fed into a pretrained ResNet18 backbone [17] (with removed classification layers), followed by a learnable fully connected (FC) layer that compresses features to $M$ dimensions:
$$
\widetilde { \mathbf { x } } _ { \mathrm { I m a g e } } [ t ] = \Psi _ { \mathrm { I m a g e } } \left( \mathbf { x } _ { \mathrm { I m a g e } } ^ { \mathrm { N o r m } } \right) \in \mathbb { R } ^ { M } .
$$
2) Radar Data Feature Extraction: The radar feature extraction pipeline begins by processing raw radar data $\mathbf { X } _ { \mathrm { R a d a r } } [ t ]$ through a two-dimensional fast Fourier transform (2D-FFT) applied to each chirp signal along the antenna and fast-time dimensions. This transformation generates a range-angle (RA) map:
$$
\mathbf { X } _ { \mathrm { R a d a r } } ^ { \mathrm { R A } } [ t ] = \sum _ { a = 1 } ^ { A _ { R } } | \mathrm { F F T _ { \mathrm { 2 D } } } \left( \mathbf { X } _ { \mathrm { R a d a r } } [ t ] [ : , : , a ] \right) | \in \mathbb { R } ^ { M _ { F } \times S _ { R } } ,
$$
where $\mathrm { F F T _ { 2 D } ( \cdot ) }$ represents the 2D-FFT operation. The antenna dimension is zero-padded from $M _ { R }$ to $M _ { F }$ $( M _ { F } > M _ { R } )$ to achieve angular oversampling, reducing the effective angular sampling interval and enhancing spectral detail resolution [18]. The resulting RA map is then encoded into a feature vector through a convolutional neural network [19]:
$$
\begin{array} { r } { \tilde { \mathbf { x } } _ { \mathrm { R a d a r } } [ t ] = \Psi _ { \mathrm { R a d a r } } \left( \mathbf { X } _ { \mathrm { R a d a r } } ^ { \mathrm { R A } } [ t ] \right) \in \mathbb { R } ^ { M } . } \end{array}
$$
3) LiDAR Data Feature Extraction: The LiDAR feature extraction pipeline processes raw point cloud data $\mathbf { X } _ { \mathrm { L i D A R } } [ t ]$ through geometric transformation and neural encoding. First, the 3D points are projected onto a $2 5 6 \times 2 5 6 ~ \mathrm { { 2 D } }$ grid to create a histogram representation ${ \bf X } _ { \mathrm { L i D A R } } ^ { \mathrm { H i s t } } [ t ]$ , where point counts per cell are clipped to 5 and normalized to $[ 0 , 1 ]$ for robustness:
$$
\mathbf { X } _ { \mathrm { L i D A R } } ^ { \mathrm { H i s t } } [ t ] = \mathrm { P C } 2 \mathrm { H } \left( \mathbf { X } _ { \mathrm { L i D A R } } [ t ] \right) \in \mathbb { R } ^ { 1 \times 2 5 6 \times 2 5 6 } ,
$$
where $\mathrm { P C } 2 \mathrm { H } ( \cdot )$ denotes the point-cloud-to-histogram transformation. This single-channel feature map is then encoded into an $M$ -dimensional feature vector using a modified ResNet-18 architecture:
$$
\begin{array} { r } { \tilde { \mathbf { x } } _ { \mathrm { L i D A R } } [ t ] = \Psi _ { \mathrm { L i D A R } } \left( \mathbf { X } _ { \mathrm { L i D A R } } ^ { \mathrm { H i s t } } [ t ] \right) \in \mathbb { R } ^ { M } , } \end{array}
$$
4) GPS Data Feature Extraction: The GPS feature extraction process begins by normalizing raw coordinate data $\mathbf { X } _ { \mathrm { G P S } }$ to ensure compatibility with subsequent fusion operations. Using min-max scaling applied across the entire historical dataset XGPS ∈ R2×H, we maintain temporal consistency through global normalization:
$$
\mathbf { X } _ { \mathrm { G P S } } ^ { \mathrm { N o r m } } [ t ] = \frac { \mathbf { X } _ { \mathrm { G P S } } [ t ] - \operatorname* { m i n } ( \mathbf { X } _ { \mathrm { G P S } } ) } { \operatorname* { m a x } ( \mathbf { X } _ { \mathrm { G P S } } ) - \operatorname* { m i n } ( \mathbf { X } _ { \mathrm { G P S } } ) } .
$$
The normalized coordinates are then mapped to an $M$ - dimensional feature space through a multilayer perceptron (MLP) network:
$$
\begin{array} { r } { \tilde { \mathbf { x } } _ { \mathrm { G P S } } [ t ] = \Psi _ { \mathrm { G P S } } \left( \mathbf { X } _ { \mathrm { G P S } } ^ { \mathrm { N o r m } } [ t ] \right) \in \mathbb { R } ^ { M } . } \end{array}
$$
# B. Multimodal Alignment
Multimodal alignment seeks to bridge the semantic discrepancies among heterogeneous sensing modalities by projecting modality-specific features into a shared embedding space, thereby enabling effective cross-modal fusion [20]. In this work, we employ a contrastive learning-based alignment strategy inspired by CLIP [21], which enforces geometric consistency among modalities through cosine similarity constraints. Specifically, the feature vector $\tilde { \mathbf { x } } _ { \omega } [ t ]$ extracted from each modality $\boldsymbol \omega \in \Omega$ at time step $t$ is first $\ell _ { 2 }$ -normalized:
$$
\bar { \mathbf { x } } _ { \omega } [ t ] = \frac { \tilde { \mathbf { x } } _ { \omega } [ t ] } { \| \tilde { \mathbf { x } } _ { \omega } [ t ] \| _ { 2 } } , \quad \forall \omega \in \Omega ,
$$
projecting all features onto the unit hypersphere $\mathbb { S } ^ { M - 1 }$ . This normalization facilitates stable optimization using cosine similarity as the alignment metric. Cross-modal alignment is then enforced by maximizing the pairwise cosine similarity between normalized feature vectors. These similarities are organized into a symmetric matrix $\mathbf { S } [ t ] \in [ - 1 , 1 ] ^ { | \Omega | \times | \Omega | }$ , where each element is computed as
$$
S _ { \omega _ { 1 } , \omega _ { 2 } } [ t ] = \bar { \bf x } _ { \omega _ { 1 } } [ t ] \bar { \bf x } _ { \omega _ { 2 } } ^ { \top } [ t ] , \quad \forall \omega _ { 1 } , \omega _ { 2 } \in \Omega
$$
which quantitatively reflects the semantic consistency between modalities $\omega _ { 1 }$ and $\omega _ { 2 }$ at time $t$ .
By encouraging high cross-modal similarity, this geometric constraint implicitly disentangles modality-invariant semantics from modality-specific noise, thereby enabling robust and generalizable feature fusion—especially under distributional shifts or missing modalities.
# C. Multimodal Feature Fusion
To integrate features from heterogeneous modalities, we design a transformer-based fusion module that captures cross-modal dependencies through self-attention mechanisms. For each time step $t$ , the normalized features $\{ \bar { \mathbf { x } } _ { \omega } [ t ] \} _ { \omega \in \Omega }$ are stacked into a matrix $\begin{array} { r l } { \mathbf { F } [ t ] } & { { } = } \end{array}$ $[ \bar { \mathbf { x } } _ { \mathrm { I m a g e } } ^ { \mathsf { T } } [ \bar { t } ] , \bar { \mathbf { x } } _ { \mathrm { R a d a r } } ^ { \mathsf { T } } [ t ] , \bar { \mathbf { x } } _ { \mathrm { L i D A R } } ^ { \mathsf { T } } [ t ] , \bar { \mathbf { x } } _ { \mathrm { G P S } } ^ { \mathsf { T } } [ t ] ] ^ { \mathsf { T } } \ \in \ \mathbb { R } ^ { M \times 4 }$ , where each row corresponds to a modality. To model inter-modal interactions, we apply a multi-head self-attention operation, where the queries, keys, and values are all set to $\mathbf { F } ^ { \mathsf { T } } [ t ]$ , i.e., $Q ( t ) = K ( t ) = V ( t ) = \mathbf { F } ^ { \mathsf { T } } [ t ]$ :
$$
\begin{array} { r } { \mathbf { A } [ t ] = \mathbf { M } \mathbf { u } \mathbf { l } \mathrm { t i } \mathbf { H } \mathbf { e a d } ( \mathbf { F } ^ { \mathsf { T } } [ t ] , \mathbf { F } ^ { \mathsf { T } } [ t ] , \mathbf { F } ^ { \mathsf { T } } [ t ] ) \in \mathbb { R } ^ { | \Omega | \times M } . } \end{array}
$$
(a) Vision feature encoder. It generates visual features by preprocessing the input image, ResNet-18 feature extraction and multilayer linear transformation with ReLU activation.
LiDAR A 國 → £ 国Point Clouds Histogram Map LiDAR Feature(N×3) (1×256×256) 亚LiDAR (64×1)
(b) LiDAR feature encoder. It converts LiDAR point cloud data into histograms and then extracts LiDAR features through ResNet-18, convolutional layers, pooling, and linear layers.
Radar F品 一 000100 围 U 国 Radar Range-Angle Map 亚Radar eatur (M R×SR ×AR) (
6
4
× 2
5 6
) (64×1)
(c) Radar feature encoder. It processes the raw radar data with Range FFT, Angle FFT, to generate Range-Angle Map, and then extracts the radar features through a series of operations such as convolution, pooling, and linear layer.
GPS Lon. ? 美 TAU ? 00 Lat. GPS Position GPS Feature (2×1) 亚GPS (64×1)
(d) GPS feature encoder. It processes the longitude and latitude information through a simple MLP to extract GPS features.
Fig. 3: Multimodal data encoding module for the $\mathbf { M } ^ { 2 }$ BeamLLM system. This module demonstrates in detail the independent feature encoding process for (a) camera, (b) LiDAR, (c) radar, and (d) GPS sensors, converting the raw sensor data into a unified feature representation for subsequent multimodal data fusion.
Each attention head captures distinct correlation patterns among modalities, and the outputs are concatenated and projected back to the original dimension $M$ . This enables dynamic weighting and context-aware fusion of modalityspecific features. The modality-aware representations are then aggregated across modalities by summing the attended vectors:
$$
\mathbf { z } [ t ] = \mathrm { F F N } \left( \sum _ { \omega \in \Omega } \mathbf { A } [ t ] _ { ( \omega , : ) } \right) \in \mathbb { R } ^ { M } ,
$$
where $\mathrm { F F N } ( \cdot )$ denotes a position-wise feed-forward network (FFN) that refines the fused representation. Finally, the timeseries feature sequence is formed by concatenating the fused embeddings over a historical window of length $H$ :
$$
\mathbf { Z } = [ \mathbf { z } [ - H + 1 ] , \mathbf { z } [ - H + 2 ] , \cdots , \mathbf { z } [ 0 ] ] ^ { \mathsf { T } } \in \mathbb { R } ^ { H \times M } ,
$$
which serves as the input to downstream temporal modeling modules for beam prediction.
# D. Prediction
The fused sequence obtained from the previous steps is fed into a prediction network $\operatorname { P r e d } ( \cdot )$ to make predictions, as
follows:
$$
\hat { \mathbf { P } } = \operatorname { P r e d } ( \mathbf { Z } ) \in \mathbb { R } ^ { T \times M } .
$$
Finally, for each time sample $t$ , we can choose the optimal beam index for prediction:
$$
{ \hat { m } } [ t ] = \arg \operatorname* { m a x } _ { m } { \hat { \mathbf { P } } } [ t , m ] .
$$
# E. Model Structure
Input Projection. Given an input feature sequence $\mathbf { Z }$ , the input embedding module is designed to project $\mathbf { Z }$ into a $d _ { \mathrm { L L M } }$ -dimensional embedding space, effectively treating each time step as a token embedding suitable for LLMs. In other words, this process enables LLMs to “comprehend” the input feature sequence. Prior to embedding, $\mathbf { Z }$ is normalized to have zero mean and unit variance, which stabilizes training and preserves the relative relationships between features. The normalized sequence is then linearly transformed into variablelength embeddings compatible with the LLM input format.
Pretrained LLM and SFT. We employ a pretrained LLM backbone, freezing most of its self-attention and feed-forward layers during fine-tuning to retain the general representations learned from large-scale data. To enable task adaptation, the parameters of the last few layers are unfrozen and updated. This selective fine-tuning strategy balances model stability and plasticity: it mitigates overfitting risks on relatively small datasets while allowing higher layers to specialize in task-specific refinements without degrading the foundational knowledge captured in lower layers. In $\mathbf { M } ^ { 2 }$ BeamLLM, this strategy is applied to beam prediction via SFT, requiring only a small amount of labeled data from the target task. By freezing the majority of layers, our approach efficiently adapts the pretrained LLM to the specific demands of beam prediction in mmWave systems, while significantly reducing the training cost.
Output Projection. The contextualized embeddings output by the LLM are projected back to the original feature dimension $M$ through a learnable linear layer, producing the final predictions aligned with the input feature space.
# F. Learning Phase
The training process consists of two parts: encoder pretraining and beam prediction module fine-tuning. The encoder pre-training simply takes the one-hot encoding vector corresponding to the modal input with the optimal beam index as a supervised learning pair and then performs $M$ classification with the following loss function:
$$
\mathcal { L } _ { \mathrm { e n c } } = - \sum _ { m = 1 } ^ { M } p _ { m } \log ( \sigma _ { \mathrm { S o f t m a x } } ( \hat { p } _ { m } ) ) ,
$$
where $p _ { m } \in \{ 0 , 1 \}$ denotes the ground-truth one-hot encoded optimal beam index, $\hat { p } _ { m }$ represents the model prediction probability for the $m$ -th beam, and $\sigma _ { \mathrm { S o f t m a x } } ( \cdot )$ is the softmax activation function.
For beam prediction module fine-tuning part, the composite loss function comprises two key objectives:
1) Prediction Task Objective: We optimize beam index prediction accuracy through cross-entropy loss defined as:
$$
\mathcal { L } _ { 1 } = - \frac { 1 } { T } \sum _ { t = 1 } ^ { T } \sum _ { m = 1 } ^ { M } p _ { m } [ t ] \log ( \sigma _ { \mathrm { { S o f t m a x } } } ( \hat { p } _ { m } [ t ] ) ) .
$$
2) Multimodal Alignment Objective: Inter-modal feature similarity is enforced via normalized temperature-scaled contrastive loss defined as:
$$
\mathcal { L } _ { 2 } = - \frac { \sum _ { \tau = - H + 1 } ^ { 0 } \sum _ { \omega _ { 1 } , \omega _ { 2 } \in \Omega } p _ { m } [ t ] \log \left( \sigma _ { \mathrm { S o f t m a x } } \left( \frac { S _ { \omega _ { 1 } , \omega _ { 2 } } [ t ] } { \alpha } \right) \right) } { H \cdot | \Omega | ( | \Omega | - 1 ) } ,
$$
where temperature parameter $\alpha ~ > ~ 0$ modulates similarity concentration: smaller $\alpha$ sharpens hard negative mining while larger $\alpha$ promotes softer distribution.
3) Composite Optimization Objective: The unified training loss is defined by combining both objectives with weight $\lambda \in$ $[ 0 , + \infty )$ as follows:
$$
\begin{array} { r } { \mathcal { L } = \mathcal { L } _ { 1 } + \lambda \mathcal { L } _ { 2 } . } \end{array}
$$
TABLE I: Default Parameter Settings.
# IV. SIMULATION RESULTS
# A. Experimental Settings
1) Dataset Processing: In this study, we employ the DeepSense 6G dataset [22] to simulate a V2I mmWave communication environment. In this configuration, a stationary BS is equipped with an array of sensing technologies, including an RGB camera, radar, and LiDAR sensor, which collectively interact with a mobile UE functioning as a mmWave transmitter. The UE incorporates a GPS-RTK system that continuously streams meter-accurate positional data to the BS.
The dataset is partitioned into training $( 7 0 \% , 2 , 1 3 8$ samples) and testing $3 0 \%$ , 917 samples) sets, where each sample represents a complete vehicle pass-by event comprising synchronized multimodal sensing data sequences and beam index sequences. We decompose each sequence into input-output pairs using a 13-frame sliding window, configuring two prediction tasks: 1) standard prediction with $H \ = \ 8$ historical observations and $T \ = \ 5$ future beams, and 2) few-shot prediction with $H \ = \ 3$ and $T ~ = ~ 1 0$ , where the model processes multimodal sensing data inputs $\{ \tilde { \mathbf { x } } _ { \mathrm { I m a g e } } [ t ] , \tilde { \mathbf { x } } _ { \mathrm { L i D A R } } [ t ] , \tilde { \mathbf { x } } _ { \mathrm { R a d a r } } [ t ] , \tilde { \mathbf { x } } _ { \mathrm { G P S } } [ t ] \} _ { \tau = - H + 1 } ^ { 0 }$ while learning illumination-robust environment-beam correlations.
2) Baselines: The default LLM backbone used in $\mathbf { M } ^ { 2 }$ BeamLLM is the distilled GPT-2 model [23]. As a longstanding benchmark and representative of encoder-based architectures, BERT is selected as the primary comparison baseline.1 We compare our approach with several time-series forecasting models, including RNN-based model (GRU [24], LSTM [25]), linear model (NLinear [26]), and transformerbased model (Informer [27]).
3) Implementation Settings: Our experiments are conducted on a single NVIDIA A100-40 GB GPU via Google Colab, implemented in PyTorch. We employ the Adam optimizer [28] with an initial learning rate of $1 0 ^ { - 3 }$ , which decays by a factor of 0.5 every 5 epochs, alongside a batch size of 16 and a training duration over 30 epochs. The remaining simulation parameters are listed in Table I.
4) Performance Metrics:
• Top- $K$ Accuracy: It measures whether the true label is included among the top- $K$ predicted beamforming vectors with the highest probabilities, defined as:
$$
\mathrm { T o p - } K \ \mathrm { A c c u r a c y } = \frac { 1 } { N _ { \mathrm { T e s t } } } \sum _ { n = 1 } ^ { N _ { \mathrm { T e s t } } } \mathbb { 1 } \{ m _ { n } \in \mathcal { Q } _ { K } \} ,
$$
where $N _ { \mathrm { T e s t } }$ represents the total number of samples in the test set, $m _ { n }$ denotes the index of the ground truth optimal beam for the $n$ -th sample, and $\mathcal { Q } _ { K }$ is the set of indices for the top $K$ predicted beams, sorted by the element values in $\hat { \mathbf { P } }$ for each time sample.
• Distance-Based Accuracy (DBA) Score: It measures the accuracy of the distance between the predicted beam and the ground truth beam. For this, we utilize the top- $K$ predicted beams. The DBA-Score is defined as
$$
{ \mathrm { D B A - S c o r e } } = { \frac { 1 } { K } } \sum _ { k = 1 } ^ { K } Y _ { k } ,
$$
where
$$
Y _ { k } = 1 - \frac { 1 } { N _ { \mathrm { T e s t } } } \sum _ { n - 1 } ^ { N _ { \mathrm { T e s t } } } \operatorname* { m i n } _ { 1 \leq k ^ { \prime } \leq k } \left[ \operatorname* { m i n } \left( \frac { \left| m _ { n } - \hat { m } _ { n , \prime , k } \right| } { \Delta } , 1 \right) \right] .
$$
Here, $\hat { m } _ { n , k ^ { \prime } }$ represents the prediction for the $k$ -th mostlikely beam index of sample $n$ , and $\Delta$ is a normalization factor. Unlike traditional top- $K$ accuracy that focus on whether a prediction hits the target, it introduces a “precision-aware” evaluation criterion that allows for distance errors. If the prediction is close but not entirely correct, it can still receive a favorable score.
To facilitate presentation, the two performance metrics in the simulation results are averaged over all predicted time sequences.
# B. Dataset Analysis and Visualization
In this section, we present an analysis and visualization of the dataset used. For visual and GPS data, we visualize multiple samples due to the significant variation between different instances. In contrast, Radar and LiDAR data exhibit less variation across samples; therefore, we provide a single sample visualization as a representative example.
Fig. 4 shows the frequency with which each beam index was optimal in the dataset in Scenario 32. We observed that optimal beam indices in the range of 10 to 30 occur with higher frequency, with a particularly prominent peak around index $m ^ { * } = 1 7$ .
Fig. 5 illustrates the relative received power distribution across various beam indices (20, 40, 60) for the first three samples, highlighting how peak power varies with different beam selections. Note that due to the complexity of absolute power calibration, the DeepSense 6G dataset provides dimensionless relative received signal power [22]. From the figure, it can be seen that as the optimal beam index increases, the relative received power shows an overall increasing trend. With a beam index of 20, the peak of the relative received power is around 0.25; with a beam index of 40, the peak of the relative received power improves to around 0.4; and when the beam index is 60, the relative received power reaches the highest level of around 0.45. This implies that the distance between transceivers in the dataset may be smaller when the beam index is larger, leading to an increase in received power.
Fig. 4: Optimal beam index frequency distribution.
Fig. 5: Comparison of power profiles for the first 3 samples associated with optimal beams 20, 40, and 60, respectively.
Fig. 6 shows the first three samples when the optimal beam is 20, 40, and 60, respectively. Notice that the UE is the gray car with the sensor at the top, and the optimal beam index gets progressively larger as it travels from the left side of the screen over to the right side. For example, in samples 540 and 163, the UE locations are nearly identical, but the weather conditions differ. As shown in Fig. 5, this leads to noticeable differences in the received power. From this we can see that the weather condition affects the received power.
Fig. 7 shows two subplots of samlple 86: (a) the RA map and (b) the range-velocity (RV) map, both of which indicate signal energy in terms of color intensity. The radar system successfully detected multiple objects within the environment. A significant portion of these detects objects appear to be either stationary, functioning as environmental scatterers, or moving very slowly, such as pedestrians. This is clearly substantiated (a) Vision data visualization for the first 3 samples associated with optimal beam 20.
(b) Vision data visualization for the first 3 samples associated with optimal beam 40.
(c) Vision data visualization for the first 3 samples associated with optimal beam 60.
Fig. 8: Visualization of LiDAR point cloud data of sample 86.
Fig. 6: Vision data visualization for the first 3 samples associated with optimal beams 20, 40, and 60, respectively.
Fig. 7: Radar data visualization of sample 86.
Fig. 9: BS and UE GPS location map for the first 3 samples associated with optimal beams 20, 40, and 60, respectively.
by their distinct positions in the angle-distance spectrum and their strong concentration along the 0 velocity line in the RV map. Furthermore, the radar system is also capable of identifying distinct moving targets at specific distances. For instance, at a range of approximately 7 meters, the clear angular and velocity information collectively indicates the presence of a moving target that the radar system is actively tracking.
Fig. 8 represents the point cloud data of objects or environments in 3D space for sample 86, which, although sparse, reveals the presence of one or more major structures, such as trees, buildings, driveways, etc.
Fig. 9 is a geolocation map to show the GPS location of the top three samples with optimal beam indexes of 20, 40, and 60. Combined with the analysis from Fig. 6, we find that the positional and visual information basically remain aligned.
# C. Standard Prediction
Fig. 10 presents a comparative analysis of Top-1, Top-2, and Top-3 accuracy for standard beam predictions across all evaluated models. As expected, increasing the value of $K$ leads to improved accuracy for all models, as the correct label is more likely to appear within the top- $K$ candidates. Despite the large number of parameters, we only activated a small fraction for SFT, so we attribute the high performance to the successful knowledge activation of LLM. Notably, the proposed GPT-2-based $\mathbf { M } ^ { 2 }$ BeamLLM achieves the highest Top-1 accuracy of ${ \bf 6 8 . 9 \% }$ , surpassing the second-best model, BERT-based $\mathbf { M } ^ { 2 }$ BeamLLM $( 5 5 . 0 \% )$ , by a substantial $1 3 . 9 \%$ . This significant margin highlights the superior suitability of the transformer decoder architecture (e.g., autoregressive masking in GPT-2) for future beam index prediction, compared to the bidirectional encoder architecture used in BERT. Traditional RNN-based models show relatively weak performance at $K = 1$ , but exhibit notable gains at higher $K$ values. This suggests that while RNNs may struggle to rank the most likely beams correctly, they still offer reasonable overall coverage among top predictions. The Informer model approaches GPT2-based performance at $K = 3$ , illustrating the strength of its attention mechanism in capturing long-range dependencies. Meanwhile, the simple NLinear model performs comparably to RNNs, reflecting the surprising effectiveness of linear models when trained with sufficient data.
Fig. 10: Average Top- $K$ accuracy performance of the proposed method comparing to several baselines in the standard prediction task.
Fig. 11 presents the DBA-score as a complementary metric to further differentiate model performance and their temporal alignment behavior under varying tolerance thresholds. As observed, all models demonstrate a progressive improvement in performance as the top- $K$ value increases (from 1 to 5) and the tolerance threshold $\Delta$ is relaxed (from 1 to 2), indicating greater flexibility in acceptable prediction errors. However, the extent of this improvement is highly architecturedependent. The proposed GPT-2-based $\mathbf { M } ^ { 2 }$ BeamLLM consistently achieves the highest DBA-scores across all settings, further confirming its robust capability in both beam prediction accuracy. As the tolerance is relaxed to $\Delta = 2$ , the performance gap between models becomes noticeably smaller. This suggests that a more permissive evaluation criterion reduces the sensitivity to exact temporal alignment, thereby diminishing the relative advantage of more sophisticated models under looser error constraints.
# D. Few-Shot Prediction
In Figs. 12 and 13, we present the top- $K$ accuracy and DBA-score performance for the few-shot prediction task. It is important to note that at this point $H < T$ , we need to pad the input of $\mathbf { M } ^ { 2 }$ BeamLLM with zeros to extend it to $T$ in the time dimension. Despite the overall performance decline, $\mathbf { M } ^ { 2 }$ BeamLLM continues to exhibit superior performance, maintaining its leading position among the evaluated models. However, the Top-2 and Top-3 accuracy metrics remain largely unaffected.
Fig. 11: Average DBA-score performance of the proposed method comparing to several baselines in the standard prediction task.
Fig. 12: Average Top- $K$ accuracy performance of the proposed method comparing to several baselines in the few-shot prediction task.
# E. Comparative Analysis and Component Ablation Study
In this section, we conduct a series of ablation studies to validate the effectiveness and superiority of the proposed $\mathbf { M } ^ { 2 }$ BeamLLM framework. Specifically, we design experiments to investigate two key aspects: (1) the impact of different modality combinations on beam prediction performance, and (2) the influence of the number of frozen layers in the pretrained LLM during supervised fine-tuning.
1) Comparative Analysis of Different Modalities: We focus on evaluating the performance impact of different combinations of sensing modalities. This analysis investigates how varying the number of input modalities influences beam prediction accuracy. The training and evaluation results are presented in Figs. 14, 15, and 16, showcasing performance across three single-modality, two dual-modality, one tri-modality, and one full-modality configurations. A clear trend of performance enhancement through multimodal fusion emerges, where an increased number of modalities generally leads to improved beam prediction accuracy. For instance, in terms of Top1 accuracy, the combination of vision and radar modalities achieves an accuracy of $5 1 . 1 \%$ , representing an improvement of $2 . 6 \%$ and $1 3 . 1 \%$ over using vision-only and radaronly modalities, respectively. Further incorporating the LiDAR modality boosts the accuracy by an additional $1 0 . 7 \%$ . Building upon this improvement, the inclusion of GPS data yields a further $7 . 1 \%$ gain in accuracy. These results highlight the complementary benefits of multimodal sensing and validate the effectiveness of the proposed $\mathbf { M } ^ { 2 }$ BeamLLM framework in leveraging diverse sensing inputs for robust and accurate beam prediction.
Fig. 13: Average DBA-score performance of the proposed method comparing to several baselines in the few-shot prediction task.
Fig. 15: Average Top- $K$ accuracy performance of different combinations of sensing data.
Fig. 14: Top-1 training accuracy performance comparison of different combinations of sensing data.
Fig. 16: Average DBA-score performance of different combinations of sensing data.
2) Comparative Analysis of the Performance of Different Frozen Pre-training Layers: In this section, we explore the impact of unfreezing different numbers of layers in the LLM backbone model on training and performance.
We observe that as more transformer layers are unfrozen, the number of trainable parameters increases from $6 2 . 3 { \mathrm { ~ M ~ } }$ (0 layers) to $\mathrm { 1 4 7 . 3 M }$ (12 layers), with the final validation loss decreasing from 1.52 to 0.21 after 30 epochs. Top- $K$ accuracies and DBA-scores consistently improve with deeper fine-tuning. For instance, Top-1 accuracy increases from $4 7 . 4 \%$ (0 layers) to ${ \bf 8 5 . 7 \% }$ (12 layers), while DBA-score with Top-1, $\Delta = 1$ reaches 0.94 at full fine-tuning. These results demonstrate that deeper fine-tuning not only improves convergence but also enhances semantic and temporal alignment of predictions.
Furthermore, even partial fine-tuning provides substantial gains: unfreezing just 4 layers results in significant improvements across all metrics (e.g., Top-1 accuracy from $4 7 . 4 \%$ $ 7 2 . 4 \%$ , DBA-score with Top-1, $\Delta \mathit { \Psi } = 1 \mathit { \Psi }$ from $0 . 4 7 \ $ 0.72). Deeper tuning beyond 8 layers continues to yield improvements, though with diminishing returns relative to computational cost.
If computational resources allow, we recommend full finetuning (12 layers) to achieve optimal performance. For resource-constrained scenarios (e.g., limited GPU memory or training time), unfreezing 6–8 layers strikes a highly costeffective balance, achieving Top-3 accuracy around $9 0 \%$ , with significantly reduced training overhead.
Fig. 17: Distribution of trainable versus non-trainable parameters across unfrozen layers.
Fig. 18: Effect of the number of frozen layers on training loss.
Fig. 19: Comparison of average Top- $K$ accuracy for different numbers of frozen layers.
Fig. 20: Comparison of average DBA-score accuracy for different numbers of frozen layers.
# F. Complexity Analysis
To evaluate the feasibility of deploying the proposed method in practical scenarios, we conduct a comparative analysis of the model’s number of parameters and inference cost against various baseline approaches. This assessment provides insights into the computational demands and potential deployment challenges associated with each model. The results of this comparison are presented in Table II.
Overall, the inference time per sample for all models is shorter than the data sampling and synchronization intervals.2 Notably, due to the inference acceleration capabilities inherent in GPT and BERT models, $\mathbf { M } ^ { 2 }$ BeamLLM exhibits significantly reduced inference time compared to models like Informer and NLinear. Consequently, the proposed $\mathbf { M } ^ { 2 }$ BeamLLM demonstrates potential for real-time beam prediction services. | This paper introduces a novel neural network framework called M2BeamLLM for
beam prediction in millimeter-wave (mmWave) massive multi-input multi-output
(mMIMO) communication systems. M2BeamLLM integrates multi-modal sensor data,
including images, radar, LiDAR, and GPS, leveraging the powerful reasoning
capabilities of large language models (LLMs) such as GPT-2 for beam prediction.
By combining sensing data encoding, multimodal alignment and fusion, and
supervised fine-tuning (SFT), M2BeamLLM achieves significantly higher beam
prediction accuracy and robustness, demonstrably outperforming traditional deep
learning (DL) models in both standard and few-shot scenarios. Furthermore, its
prediction performance consistently improves with increased diversity in
sensing modalities. Our study provides an efficient and intelligent beam
prediction solution for vehicle-to-infrastructure (V2I) mmWave communication
systems. | [
"cs.CL"
] |
# I. INTRODUCTION
Code clone detection is a fundamental task in software engineering, aimed at identifying duplicated or highly similar code fragments within a software repository [1]. Code clones can arise due to several development practices such as copypasting, reusing code templates, or implementing similar functionalities across different projects. While code duplication can improve development speed in the short term, it often leads to maintainability issues, increased technical debt, and security vulnerabilities [2]. Detecting and managing code clones is crucial for ensuring software quality, facilitating refactoring, and preventing unintended inconsistencies that may introduce
bugs [1].
A key aspect of code clone detection is the representation of source code. Various code representations have been proposed to capture the syntactic and semantic features of programs, enabling more effective analysis. Among these representations, the Abstract Syntax Tree (AST) is one of the most widely used due to its ability to capture the syntactic structure of programs while being easy to extract [3]. ASTs abstract away surface-level variations, allowing models to focus on structural similarities rather than specific token sequences. However, despite its advantages, research has shown that AST-based representations primarily encode syntactic information and often fail to capture deeper semantic relationships in code [4].
To address these limitations, many studies have attempted to enhance AST-based graph structures by incorporating additional control and data flow information, leveraging Graph Neural Networks (GNNs) for code clone detection. A pioneering study in this direction was conducted by Wang et al. [5], who introduced a flow-augmentation technique that integrates ASTs with control and data flow information, thereby improving the detection of semantic code clones. Subsequent research has built upon this idea, enriching AST representations with handcrafted control and data flow information to combine both syntactic and semantic aspects of code [4], [6]–[9]. With the advancement of cross-language code clone detection, this approach has also been extended to that domain [10], [11].
Despite extensive research in this field, the impact of augmenting AST-based representations with control and data flow information has not been systematically examined. In particular, ablation studies assessing the contribution of control and data flow integration within AST structures remain largely unexplored. Furthermore, the additional computational overhead introduced by incorporating control and data flow information—an essential consideration in the development of real-world applications—has received limited attention in existing research.
In this study, we conduct an empirical analysis to evaluate the effectiveness of various AST-based hybrid graph representations in GNN-based code clone detection. We provide a detailed investigation into how different edge representations in AST-based graphs impact both detection accuracy and computational efficiency, offering valuable insights to the opensource research community. Specifically, our research aims to answer the following questions:
RQ1: Which AST-based hybrid graph representation and GNN architecture combination is most effective for code clone detection? This research question aims to evaluate the impact of different AST-based hybrid graph structures (e.g., $\mathrm { A S T } + \mathrm { C F G }$ , AST $^ +$ DFG, $\mathbf { A S T } + \mathbf { F A } ,$ ) on code clone detection performance. We systematically compare these representations across multiple GNN architectures, including GCN, GAT, GGNN, and GMN. The analysis focuses on assessing their effectiveness in terms of accuracy, recall, and precision for detecting different types of code clones.
RQ2: What is the computational overhead of different AST-based hybrid representations? This question investigates the trade-offs between detection performance and computational cost when incorporating additional structural information into AST-based hybrid graphs. We analyze key efficiency metrics such as memory consumption, graph density, generation time, and inference time to assess the feasibility of employing enriched representations in real-world applications.
In summary, our main contributions are as follows:
We conduct a systematic evaluation of various ASTbased hybrid graph structures and their effectiveness across multiple GNN architectures, including GCN, GAT, GGNN, and GMN, for code clone detection. Our analysis provides valuable insights into the impact of different hybrid representations on detection accuracy and the comparative performance of different GNN models.
• We analyze the computational overhead associated with different AST-based hybrid representations, examining factors such as memory consumption, graph density, generation time, and inference time. This evaluation provides practical insights into the trade-offs between detection performance and computational efficiency in real-world applications.
• We present an open-source resource encompassing dataset allocation, graph construction methodologies, hybrid graph combinations, and model implementations. This resource facilitates further research by enabling the exploration of diverse hybrid graph representations and the development of more efficient GNN-based approaches for code clone detection. The resource is publicly available at https://github.com/ZixianReid/semantic graph code code clone.
This paper is organized as follows: Section II introduces the necessary background concepts. Section III provides an overview of related work. Section IV details our experimental setup. Section V presents the experimental results. Section VI discusses potential threats to the validity of our study. Section VII offers a discussion of key insights and implications. Section VIII concludes this study.
# II. BACKGROUND
In this section, we present the background necessary for the understanding of this work in three parts: Code Clone Detection, Source Code Representations, and Graph Neural Networks.
# A. Code Clone Detection
Code clone detection aims to identify similar or duplicate code fragments. Code clones are typically categorized into four types [12]:
• Type-1: Code fragments that are identical except for superficial differences such as formatting and comments. • Type-2: Code fragments that exhibit minor modifications, such as renamed variables, altered data types, or changes in literals, while maintaining structural similarity. Type-3: Code fragments that have undergone significant structural modifications, including statement insertions, deletions, or reordering, yet still preserve core functionality. Type-4: Code fragments that achieve the same functionality through different implementations.
This study primarily focuses on Type-4 code clones, which are more challenging to detect due to their structural differences
# B. Source Code Representations
In this study, we explore four distinct source code representations that have been widely utilized for code clone detection: Abstract Syntax Tree (AST), Control Flow Graph (CFG), Flow-Augmented Abstract Syntax Tree (FA-AST), and Data Flow Graph (DFG).
Abstract Syntax Tree (AST): AST is the syntactical graph-based representation of code fragments, which is one of the most popular representations in code analysis. They abstract away surface-level variations, allowing models to focus on structural similarities rather than specific token sequences.
Control Flow Graph (CFG): CFGs capture the execution flow of a program, representing how different statements and expressions interact based on control structures. CFGs are particularly useful for identifying logical similarities between code fragments.
Data Flow Graph (DFG): DFGs model the dependencies between variables and expressions by tracking how data propagates through the program. They are beneficial in detecting clones with similar computational logic but different syntactic structures.
Flow-Augmented Abstract Syntax Tree (FA-AST): Wang et al. [5] constructed FA-AST by augmenting AST with explicit control and data flow edges to better capture semantic information. While this modification improves the AST’s ability to convey program behavior, it also introduces increased computational and structural complexity. For consistency with our hybridization notation, we refer to this representation in our experiments as FA.
In this study, we fuse these code representations in different combinations to evaluate their impact on code clone detection. By constructing various hybrid representations (e.g., $\mathrm { A S T } + \mathrm { C F G }$ , $\mathrm { A S T } + \mathrm { D F G }$ , and $\mathrm { A S T } + \mathrm { F A } + \mathrm { C F G } + \mathrm { D F G }$ , we aim to analyze the role of each representation in capturing syntactic and semantic similarities while also investigating the trade-offs between detection accuracy and computational complexity.
# C. Graph Neural Networks (GNNs)
Unlike Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), GNNs exploit the underlying graph topology to learn meaningful node or graph-level representations. A key principle of most GNN architectures is the message-passing paradigm, where node embeddings are iteratively updated by aggregating and transforming information from their neighbors. All GNNs used in this study follow this message-passing framework.
• Graph Convolutional Network (GCN): GCN [13] is a fundamental graph-based model that applies convolutional operations to aggregate information from neighboring nodes, enabling efficient representation learning on graph-structured data. It captures structural dependencies within a graph by iteratively updating node representations based on their local neighborhoods.
Graph Attention Network (GAT): GAT [14] enhances graph representation learning by introducing attention mechanisms that assign different importance weights to neighboring nodes during message passing. This adaptive weighting allows the model to focus on the most relevant structural elements
• Graph Gated Neural Network (GGNN): GGNN [15] extends traditional GNNs by incorporating gated recurrent units (GRUs) to model long-range dependencies in graph structures. This approach enables better information propagation across large and complex graphs, making it useful for applications that require deeper contextual understanding
• Graph Matching Network (GMN): GMN [16] is designed for comparing graph structures by learning a similarity function between node representations. It is particularly effective in tasks that require assessing relational patterns, such as similarity learning and structural alignment, where graph-level relationships must be preserved.
# III. RELATED WORK
In this section, we present the work related to our study of using AST-based hybrid graph representations with GNNs and other works leveraging alternative representations and machine learning techniques for code clone detection.
AST-based Hybrid Graph Representations are widely utilized in code clone detection as they integrate multiple code representations to provide a comprehensive analysis of both the syntactic and semantic aspects of code fragments. One of the pioneering works in this domain is FA-AST by
Wang et al. [5], which employs a Graph Neural Network (GNN) on flow-augmented abstract syntax trees to effectively capture syntactic and semantic similarities in code fragments. In FA-AST, flow edges such as nextUse, nextToken, and nextSibling are directly embedded into the AST to enrich it with additional semantic signals. However, their representation is tightly coupled and fixed within the AST structure.
Beyond FA-AST, a common approach to enhancing ASTbased representations involves the integration of the AST, CFG, and DFG. This hybrid strategy facilitates a more comprehensive analysis of program functionalities by capturing syntactical structures through the AST, control dependencies via the CFG, and data dependencies using the DFG. Various studies have employed such hybrid representations in both single-language and cross-language code clone detection, leveraging different GNN architectures. For instance, Zhao et al. [8] integrate the AST and CFG with hierarchical dependencies, employing GAT to improve detection accuracy. Similarly, Fang et al. [6] and $\mathrm { { X u } }$ et al. [9] have adopted this fusion methodology in their studies.
Some research has explored more comprehensive combinations of AST, CFG, and DFG. For example, Yuan et al. [17] introduce an intermediate code-based graph representation that integrates these three components, thereby enhancing the identification of functional code clones. Additionally, Liu et al. [4] propose TAILOR, which incorporates AST, CFG, and DFG to improve the detection of functionally similar code fragments. Similar hybrid graph representations have also been extended to cross-language code clone detection, as demonstrated by Mehrotra et al. [10] and Swilam et al. [11].
While these studies highlight the significance of hybrid graph representations as an innovation in clone detection, their specific impact remains insufficiently explored. A major challenge in this research area is the variation in experimental settings across different studies, including the use of distinct GNN architectures, datasets, and embedding techniques. Consequently, the direct contribution of hybrid representations to code clone detection remains obscured despite their widespread adoption. Moreover, when evaluating the effectiveness of code representations, computational overhead is critical in constructing efficient code clone detection tools. However, within the scope of existing research, only a limited number of studies have assessed the computational performance of ASTbased approaches from a comprehensive perspective [4], [6]. Prior work has primarily focused on inference and training time, often comparing proposed methods against other code clone detection tools. However, key aspects in AST-based hybrid representations, such as extra computational overhead and storage requirements, have been largely overlooked despite their significance in real-world applications. Addressing these gaps is crucial for advancing more efficient and scalable code clone detection methods.
Evaluation of Code Representation in Code Clone Detection is crucial for understanding the effectiveness of different code structures in capturing similarities. Most empirical studies in this domain have been tool-based, focusing on evaluating the performance of specific clone detection tools [12], [12], [18]–[20]. Recent research has explored the impact of different code representations on clone detection performance. Wang et al. [21] reproduce 12 clone detection algorithms categorized into text-based, token-based, tree-based, and graph-based approaches, revealing that token- and AST-based methods excel at detecting simple clones but struggle with semantic ones. In contrast, CFG- and PDG-based approaches achieve higher recall for complex clones but incur greater computational costs. Zubkov et al. [22] evaluated contrastive learning methods for code clone detection, comparing SimCLR, SwAV, and Moco across text, AST, and graph-based representations. Their findings show that graph-based models outperform others, with SimCLR and SwAV achieving the best results, while Moco demonstrates robustness. However, no studies focus on the impact of AST-based representations on code clone detection. Moreover, while some of these studies analyze the performance of different code representations, they primarily focus on reproducing existing algorithms where code representations are applied. However, other factors, such as network architecture, training strategies, and hyperparameter selection, introduce biases that are often overlooked. To address these gaps, this study systematically investigates the impact of different AST-based representations across various network architectures. By implementing diverse AST-based representations under identical experimental conditions, we aim to isolate the effects of code representation itself, ensuring a fair evaluation of its influence on code clone detection performance.
Fig. 1: Methodology Employed in Our Study.
# IV. METHODOLOGY
This section details the evaluation process used in this study. A visual representation of the methodology is provided in Figure 1.
# A. Problem Formulation
The code clone detection problem is defined as follows: Given two code fragments, $C _ { i }$ and $C _ { j }$ , the goal is to associate them with a label $y _ { i j }$ that determines whether they are clones:
• $y _ { i j } = 1$ indicates that $C _ { i }$ and $C _ { j }$ are clones, • $y _ { i j } = 0$ indicates that they are not clones.
Let $s _ { i j } ~ \in ~ [ 0 , 1 ]$ represent a similarity score between the given pair of code fragments:
$s _ { i j } = 1$ indicates perfect similarity (i.e., an exact clone), $s _ { i j } ~ = ~ 0$ indicates complete dissimilarity (i.e., a nonclone).
To assess classification performance, we binarized the predicted similarity score $s _ { i j }$ using a fixed classification threshold $\sigma$ , following standard practices in the literature. Formally:
$$
y _ { i j } = { \left\{ \begin{array} { l l } { 1 , } & { { \mathrm { i f ~ } } s _ { i j } > \sigma { \mathrm { ~ ( c l o n e ~ p a i r ) } } } \\ { 0 , } & { { \mathrm { o t h e r w i s e ~ ( n o n - c l o n e ~ p a i r ) } } } \end{array} \right. }
$$
This choice aligns with prior work such as Wang et al. [5] and Zhang et al. [23].
# B. Dataset Selection and Filtering
We use BigCloneBench (BCB) [24], one of the largest and most widely used benchmarks for code clone detection. Specifically, we adopt the balanced and filtered version introduced by Wei and Li [25], which excludes unlabeled code fragments (i.e., those not explicitly tagged as true or false clone pairs). However, this version lacks clone type labels and similarity scores. To recover this information, we merge it with the original BCB by aligning code fragment IDs, thereby retaining the benefits of a balanced dataset while restoring rich annotation metadata (e.g., clone type and similarity scores). Table I summarizes the dataset statistics: the 8,876 code fragments were paired to form labeled positive and negative clone pairs used across training, validation, and test sets.
TABLE I: Clone Type Demographics in BigCloneBench.
# C. Source code Representations
Abstract Syntax Trees (ASTs) serve as the fundamental representation in this study. All additional representations, including Control Flow Graphs (CFG), Data Flow Graphs (DFG), and Flow-Augmented ASTs $( \mathrm { A S T } + \mathrm { F A } )$ ), are constructed based on ASTs. To extract ASTs from Java programs, we utilize the Python package Javalang. $\mathrm { A S T } + \mathrm { F A }$ representations are constructed following the methodology outlined in the original FA-AST paper by Wang et al. [5].
For CFG and DFG construction, we adopt a methodology similar to prior studies that leverage AST-based hybrid representations [4], [10]. The construction process consists of two main steps:
AST Construction: We first extract the AST structure using the Javalang library, including node and edge information.
Dependency Identification and Graph Augmentation: Based on the AST, we identify two types of dependencies: – Control Dependencies: We traverse the AST to locate control structures (e.g., if, while, for, return) and generate control flow edges that connect nodes based on the execution order of statements. – Data Dependencies: We identify variable definitions and subsequent uses within the same function. A backward-tracking algorithm is applied from each variable use to find the nearest dominating definition along the AST traversal path. If such a definition is found, we add a directed edge from the defining node to the usage node, forming the DFG.
The resulting control and data flow relationships are incorporated into the original AST structure as additional directed edges with distinct edge types.
Figure 3 presents a visual representation of the constructed model for a sample code fragment.
# D. Network Configuration
We implement all GNNs with PyTorch [26] and its extension library Pytorch Geometric [27]. For a fair comparison of various GNNs, we maintain a consistent architectural configuration. The overall architecture consists of an embedding layer, a graph propagation layer, a global pooling mechanism, and a classification head.
1) Input Representation and Embedding: We employ an embedding layer to transform node representations into a continuous vector space. Specifically, we employ the embedding function from PyTorch [26], which implements a lookup table that stores embeddings for a fixed vocabulary size and dimensionality. Similarly, edge attributes are also embedded using a separate embedding layer, where each edge type is mapped to a corresponding feature vector.
2) Graph Propagation Layers: We utilize three standard GNN propagation layers from the PyTorch Geometric library to implement propagation layer of GCN, GAT and GGNN, whereas we implement the propagation layer of GMN in the work of Wang et al. [5].
3) Graph Pooling: We apply global attention pooling to obtain a fixed-size graph representation.
4) Classification Head: For GCN, GAT, and GGNN, the final node embeddings of the two graphs are concatenated before being processed through a fully connected feed-forward network. This approach merges the graph representations into a shared feature space before computing a similarity score. For GMN, the pooled graph representations are directly compared to determine their similarity.
The diagrammatic sketch of GNNs architectures is shown in Figure 2.
# E. Evaluation Metrics
We use multiple evaluation metrics to assess the effectiveness of different AST-based hybrid graph representations in code clone detection. These metrics include Precision, Recall, and F1-score.
To complement accuracy-based evaluation, we report four additional metrics to assess the efficiency and structural properties of each graph representation: Generation Cost, Storage Cost, Average Graph Density, and Inference Time. Generation Cost refers to the time required to construct the graph representations from raw code in the BigCloneBench dataset, which consists of 8, 876 code fragments.
Storage Cost represents the total memory footprint of the generated graphs for all code fragments in BigCloneBench.
Average Graph Density quantifies the overall connectivity of graphs in the dataset by averaging the density of individual graphs:
$$
{ \mathrm { A v e r a g e ~ G r a p h ~ D e n s i t y } } = { \frac { 1 } { N } } \sum _ { i = 1 } ^ { N } { \frac { | E _ { i } | } { | V _ { i } | ( | V _ { i } | - 1 ) } }
$$
# where:
$N$ is the total number of graphs in the dataset, : $\left| E _ { i } \right|$ is the number of edges in the $i$ -th graph, : $| V _ { i } |$ is the number of nodes in the $i$ -th graph.
A higher average graph density suggests a more interconnected structure, which may enhance representational power but also increase computational overhead.
Fig. 2: Code Clone Detection Pipeline with the Considered GNNs: GCN, GAT, and GGNN (Top Figure); GMN (Bottom Figure).
Inference Time evaluates the efficiency of utilizing these graph representations for code clone detection. In this study, we measure the total time required for GMN to process the test set, which contains 422,780 code fragment pairs.
# F. Experimental Settings
We implement the models using PyTorch [26] and PyTorch Geometric [27]. All experiments are conducted on a machine equipped with an Intel i9-13900K, 32GB RAM, and an NVIDIA RTX A4000 WITH 16GB memory. The models are trained using the Adam optimizer with an initial learning rate of 0.0005 and a weight decay of $1 \times 1 0 ^ { - 4 }$ . To dynamically adjust the learning rate, we apply a learning rate scheduler that reduces the learning rate by a factor of 0.5 if the validation loss does not improve for two consecutive epochs, with a lower bound of $1 \times 1 0 ^ { - 6 }$ . We train the model for 20 epochs with a batch size of 32, optimizing using Mean Squared Error (MSE) Loss. Each input consists of two code fragments represented as AST-based hybrid graphs, which are embedded into a 100- dimensional space before being processed by a 4-layer GNN. The model supports multiple architectures, including GCN, GAT, GGNN, and GMN, with each variant propagating node embeddings differently. The dataset is divided into training $( 8 0 \% )$ , validation $( 1 0 \% )$ , and testing $( 1 0 \% )$ sets.
# V. EXPERIMENTAL RESULTS AND ANALYSIS
In this section, we aim to answer the two research questions defined above by describing and analyzing the results of our
experimental study.
# A. RQ1: Which AST-based hybrid graph representation and GNN architecture combination is most effective for code clone detection?
We will attempt to answer this question in two phases: one which focuses on the most effective AST-based hybrid graph representation and another one that focuses on the most effective GNN model architecture.
1) RQ1.1: Which AST-based hybrid graph representation is most effective in detecting code clones?
To assess the effectiveness of different AST-based hybrid graph representations in detecting code clones, we conducted an empirical evaluation across four GNNs architectures: GMN, GCN, GAT and GMN. The performance of each hybrid representation is measured using Precision, Recall and F1- score, and the results are summarized in Table II.
We observe that different AST-based hybrid representations have varying impacts on different GNN architectures. Below, we analyze their effects on each model:
GMN: Our results indicate that while $\mathrm { A S T } + \mathrm { F A } + \mathrm { C F G }$ achieves the highest F1-score and $\mathrm { A S T } + \mathrm { C F G } + \mathrm { D F G }$ yields the highest recall, the standard AST representation attains the highest precision. Notably, the F1-score difference between AST and $\mathrm { A S T } + \mathrm { F A } + \mathrm { C F G }$ is only 0.001, suggesting that the performance gain from hybrid representations is minimal. Furthermore, most AST-based hybrid representations result in a performance decline compared to the standard AST, highlighting that GMN is inherently effective at capturing syntactic patterns from ASTs without requiring additional flow-augmented structures. This finding suggests that while enriched AST structures may offer marginal benefits, GMN already excels in leveraging AST’s syntactic features for code clone detection.
TABLE II: Performance of Different AST-Based Hybrid Graph Representations Across GNNs.
(b) Graph Convolutional Network (GCN)
(a) Graph Matching Network (GMN)
(c) Graph Attention Network (GAT)
(d) Graph Gated Neural Network (GGNN)
GCN: Our results indicate that GCN achieves the highest F1-score and recall with $\mathrm { A S T } ~ + ~ \mathrm { C F G } ~ + ~ \mathrm { D F G }$ , while $\mathbf { A S T } + \mathbf { C F G }$ attains the highest precision. Notably, most AST-based hybrid representations improve GCN’s performance in code clone detection, demonstrating the benefits of incorporating additional semantics information. However, $\mathrm { A S T } \ + \ \mathrm { F A } \ + \ \mathrm { C F G }$ leads to a decline in performance, suggesting that not all flow combinations are beneficial for convolutional feature propagation. These findings indicate that integrating semantic flow information, particularly through a combination of control and data flow graphs, enhances GCN’s ability to capture meaningful structural dependencies.
GAT: The results for GAT closely resemble those of GCN. We observe that GAT achieves the highest F1- score and recall with $\mathrm { A S T } + \mathrm { C F G } + \mathrm { D F G }$ , while AST $+ \mathrm { C F G }$ yields the highest precision. Notably, AST-based hybrid representations enhance GAT’s performance in code clone detection to varying degrees, indicating that the integration of control and data flow information effectively supports attention-based learning. These findings suggest GAT effectively leverage the enriched structural dependencies provided by hybrid AST representations, leading to improved detection accuracy.
GGNN: Our findings indicate that while $\mathrm { A S T } + \mathrm { C F G } +$ DFG achieves the highest F1-score, $\mathbf { A S T } + \mathbf { C F G }$ attains the highest precision, and the standard AST representation yields the highest recall. Additionally, the F1-scores among AST, $\mathbf { A S T } + \mathbf { C F G }$ , and $\mathrm { A S T } + \mathrm { C F G } + \mathrm { D F G }$ are very close, suggesting that the performance differences between these representations are minimal. Unlike GCN and GAT, where FA-AST-based hybrids had varying effects, GGNN exhibits a noticeable decline in performance when using FA-AST-based hybrids, indicating that the introduction of flow-augmented AST structures may not be beneficial for recurrent architectures.
Moreover, beyond comparing the performance of different hybrid representations, we aim to assess the contribution of each individual representation to the overall performance improvement. To achieve this, we calculate the relative percentage improvement of each added representation.
The relative F1-score percentage improvement observed across GMN, GCN, GAT, and GGNN when integrating CFG, DFG and FA representations into various AST-based hybrid structures is illustrated in Figure 4.
According to Figure 4, it is evident that for GCN and GAT, the addition of all extra semantic information consistently leads to performance improvements. Specifically, CFG and DFG exhibit a similar impact, suggesting that both control flow and data flow information contribute positively to enhancing code representations for these models. Moreover, when both CFG and DFG are combined with AST, their synergistic effect further enhances performance, reinforcing the importance of incorporating both control and data flow structures into graphbased code analysis. However, when $\mathrm { { A S T } + \ F A }$ is combined with either CFG or DFG, the performance generally deteriorates across all networks, indicating that FA edges may introduce noise or redundant connections that negatively impact learning.
A closer examination of $\mathrm { A S T \ + \ F A }$ impact reveals that while it can enhance performance in specific cases (e.g., improving AST with GCN and $\mathbf { A S T } + \mathbf { C F G }$ with GMN), its overall contribution across different hybrid representations is public static void main(String[] args){ int $\textbf { \textsf { x } } = \textbf { 1 }$ int $y \ = \ \times \ + \ 2$ : System.out.println(y);
Fig. 3: AST-based Hybrid Code Representation of the sample function.
largely negative. This effect is particularly pronounced with GGNN, where the addition of FA leads to a clear decline in performance across all representations. The negative impact of A $\mathrm { \Omega _ { \mathrm { \Omega } S T } + \mathrm { \partial _ { F A } } }$ suggests that the additional flow augmentation edges might interfere with the ability of message-passing GNNs to extract meaningful patterns, possibly due to an increase in graph complexity or redundant dependencies that hinder effective feature propagation.
# 2) RQ1.2: Which GNN model architecture is more effective for code clone detection using AST-based hybrid graphs?
To assess the effectiveness of different GNN architectures in code clone detection, we compared the F1-scores of the standard AST representation and the best-performing ASTbased hybrid representation for each network. The results are visualized in Figure 5.
GMN achieves the highest F1-score regardless of whether the AST is enriched or not, outperforming all other GNNs. This suggests that the cross-attention mechanism in GMN’s propagation effectively enhances the model’s ability to capture and compare structural similarities between code snippets. Compared to other GNNs that rely on additional semantic information for improvement, GMN’s superior performance indicates that capturing cross-code similarities is more critical for code clone detection than merely enhancing semantic information.
Although GCN and GAT initially perform worse than GGNN with the standard AST representation, their performance significantly improves when enriched with additional semantic information, ultimately reaching a comparable level. This suggests that convolutional and attention-based architectures effectively utilize semantic edge information to enhance structural learning and detection accuracy.
# Key Findings of RQ1
GMN is the most effective model due to its crossattention mechanism, which enhances structural similarity detection. GMN performs well even with the standard AST delivering the highest Precision and nearly the best Recall and F1-score across the board– thus reducing the necessity for hybrid representations.
Hybrid representation effectiveness varies by GNN architecture: GCN and GAT benefit the most from hybrid representations, while GMN and GGNN show minimal or negative effects. GCN and GAT improve significantly with hybrid representations, reaching performance levels comparable to GGNN.
CFG and DFG improve performance, but FA often degrades it. FA tends to introduce noise in most cases, leading to performance drops–especially in GGNN.
GCN and GAT are best suited for hybrid representations, whereas GMN excels with standard AST, and GGNN struggles with complex structures like FA-AST.
# B. RQ2: What is the computational overhead of different ASTbased hybrid representations?
In this section, we analyze the computational and storage efficiency of various AST-based hybrid graph representations to quantify the overhead introduced by AST enrichment. Specifically, for each representation, we evaluate the generation cost, the storage cost, the graph density, and the inference time when using them with a GNN (in our case, GMN) to assess the trade-offs associated with incorporating additional structural information into AST-based models. These results are shown in Table III.
Based on Table III, we observe significant variations in computational and storage overhead across different ASTbased hybrid graph representations. For generation cost, CFG introduces a negligible increase, with $\mathrm { A S T } \ + \ \mathrm { C F G }$ requiring 14.224s. However, incorporating DFG results in a large generation cost increase of 21 times, reaching 305.095s. This density, and inference time is evident in AST-based hybrid representations, as the enriched structural information is primarily encoded through additional edges in the ASTs. As the number of edges increases, the graph storage cost, average graph density, and inference time also increase accordingly. Representations incorporating CFG or DFG introduce a moderate increase in graph density, with $\mathbf { A S T } + \mathbf { C F G }$ and ${ \mathrm { A S T } } +$ DFG exhibiting densities of 0.0085 and 0.0073, respectively. However, $\mathrm { A S T } + \mathrm { F A }$ representations demonstrate significantly higher graph densities and storage requirements, with AST $^ +$ FA requiring $1 1 5 2 . 0 0 ~ \mathrm { M B }$ of storage and reaching an average graph density of 0.0202. This increase is due to the inclusion of additional edges representing child-parent relationships, sibling order, token sequences, variable usage, and basic control flow structures such as conditional statements and loops. While these additional structural connections do not explicitly enhance the expressiveness of FA-AST-based representations, their impact on computational performance should be carefully evaluated.
Fig. 4: Relative F1-score Improvement Achieved By Alternatively Introducing the Representations CFG, DFG, or FA to Other Code Representations While Using the Different GNN Models.
TABLE III: Computational Overhead and Storage Cost of AST-Based Hybrid Graph Representations. Inference Time Measured using GMN.
Fig. 5: Performance Comparison Across Various GNNs.
increase is likely due to the additional complexity involved in tracking data dependencies compared with control dependencies. $\mathrm { A S T } + \mathrm { F A }$ alone requires only 16.910 seconds, reflecting a minor increase compared to AST due to the simple logic to generate it.
The relationship between graph storage cost, average graph
The increased number of edges in AST-based hybrid representations leads to higher storage requirements, greater graph density, and longer generation and inference times. However, this increase in structural complexity does not necessarily result in improved effectiveness for code clone detection. Therefore, within the research community, it is essential to carefully select the most effective representations when enriching ASTs, ensuring that the additional structural or semantics connections provide meaningful insights while maintaining computational efficiency.
# Key Findings of RQ2
• Generation cost is significantly impacted by DFG, while CFG has a minimal effect. The generation of AST $^ +$ CFG requires only $1 4 . 2 2 4 s$ , whereas the generation cost of $\mathrm { A S T } + \mathrm { D F G }$ increases to 305.095s, highlighting the high computational cost required for the tracking of data dependencies. Structural complexity does not always enhance effectiveness. While FA enriches representations the most, its computational overhead should be carefully considered for code clone detection tasks.
# VI. THREATS TO VALIDITY
One major limitation of this study is its reliance on the BigCloneBench dataset. While we initially considered using the Google Code Jam (GCJ) [28] dataset for comparison, the model’s performance on GCJ was exceptionally high, making it unsuitable for a meaningful evaluation. To the best of our knowledge, BigCloneBench remains the most widely used benchmark for code clone detection. However, its focus on Java limits the generalizability of our findings to other programming languages. Mutation-based techniques applied to diverse datasets could potentially enhance generalizability, but this remains an area for future exploration. Additionally, due to the constraints of our experimental setup, cross-language code clone detection was not included in this study.
Another limitation of this study is the scope of evaluation. While AST is the dominant representation in code clone detection, there is a lack of research on the performance of various network architectures applied to different AST designs. This study primarily focuses on AST-based hybrid graphs in the context of GNNs. However, ASTs are also widely used in other architectures, including RNN-based models (RvNN, LSTM, GRU) [29], [30], CNNs [31], transformers [23], [32], and treestructured models (Tree-LSTM [33], Tree-CNN [34]). A more comprehensive empirical study exploring AST representations across diverse neural network architectures would provide valuable insights for the research community.
# VII. DISCUSSION
Our findings highlight that while certain hybrid representations can enhance clone detection performance, their effectiveness is highly contingent on the underlying GNN architecture. Our results suggest that model architecture—designs incorporating mechanisms such as graph matching—often plays a more critical role than simply enriching ASTs with additional semantic edges. In other words, architectural enhancements may offer more substantial gains than increasing representational complexity alone.
For researchers, this underscores the importance of considering whether the hand-crafted semantic enhancements—such as control or data flow edges—can actually be leveraged by the chosen model. If the architecture lacks the capacity to utilize this information effectively, such enhancements may not only be unhelpful but could even degrade performance.
From a practical perspective, especially in resourceconstrained settings, the additional computational and storage overhead introduced by graph enrichment must be weighed carefully. In contrast to controlled experimental environments, real-world applications may benefit more from lightweight configurations. | As one of the most detrimental code smells, code clones significantly
increase software maintenance costs and heighten vulnerability risks, making
their detection a critical challenge in software engineering. Abstract Syntax
Trees (ASTs) dominate deep learning-based code clone detection due to their
precise syntactic structure representation, but they inherently lack semantic
depth. Recent studies address this by enriching AST-based representations with
semantic graphs, such as Control Flow Graphs (CFGs) and Data Flow Graphs
(DFGs). However, the effectiveness of various enriched AST-based
representations and their compatibility with different graph-based machine
learning techniques remains an open question, warranting further investigation
to unlock their full potential in addressing the complexities of code clone
detection. In this paper, we present a comprehensive empirical study to
rigorously evaluate the effectiveness of AST-based hybrid graph representations
in Graph Neural Network (GNN)-based code clone detection. We systematically
compare various hybrid representations ((CFG, DFG, Flow-Augmented ASTs
(FA-AST)) across multiple GNN architectures. Our experiments reveal that hybrid
representations impact GNNs differently: while AST+CFG+DFG consistently
enhances accuracy for convolution- and attention-based models (Graph
Convolutional Networks (GCN), Graph Attention Networks (GAT)), FA-AST
frequently introduces structural complexity that harms performance. Notably,
GMN outperforms others even with standard AST representations, highlighting its
superior cross-code similarity detection and reducing the need for enriched
structures. | [
"cs.AI",
"cs.SE"
] |
# 1 Introduction
Generative Music modeling is a subfield of Generative AI, where music modeling through user inputs and concepts such as melody reconstruction, music continuation, text-music, lyric-to-song are explored. None of these research directions have an unified dataset structure and requirement which means it depends on the researchers what datasets to choose and how to construct them. Most often, as we have seen for Jukebox from OAI, Neural Melody reconstruction from Microsoft and other well-known players, datasets are constructed from scratch through scraping online lyric finders and YouTube videos. While the trained models, its codebase and model weights are made public, the datasets are sealed-off due to competitive and legal reasons.
We can also see its Chinese counterparts such as DiffSinger [12], Prompt-Singer [13], SongCreator [2], which have either constructed their own open-source datasets or have used third-party datasets available on Kaggle to construct an artificial training corpus. Apart from these some research labs like Meta AI, use proprietary stock music datasets by paying respective rights holders. Parallel to all these debacles, there’s a growing community mainly Chinese who are constantly publishing open-source high-quality pre-training datasets for Generative music modeling community. Prominent examples are GTSinger [6] and M4Singer [5] that have created professionally recorded mono and multilingual training datasets, while its size itself is a concern since most models require a couple of hundred thousands of audio for training but it depends on the specific paper, architecture and objective. These datasets have shown adoption by the research community but its practical adoption by the community has been limited. Due to its limited nature, serious contender models want to use popular and real-world music which people are listening to constantly and vibing to them.
To tackle this, efforts have been made where the problem of lack of datasets representative of real-world loved songs and popular tracks was tried to be addressed by the DISCO team. Where they had released DISCO-10M [1] and LAION later released its LAION-DISCO-12M [3]. These datasets, similar to its previously mentioned efforts, were poorly adopted by the community and most serious papers where large-scale pre-trained models for Generative Music modeling are introduced, overlook these datasets and use their own scraped dataset or private owned corpus. Blame can be fallen on, these large-scale arbitrary datasets are practically YouTube music video links and limited metadata scraped from Spotify or YouTube Music. Making it undesirable for most practical applications in the research community.
Table 1: Comparative Analysis between Sleeping-DISCO and Competition
In table 1. We did a comparison between Sleeping-DISCO and other well-known quality contributions in the subfield of Generative music modeling which is singing datasets. In this part, researchers either focus on training a foundation model from scratch to generate songs, continuation of part of a song, and writing lyrics. Jukebox [4] is the most well-known example in this category followed by SongCreator, and specialised datasets such as M4Singer for Chinese language and GTSinger for English, Chinese and a few European languages. Among them, Jukebox scraped Lyricwiki (now defunct) [10] website to create a private dataset and rest of the examples, professionally recorded couple of popular songs and scratch-written songs using paid vocalists and artists. Unlike these datasets where either quality corpus is private or not interesting, our contribution Sleeping-DISCO provides massive amounts of songs, artists and covers 169 languages while covering English, Chinese, Japanese and EU languages.
Table 2: Breakdown of scale and metadata between past contributions and our dataset
In table 2. We do a side-by-side comparison between our dataset and other contributions to contrast the balance between scale and quality that we have provided. DISCO-10M leads in providing the most hours of audio, followed by LAION-DISCO dataset and Sleeping-DISCO comes in third. While it positions itself as third but outcompete both DISCO and LAION-DISCO in terms of available artists and metadata. Both DISCO and LAION-DISCO have been created to provide an arbitrary number of audio clips for pre-training without factoring metadata components for individual songs and searching for songs based on artists and genre. Our contribution provides in-depth and exhaustive metadata for each individual song, albums, ability to search based on artists and genre. Sleeping-DISCO also makes it possible to search for songs based on a particular year which is unavailable in all the other contributions. If we explore some of the recent high-quality contributions such as M4Singer and GTSinger which provide audio-wise metadata, our contributions beat it by a large-margin as well.
# Our contributions are as follows:
1. We have provided a balanced version of a large-scale pre-training dataset for the Generative Music modeling field.
2. We also include in-depth metadata in the form of individual song and album related metadata, lyric embeddings, nearly a thousand genres, and all widely spoken languages. Additionally, YouTube links to download audio clips, YouTube video metadata, embeddings for lyrics, and captions for songs.
# 2 Related Work
Quality private corpus: Big labs construct private in-house lyrics and song metadata datasets by scraping online lyricfind, lyric translation and song metadata websites. Afterwards based on that database collect audio from YouTube and third-party places. Jukebox and Neural Melody reconstruction [11] are well-known examples in this category. Sleeping-DISCO is the first training corpus in this category that is public and it matches both the quality and scale of big labs private datasets through scraping popular online lyric and song metadata websites called Genius.
Scattered Singing dataset contribution: It is an isolated contribution where the datasets are not being created for training Generative Music modeling models and research rather it is aimed towards exploratory analysis, song and metadata analysis and studying Music and signal components. In the past, the million song dataset, GeniusExpertise [14] are notable contributions. These datasets have never been used in training models (at the time of writing this paper) but they have the potential to be used since these datasets provide extensive metadata such as Lyrics, artist details and audio in some instances. Our contribution does not compete in this category but we have taken heavy inspiration from GeniusExpertise. As it was the first dataset that open-sourced a Music lyric and metadata dataset.
Professionally-recorded paid datasets: M4Singer and GTSinger are well-known examples in this category where popular songs and songs written by paid artists have been sung by paid vocalists and compiled as high-quality opensource datasets. These datasets have a strong Chinese presence combined with some European language influence. But, these are limited as the corpus is limited, lacks enough diversity and scale to train foundation models and no known artists and famous songs are present. Sleeping-DISCO is rather opposite, we provide a large-scale corpus suitable for training, high-quality and high-fidelity audio and famous artists like Maluma, Maroon5, Shakira and many others.
Open-source datasets: There are multiple open-source datasets present on Kaggle [7] and Huggingface that are either synthetic dataset or have been scraped from popular lyricfind websites like MusiXmatch [9] and Genius. We were able to find some Kaggle datasets for Genius that had five million songs and their metadata which was scraped by abusing GeniusAPI. These datasets often lack quality and filtering and are limited in scope since it was made as a passion project or side-project. We also realised these datasets did not include all the metadata fields that Genius and its competitors provide. We address these problems in our work by scraping all available metadata fields and having quality control in the scraping procedure to make it suitable for training models and scientific research.
# 3 Dataset
Figure 1: Overview of our extraction pipeline
# 3.1 Extraction and scraping pipeline
We wrote a python spider and scraper using the cloudscraper library to map the entire Genius website [8]. Cloudscraper was used to bypass the cloudflare protection and then we parsed the html using beautifulsoup and extracted all the available data fields, song details, metadata, album and artist names and record information. Meanwhile we stored both the mapped links tracing to all the songs and extracted data from those webpages to a secured storage and then uploaded all the data to Huggingface.
300000 Albums 加 8000 1 400000- 200000- 0 2010 1 1750000 川 1500000 1500000 1250000 1250000 1000000
1000000
8 750000 750000- 500000 500000- 250000 250000 0 0 2018 $ { \boldsymbol { \vartheta } } ^ { 2 ^ { \nu } }$ Year Year
Figure 3: Breakdown of songs in region-based languages and Top 10 genres in Sleeping-DISCO
Figure 2: Number of albums released between 2010–2023
# 3.2 Statistics of the dataset
Figures 2 and 3 illustrate the yearly growth in album releases, which shows a consistent upward trend. They also compare the number of songs released relative to albums. Additionally, we present statistics highlighting the top 10 most prominent genres in the Sleeping-DISCO dataset. Some less common genres appear due to Genius’s unconventional tagging system.
We also visualize three major language regions in our dataset using distinct colors: brown for Afro–Middle Eastern languages (Arabic, Hebrew, Amharic, Swahili, Persian, Turkish, Yoruba, Zulu, Hausa), azure for European languages (English, Spanish, French, German, Italian, Portuguese, Dutch, Russian, Polish, Swedish), and red for Asian languages (Chinese, Hindi, Japanese, Korean, Bengali, Thai, Vietnamese, Urdu, Malay, Indonesian). While these groups are representative, they are not exhaustive—our dataset includes 169 languages in total.
# 3.3 Lyric Embeddings and YouTube links
Table 3: YouTube link matching with similarity scores
We have used Model2Vec to create high-quality embeddings for all the songs in Sleeping-DISCO whose lyrics were available and shared them on Huggingface alongside the main dataset and additionally, we have extracted YouTube video links for the songs that we were able to find. To search for YouTube links, we used the Grass Foundation scraping pipeline and used embeddings to find the highest overlap between the song title and YouTube video name. Then we compared the YouTube title and description to make sure it was a relevant video.
# 3.4 Withheld Data fields
There were additional data fields within Sleeping-DISCO that we had discovered during scraping; these were specifically Genius Annotation, a form of Music caption written by the Genius team and Lyrics of the songs. These data fields are not open; rather, the exclusive rights are reserved for Genius to use them. That is why we are not sharing them in our public version of Sleeping-DISCO but we will share them with academic institutions and researchers based on verification of intent only for research purposes.
# 3.5 License
Sleeping-DISCO is shared under CC-BY-NC-ND 4.0. That is, nobody is allowed to create derivatives of SleepingDISCO aside from the original authors of the dataset.
# 4 Ethics
Sleeping-DISCO was created using publicly available data found on Genius website and it is entirely a metadata and hyperlink dataset which allows for creating training corpus for Generative Music modeling field. Further, we scraped the data over the course of a couple of months to avoid overloading Genius servers and it was done for research and scientific purposes under European law.
# Author Contribution
Tawsif led the entire project alongside Andrej, who was vital for scaling and data collection. Gollam helped in writing and providing feedback on the draft.
# Acknowledgements
We thank our sponsors who funded this project and our friends who have provided feedback on the draft. We also thank the Grass Foundation for the use of its resources.
# References
[1] Luca A. Lanzendörfer, Florian Grötschla, Emil Funke, and Roger Wattenhofer. DISCO-10M: A Large-Scale Music Dataset. arXiv preprint arXiv:2306.13512, 2023. https://api.semanticscholar.org/CorpusID: 259243841
[2] Shunwei Lei, Yixuan Zhou, Boshi Tang, Max W. Y. Lam, Feng Liu, Hangyu Liu, Jingcheng Wu, Shiyin Kang, Zhiyong Wu, and Helen M. Meng. SongCreator: Lyrics-based Universal Song Generation. arXiv preprint arXiv:2409.06029, 2024. https://api.semanticscholar.org/CorpusID:272550648
[3] LAION e.V. LAION-DISCO-12M: A Collection of 12 Million YouTube Music Links and Metadata. LAION Blog, Nov 17, 2024. https://laion.ai/blog/laion-disco-12m/
[4] Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, and Ilya Sutskever. Jukebox: A Generative Model for Music. arXiv preprint arXiv:2005.00341, 2020. https://api.semanticscholar.org/ CorpusID:218470180
[5] Lichao Zhang, Ruiqi Li, Shoutong Wang, Liqun Deng, Jinglin Liu, Yi Ren, Jinzheng He, Rongjie Huang, Jieming Zhu, Xiao Chen, and Zhou Zhao. M4Singer: A Multi-Style, Multi-Singer and Musical Score Provided Mandarin Singing Corpus. In Proceedings of the Neural Information Processing Systems (NeurIPS), 2022. https://api. semanticscholar.org/CorpusID:258509710
[6] Yu Zhang, Changhao Pan, Wenxiang Guo, Ruiqi Li, Zhiyuan Zhu, Jialei Wang, Wenhao Xu, Jingyu Lu, Zhiqing Hong, Chuxin Wang, Lichao Zhang, Jinzheng He, Ziyue Jiang, Yuxin Chen, Chen Yang, Jiecheng Zhou, Xinyu Cheng, and Zhou Zhao. GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks. arXiv preprint arXiv:2409.13832, 2024. https://api.semanticscholar.org/CorpusID: 272827980
[7] Kaggle LLC. Kaggle: Data Science & Machine Learning Community. Accessed June 2025. https://www. kaggle.com/
[8] Genius Media Group Inc. Genius: Annotate the World. Accessed June 2025. https://genius.com/
[9] Musixmatch S.p.A. Musixmatch: The World’s Largest Lyrics Platform. Accessed June 2025. https://www. musixmatch.com/
[10] Reddit user u/username. Anybody know what happened to LyricWiki? Reddit, posted on June 3, 2018. https://www.reddit.com/r/Music/comments/9hpzv/anybody_know_what_happened_to_lyricwiki/
[11] Hangbo Bao, Shaohan Huang, Furu Wei, Lei Cui, Yu Wu, Chuanqi Tan, Songhao Piao, and Ming Zhou. Neural Melody Composition from Lyrics. arXiv preprint arXiv:1809.04318, 2018. https://arxiv.org/pdf/1809. 04318
[12] Jinglin Liu, Chengxi Li, Yi Ren, Feiyang Chen, Zhou Zhao. DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism. In AAAI Conference on Artificial Intelligence, 2021. https://api.semanticscholar. org/CorpusID:235262772
[13] Yongqi Wang, Ruofan Hu, Rongjie Huang, Zhiqing Hong, Ruiqi Li, Wenrui Liu, Fuming You, Tao Jin, and Zhou Zhao. Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt. arXiv preprint arXiv:2403.11780, 2024. https://arxiv.org/abs/2403.11780
[14] Derek Lim and Austin R. Benson. Expertise and Dynamics within Crowdsourced Musical Knowledge Curation: A Case Study of the Genius Platform. arXiv preprint arXiv:2006.08108, 2020. https://arxiv.org/abs/2006. 08108 | We present Sleeping-DISCO 9M, a large-scale pre-training dataset for music
and song. To the best of our knowledge, there are no open-source high-quality
dataset representing popular and well-known songs for generative music modeling
tasks such as text-music, music-captioning, singing-voice synthesis, melody
reconstruction and cross-model retrieval. Past contributions focused on
isolated and constrained factors whose core perspective was to create synthetic
or re-recorded music corpus (e.g. GTSinger, M4Singer) and arbitrarily
large-scale audio datasets (e.g. DISCO-10M and LAIONDISCO-12M) had been another
focus for the community. Unfortunately, adoption of these datasets has been
below substantial in the generative music community as these datasets fail to
reflect real-world music and its flavour. Our dataset changes this narrative
and provides a dataset that is constructed using actual popular music and
world-renowned artists. | [
"cs.SD",
"cs.LG",
"eess.AS"
] |
# 1 INTRODUCTION
Hindle et al. [29] show that software is repetitive and predictable like natural language, and hence can be modeled using statistical techniques like LLMs. Subsequently, LLMs have been used effectively for a wide variety of software engineering (SWE) tasks , including code generation [13], language translation [85] code summarization [91] and others. Many code specific datasets [43, 81], models [49, 67] and benchmarks [28, 127] have also been developed. Despite this progress, LLMs have been shown to be limited in their capacity to solve real-world SWE tasks, like GitHub issue resolution [41]. Recent development of large reasoning models (LRMs) [2, 25, 35] and SWE Agents have resulted in tremendous improvement on code generation, test generation and GitHub issue resolution.
In a recent survey, Yang et al. [108] explore how code and reasoning reinforce each other. They compile works showing how incorporating code data improves reasoning, and how better reasoning leads to improvement on SWE tasks. Many underlying techniques contribute to reasoning models, including Chain-of-Thought or CoT [102] which elicits reasoning, learning from environment feedback [15] and exploring multiple reasoning paths [112]. Many recent surveys explore reasoning techniques, SWE task specific LLMs, benchmarks and Agents, and we discuss them in Sec. 2. We did not, however, find any survey that explores the impact of reasoning, and specifically code-based reasoning techniques for SWE tasks. SWE is one of the most interesting applications areas of Artificial Intelligence (AI) and there is growing research in this space. As different reasoning techniques mature and agents become more robust, it is reasonable to expect more and more SWE tasks will be automated. With our survey on code reasoning for code tasks, we hope to address this gap by making the following contributions:
(1) The first survey specific to reasoning for coding tasks, emphasizing reasoning techniques which borrow ideas from coding principles (Sec. 3). SWE Agents are given a special focus (Sec. 4) since they depend on multiple reasoning techniques.
(2) A taxonomy covering different reasoning approaches and benchmarks for code (Fig. 1). We also highlight approaches employing multiple reasoning techniques for LLMs in general (Tab. 1) and agents in particular (Tab. 2).
(3) A showcase of benchmarks used to study the impact of reasoning on SWE tasks. We compiled results (Tab. 3, 5, 6, 7) showing the performance of different code reasoning and agentic approaches (Sec. 6.1). We also highlight promising benchmarks specific to code reasoning (Sec. 6.2), and surface some new agent-specific benchmarks with potential for furthering SWE research.
Plraonm-bpatisnegd (C3.o1T) PlanSearch [95]; Self-Planning [39]; ClarifyGPT [61] Code CoT Reasoning (§3.1) Code-structure-based CoT Prompt (§3.1) SCoT [48]; MoT [73]; CodeChain [44]; SemCoder [19]; CGO [115] CoT fine-tuning (§3.1) UniCoder [90]; COTTON [109]; MSCoT [42]; ChainCoder [125] Self-evaluation of Self-debugging [15]; CodeCOT [30]; AlphaCodium execution behavior (§3.2) [84]; Revisiting Self-debugging [16]; 𝜇Fix [93] Code Reasoning Taexcohnoimquyesof Erexaescountionng-(b§a3s.e2d) Trbaainsiendg eweidtbhacEkxe(c§u3t.i2o)n- LEVER [65]; CYCLE [18]; LeDex [38] Automated Test Generation (§3.2) UTGEN [79]; AceCoder [120] ; DSTC [57]; ASTER [72]; SWT-Bench [63] Inference Scaling (§3.3) Sampling (§3.3) AlphaCode [50]; REx [92]; S\*: Test-time Scaling [46] Search (§3.3) ToT [112]; GToT [58]; ORPS [117] Workflow (§4.1) Agentless [104]; AutoCodeRover [123] Taxonomy of Tasks Agentic (§4) Agent Optimization (§4.2) SWE-Agent [110]; CodeAct [98]; MASAI [4]; CodeR [9]; PairCoder [121]; HyperAgent [77]; AgileCoder [64]; OpenHands [100] Reasoning Model ImLingma [59]; SWE-Gym [71];SWE-Fixer [105]; SWE-RL [103] provement (§4.3) Inference Scaling (§4.4) CodeTree [47]; SWE-Search [3]; Tree-of-Code [66] HE [12]; MBPP [7]; APPS [27]; CodeContests[51]; LCB [36]; BigNon-Agentic (§6) Code Tasks (§6.1) TesCtoEdveaBle[n9c7h] [S12W7]E;-BCeRnUchX[E4v0a]l [S24W];E-HBEenPcahckM[u6l2t]i;mSopdiadler 1[1415,] 11M6u]l;tiSWE-Bench [119]; $M ^ { 3 }$ ToolEval [99]; Otter [1]; SWT-Bench [63] ing Tasks (§6.2) Code Reason- CRUXEval [24]; CodeMind [53]; ReEval [10]; ExeRScope [52]; CodeMMLU [60]
(4) Discussion on how the performance of different code reasoning techniques may be connected to different code properties (Sec. 4). In Sec. 8, we use this discussion to motivate future work.
# 2 RELATED SURVEYS
Wei et al. [102] introduce CoT as a form of in-context learning which induces reasoning in LLMs. In the same year, Dong et al. [20] survey in-context learning techniques and reference CoT reasoning but do not expand on it. Qiao et al. [82] and Huang and Chang [31] survey methods and tasks for reasoning, extensively studying CoT and other prompting approaches, but do not include SWE tasks. Chu et al. [17] also cover CoT reasoning extensively in a recent work. They define a more general concept of XoT or X-of-Thought, which covers concepts like Program-of-Thought [14], Treeof-Thought [112] etc. apart from CoT. However, they focus on the impact of these techniques on reasoning benchmarks while we are more interested in how reasoning impacts codespecific or software engineering benchmarks. Other recent surveys also cover different types of reasoning techniques for LLMs. Xu et al. [106] discuss reinforcement learning based reasoning techniques, but they don’t discuss code-specific reasoning strategies. Plaat et al. [78] classify the in-context reasoning approaches into prompting, evaluating and control (inference scaling and search) based strategies, but they don’t focus on coding tasks.
In their work titled "Code to Think, Think to Code", Yang et al. [108] highlight the interplay between code properties and reasoning capabilities and how one enhances the other. This survey makes the case that training with code related data improves performance on math and reasoning benchmarks, while incorporating reasoning improves performance on coding benchmarks because some code properties reinforce reasoning capabilities and vice versa. Compared to this work, we dive deeper into reasoning techniques used for coding tasks and provide a taxonomy covering different strategies.
A lot of surveys do cover impact of LLMs and agents on SWE tasks but none so far have focused on reasoning based strategies. Zan et al. [118] survey 27 LLMs for natural language to code generation task. Jiang et al. [37] undertake an extensive survey covering not just LLMs but also LLM architectures, many different research topics, benchmarks and datasets, encompassing a total of 235 papers. Sun et al. [89] also do a wide ranging survey covering 50 different models and their variants along with 20 different code-related task categories. Huynh and Lin [34] survey many topics in this space including challenges and applications. Apart from surveys covering multiple topics from the domain of
AI for code/software engineering, there are also surveys that are more topic-specific. Wang et al. [96] focus exclusively on reinforcement learning for code generation. Chen et al. [11] survey different evaluation techniques for coding tasks. Yehudai et al. [114] also focus on evaluation, specifically of LLM-agents, including those applied to software engineering (SWE) Agents.
We did not find any survey specific to code-based reasoning techniques for software engineering tasks.
# 3 TAXONOMY OF TECHNIQUES
Brown et al. [8] show that LLMs are few-shot learners. Performance of LLMs on reasoning tasks is further enhanced by a certain kind of prompting called Chain-of-Thought or CoT [102] prompting which elicits LLM reasoning. Wei et al. [101] suggest that in-context learning ability of LLMs, including CoT reasoning, is an emergent property of LLMs. Code CoT papers (39, 48, 73 and others) suggest that code reasoning is a specific kind of reasoning and CoT can be more impactful when induced with prompts that recognize this difference. We survey such techniques in Sec. 3.1.
Yao et al. [112] state that "System $2 "$ thinking should involve exploring diverse solution paths rather than greedily picking one. They connect CoT with sampling and search to enable exploration of multiple reasoning paths. Li et al. [50] effectively leverage sampling and search techniques to generate competition-level code. Sec. 3.3 covers sampling and search techniques used to explore multiple reasoning paths for software engineering tasks.
One way code output is different from natural language output is that it can be executed and tested to validate its correctness. Yao et al. [112] highlight that execution can be a way to check if the reasoning is correct. Other such techniques based on code execution are covered in Sec. 3.2.
Many approaches use a combination of these techniques, although one technique usually dominates. Tab. 1 shows approaches which rely on multiple techniques.
# 3.1 Code Chain-of-Thought Reasoning
CoT prompts for code can be categorized as plan-based or structure-based. Plan-based CoT is a natural language articulation of steps that need to be taken to solve a coding problem. Code-structure-based CoT utilizes some code structure or programming concept. Besides prompting-only techniques, another approach used by many is fine-tuning or instruction tuning for software engineering tasks with code CoT data.
Plan-based CoT Prompting. Several recent approaches enhance code generation by explicitly modeling intermediate reasoning or problem understanding steps. For instance, PlanSearch [95] generates 3–6 problem observations, combines them into natural language plans, and translates these into pseudocode and then code. Self-Planning [39] uses few-shot prompting to extract a high-level plan from the problem, which guides code generation. ClarifyGPT [61] employs test generation to construct clarifying questions and answers that are appended to the prompt for code synthesis.
Code-Structure-based CoT Prompting. In SCoT, Li et al. [48] use programming structures, like sequence, branch, and loop, as steps towards intermediate code, which is used to prompt the model to generate code. Chain of grounded objectives (CGO) [115] embed appropriately-structured functional objectives into the input prompts to enhance code generation. Pan and Zhang [73] propose a novel prompting technique, Modularization -of-thought (MoT), which exploits modularization principals to decompose complex programming problems into smaller independent reasoning steps, via a multi-level reasoning graph. Le et al. [44] also elicit modularized code generation but in a multi-step technique called CodeChain, which is a chain of self-revisions applied by picking potentially correct representative submodules.
CoT fine-tuning. Sun et al. [90] define UniCoder; they use an intermediate representation CoT based on programming language conventions and use this to instruction-tune a model on a multi-task learning objective. Yang et al. [109] generate high-quality CoTs based on the COTTON framework, which trains light-LMs $\mathit { \Theta } _ { < } ^ { \prime } 1 0 \mathrm { B }$ parameters) to generate CoT comparable to those generated by strong teacher LLMs.
ChainCoder [125] generates code iteratively in a "courseto-fine" approach and trains a model using an AST-based vocabulary. SemCoder [19] uses a monologue reasoning approach to train a model to learn program semantics, which is generated by asking the Code LLM to summarize the program functionalities, key properties and constraints, and reason about code execution step-by-step using a bi-directional monologue reasoning method. MSCoT [42] extends SCoT [48] to 11 more programming languages beyond Python; a trained MSCoT model generates structured-CoT before producing code in multiple languages.
# 3.2 Execution-based Reasoning
Execution-based reasoning involves executing LLM-generated code in a given environment and having the LLM reason and learn from the execution environment output.
Self-Evaluation of Execution Behavior. These strategies utilize code execution feedback to select the final prediction from a LLM. In Chen et al. [15], the Self-debugging approach teaches the model to self-debug i.e., debug the model’s predicted code, via few shot prompting and without additional model training. The code explanation along with the execution results constitute the feedback message that is used for debugging the generated code.
A similar approach was taken in Code Chain-of-Thought (CodeCoT) by Huang et al. [30], where CoT is used as a first
Ira Ceka, Saurabh Pujar, Irene Manotas, Gail Kaiser, Baishakhi Ray, and Shyam Ramji step to generate the code, then a LLM generates test cases to validate whether the code has syntax errors during the execution. AlphaCodium, proposed by Ridnik et al. [84], is a flow to improve code LLM performance that does not require training a model. The two key phases in AlphaCodium’s flow are: (a) a pre-processing phase, where it generates problem reflection and test reasoning; and (b) an iterative code generation phase, where code is generated, run, and fixed against both public and AI-generated tests.
Table 1: LLM Reasoning based approaches for code tasks and key components. CoT (Chain-of-Thought); Exe-based (Execution-based feedback); GenAI Tests (Generated Tests with LLMs); MV (Majority Vote); RR (Re-Ranking); RL (Reinforcement-Learning). Each approach has a dominant strategy by which we categorize our taxonomy: CoT and Planning , Execution-driven , and sampling or search . For agentic see Tab. 2.
Table 2: In our taxonomy Agents are classified as employing one of the following techniques 1. Workflow, 2. Reasoning Model improvement, 3. Agent optimization, 4. Inference scaling. However, many agents employ multiple techniques. For example, SWE-Gym is classified in the reasoning model improvement category, but they also train a verifier model for inference scaling. This table highlights such nuances.
In Revisit Self-Debugging [16] authors explored both post-execution and in-execution self-debugging, leveraging self-generated tests. In post-execution self-debugging, the process directly validates the correctness of code by checking whether the output after execution matches the test output or not, whereas in-execution self-debugging analyzes the intermediate runtime states during program execution without knowing the results from post-execution.
More recently, Tian et al. [93] proposed $\mu \mathrm { F i x }$ (Misunderstanding Fixing) where thought-eliciting prompting techniques are combined with feedback-based prompting to improve the code generation performance of LLMs. Feedbackbased prompting focuses on trying to understand the root cause of failure of tests by analyzing the actual understanding implicitly utilized by LLMs for code generation through code summarization.
Training with Execution-based Feedback. We pinpoint approaches that train an LLM, leveraging execution data, to improve model performance. LEarning to VERify [65] (LEVER) is an approach where verifiers are trained to check whether the generated code is correct or not based on three sources of information: the natural language input, the program itself, and its execution results.
CYCLE [18] trains code LLMs to self-refine using natural language specifications, generated code, and execution feedback, while avoiding repeated errors via a Past Generation Mask. Similarly, Jiang et al. [38] proposed LeDex, a training framework to improve the self-debugging capability of LLMs using a chain of explanations on the wrong code followed by code refinement.
Automated Test Generation. Unit Tests (UT) are a way to assess the correctness of code and give execution-based feedback to code generation models. UTGEN [79] is a data creation and training recipe that bootstraps training data for UT generation and works by perturbing code to simulate errors, generating failing tests and augmenting it with CoT rationales.
Along with UTGEN, authors presented UTDEBUG, an improved multi-turn debugging method that improves the output accuracy of generated UTs by scaling test-time compute via self-consistency.
AceCoder [120] leverages automated large-scale test-case synthesis to enhance code model training. They propose a pipeline that generates extensive (question, test-cases) pairs from existing code data. In the UT generation process, a LLM is asked to generate 20 test cases from a refined code problem description (instruction), then another stronger LLM is used as a proxy to validate the quality of the generated UTs. With the aid of test cases, they create preference pairs based on pass rates over sampled programs to train reward models with Bradley-Terry loss.
Liu et al. [57] propose Direct Preference Learning with only Self-Generated Tests and Code (DSTC), using only selfgenerated code snippets and tests to construct preference pairs with direct preference learning to improve LM coding accuracy without external annotations. The UT generation process is joint with the code generation process, where the LLM is prompted to generate multiple code snippets and tests for each given instruction. More recently, ASTER [72] is a multilingual UT-generator built with LLMs guided by lightweight program analysis. ASTER has a generic pipeline that incorporates static analysis to guide LLMs in generating compilable and high-coverage test cases for Python and Java.
SWT-Bench [63] is a benchmark has over 1, 900 samples that were created by transforming SWE-bench [40] from code repair to test generation.
# 3.3 Inference Scaling for SE Tasks
Several approaches to code generation, code repair, and testcase generation use tree-based scaling strategies to guide decisions and explore reasoning paths, while others use sampling.
Sampling. In AlphaCode, Li et al. [50] solve competitive programming problems using large-scale sampling followed by filtering and clustering. AlphaCode diversifies the generation process by generating half of its samples in Python and half in $C { + } { + }$ , randomizing problem tags and ratings in the prompt, and using a high sampling temperature. With these techniques, they are able to generate millions of sample solutions per programming problem. This generation phase is then followed by filtering and clustering, which uses existing or generated unit-tests.
The authors of $\mathsf { R E x }$ [92] frame iterative code repair, or refinement, as a multi-armed bandit problem using Thompson sampling, where each "arm" is a program and "pulling the arm" corresponds to refining the program.
The heuristic reward is the fraction of specifications (test cases) satisfied by the program. Using this sampling technique, they are able to select the program according to its probability of giving the best reward.
In $\mathsf { S } \star$ , Li et al. [46] take a hybrid approach to sampling, first generating N diverse programs in parallel then refining the programs using iterative debugging. Their iterative debugging is informed by execution results on public test cases.
# Search.
The Tree-of-thoughts or ToT [112] paradigm allows LMs to explore multiple reasoning paths over CoT.
The language model’s reasoning is used as the heuristic, which contrasts with traditional approaches that use learned or programmed rules. To travers the tree, ToT uses classic search strategies: breadth-first search (BFS) or depth-first search (DFS). Similarly, Guided Tree-of-thought (GToT) [58] also uses a tree-search algorithm, where the LLM is used as a heuristic for generating search steps. GToT uses prompting to reach an intermediate solution to a problem, then introduces a checker, which assesses the correctness or validity of the intermediate solution.
Ouédraogo et al. [70] explore the effectiveness of various prompting techniques, including ToT and GToT, on the task of test generation. They show that GToT prompting is effective in generating syntactically-correct and compilable test suites, and can also lead to test suites with superior code coverage. Yu et al. [117] propose ORPS, Outcome-Refining Process Supervision for code generation. Their paradigm performs beam-search over a "reasoning tree." In this tree, each state captures the complex nature of code; a state contains information about the theoretical reasoning, code implementation, and execution outcome of a potential solution.
# 4 TAXONOMY OF TASKS: AGENTIC
Agentic systems for different tasks use many of the reasoning techniques described in Sec. 3. Software engineering (SWE) agents take a programming problem and iteratively solve it by self-debugging based on the feedback provided by the environment. The self-debugging is enabled by CoT-style natural language reflection [88] on environment feedback. The reasoning is done by an LLM which interacts with the agent execution environment with API based tool calls [113].
# 4.1 Workflow
Schluntz and Zhang [86] draw a distinction between agents and LLM-based workflows, stating that the latter are simpler, have a fixed path, and do not require an LLM to make a decision. Agentless [104] is a three step process for GitHub issue resolution involving localization, repair, and patch validation. AutoCodeRover [123] uses program structure, in the form of an Abstract Syntax Tree (AST), to enhance code search and looks at a software project as classes and functions, rather than as a collection of files.
Agents may employ one or more of the techniques described below. Our categorization is based on what we consider to be the dominant technique, but we highlight all the different techniques used in Tab. 2.
# 4.2 Agent Optimization
There can be many ways to improve an SWE agent, including but not limited to, better environment management or agent-environment interface, improved workflow or architecture, and incorporating more tools. SWE-Agent [110] is an agent capable of editing repository-level code by generating a thought and a command, and subsequently incorporating the feedback from the command’s execution into the environment. In CodeAct, Wang et al. [98] propose to use executable Python code to consolidate LLM agents’ actions into a unified action space. This is claimed to be better than the existing technique of producing actions by generating JSON or text in a predefined format, which is less flexible and has a constrained action space. OpenHands [100] is a platform for developing flexible AI Agents that interact with the digital world the same way a human would, by writing code, interacting with the command line or browsing the web. This platform allows for integration of other specialist agents, like CodeAct [98] for software engineering.
Arora et al. [4] take inspiration from modularization and develop MASAI, a modular SE agent with 5 sub-agents for different tasks: Test Template Generator, Issue Reproducer, Edit Localizer, Fixer, and Ranker. CodeR [9] is also a multi-agent framework with task graphs for resolving issues. Similar to role-based teams of humans that resolve issues, the framework also defines roles and actions like Manager, Reproducer, Fault Localizer, Editor and Verifier. PairCoder [121] is inspired by the software development practice of pair programming. It incorporates two collaborative agents: NAVIGATOR agent for high-level planning and DRIVER for specific implementation. HyperAgent [77] is a multi-lingual (Python/ Java), multi-agent system that emulates the workflow of human developers. It consists of four specialized agents called Planner, Navigator, Code Editor and Executor, which are capable of managing the full SE task life-cycle from planning to verification. AgileCoder [64] is a multi-agent system that uses sprints and agile roles (e.g., Product Manager, Developer, Scrum Master) to coordinate work based on user input.
# 4.3 Reasoning Model Improvement
Some agent improvements are the result of task-specific training of underlying reasoning model with patch data or agent environment interaction data, called trajectories. Ma et al. [59] observe that software evolution involves not just code but developers’ reasoning, tools, and cross-role interactions. Their Lingma SWE-GPT models (7B, 72B) are fine-tuned on repository understanding, bug localization, patching, and rejection sampling using pull-requests from repos. Training starts from Qwen2.5-Coder-7B [32] and Qwen2.5-72BInstruct [107], and inference runs through SWESynInfer, an AutoCodeRover-based workflow [123]. Pan et al. [71] build SWE-Gym from 2,438 real-world Python tasks—each with a runnable codebase, unit tests, and an NL spec. Using OpenHands scaffolding [100], they fine-tune Qwen2.5-Coder32B [32] on 491 agent–environment trajectories and train a verifier on the same data for scalable inference. SWE-Fixer [105] is an open-source, two-stage GitHub issue fixer. A fine-tuned Qwen2.5-7B retriever, boosted with BM25, identifies relevant files, while a fine-tuned Qwen2.5-72B editor generates patches. Each model was trained on $1 1 0 \mathrm { k }$ issue–patch pairs, with the editor further tuned on CoT data synthesized by GPT-4o [33]. SWE-RL [103] is the first scalable RL-based reasoning approach for software engineering. Llama 3 [23] is trained with lightweight rule rewards and GRPO [87] on 11M filtered PRs, producing Llama3-SWE-RL-70B, the top medium-sized model on SWE-bench Verified [68].
# 4.4 Inference Scaling
Agentic systems often involve a component that scales inference time compute and improves agent performance by searching over multiple samples.
CodeTree [47] frames code generation as a tree-search problem using a combination of planning, execution-guided reasoning, and sampling. CodeTree employs heuristic strategies similar to other search-based approaches, using testing pass rate (as in REx, $\mathsf { S } \star$ ) combined with LM critique as a heuristic (as in ORPS, ToT/GToT) to guide the traversal of the tree. Unlike other approaches, it uses a collaborative, multi-agent framework; sub-agents like thinker, critique, and debugger, are specialized for a particular type of reasoning.
ToC [66] also presents the reasoning process as a tree. They represent nodes in a similar way to CodeTree, using the thought, generated code, and execution results as attributes of the node. Contrary to CodeTree, which uses a combination of test-pass rates and a soft score to judge robustness of a solution, ToC uses a binary heuristic: execution pass or execution fail.
SWE-Search [3] is a moatless-tools [69] based multi-agent framework which integrates Monte-Carlo Tree Search with self-improvement for bug-fixing. An LLM-backed hybrid value function combines numeric and qualitative scores from trajectories, file context, and test output to steer node expansion.
# 5 RESULTS TABLES
We manually inspect every work in our survey and collate author-reported and cross-reported results on common code tasks. The task-specific benchmarks considered intersect across approaches, and we use these intersecting models/benchmarks to make observations of their trends.
In our surveyed works, the common benchmarks for code generation are APPS [28], HumanEval [13], HumanEval $^ +$ [54], HumanEval-ET [21], multi-language benchmarks HumanEvalX [124] and HumanEval-XL [75], as well as MBPP [5], $M \mathrm { B P P + }$ , MBPP-sanitized [6], and MBPP-ET. Code generations results for various techniques can be compared in tables: 3, 7, 6.
For GitHub issue resolution, the common benchmark was SWE-bench [41] and the results are shown in Tab. 5. For code reasoning, the common results for LiveCodeBench, CodeContests, and $M ^ { 3 }$ ToolEval can be seen in Tab. 4.
# 6 TAXONOMY OF TASKS: NON-AGENTIC
Code reasoning systems are often non-agentic, and they are evaluated on some standard tasks and benchmarks. These tasks or benchmarks can also be reasoning-specific.
# 6.1 Code Tasks
Code generation given natural language description is a popular task with different benchmarks covering various aspects of the problem. HumanEval (HE) [12] is a set of 164 hand-written programming problems, each problem including a function signature, docstring and unit tests. A multilanguage version of HE is also available in HumanEval-XL [76]. MBPP [7] (The Most Basic Programming Problems) benchmark has 1k crowd-sourced Python programming problems and was designed to be solvable by entry-level programmers. EvalPlus [55] augments a given evaluation dataset with large amounts of new test cases created by an automatic test input generator, powered by both LLM- and mutationbased strategies. EvalPlus includes $M B P P +$ , HumanEval+, and EvalPerf. APPS [27] is another benchmark for code generation with $1 0 \mathrm { k }$ samples. More recent extensions of some of the above benchmarks such as HumanEval-ET, MBPPET, and APPS-ET were introduced by Dong et al. [22]. ConvCodeBench [26] is a benchmark for interactive code generation; it uses pre-generated feedback logs, avoiding costly LLM calls for verbal feedback while maintaining strong correlation with live results; Spider [116] [45] is a benchmark to evaluate the generation of SQL queries from natural language.
GitHub Issue Resolution. SWE-Bench [40] is a popular benchmark for the GitHub issue resolution task. Other variations of SWE-Bench include: SWE-Bench Multimodal [111] for visual and user-facing components, Multi-SWEBench [119] and SWE-PolyBench [83] for more programming languages besides Python.
Test generation. Benchmarks like TestEval [97] can help w.r.t. three different aspects: overall coverage, targeted line or branch coverage, and targeted path coverage. SWT-Bench [63] is another GitHub based test-generation benchmark; Otter [1] also proposed an LLM-based solution to generate test cases from issues.
Ira Ceka, Saurabh Pujar, Irene Manotas, Gail Kaiser, Baishakhi Ray, and Shyam Ramji
Table 3: Performance across the APPS benchmark [28], including the APPS Introductory, Interview, Competition, APPS-ET, and APPS overall sets. Default performance is reported as 𝑝𝑎𝑠𝑠@1 $( \% )$ . Approaches marked with $\diamond$ use the $n @ k$ metric, where $n = 5$ and $k = 1 , 0 0 0$ .
# 6.2 Code Reasoning Tasks
CodeContests [51] is a code generation dataset with problems in Python, Java, and $C + +$ , curated from competitive programming platforms such as CodeForces, requiring solutions to challenging code generation problems. More recently, LiveCodeBench (LCB) [36] collected new problems over time from contest platforms including LeetCode, AtCoder, and CodeForces, for a more holistic evaluation. CRUXEval [24] includes both input and output predictions to evaluate code reasoning and code execution, respectively. BigCodeBench [127] challenges LLMs to invoke multiple function calls as tools from multiple libraries and domains for different fine-grained tasks. ReEval [10] helps to analyze how code LLMs reason about runtime behaviors (e.g., program state, execution paths) of programs. The ExeRScope [52] tool helps to analyze the result of code execution reasoning frameworks and understand the impact of code properties. CodeMMLU [60] is a large benchmark to evaluate both code understanding and code reasoning through a multiple-choice question-answering approach. CodeMind [53] is a code reasoning benchmark for LLMs, evaluating Independent Execution Reasoning (IER), Dependent Execution Reasoning (DER), and Specification Reasoning (SR) tasks and metrics. M3ToolEval [99] is used for multi-turn, multi-tool complex tasks.
# 7 COMPARISON AND DISCUSSION
How can variance in performance of different techniques (planning, structure-aware, execution-based, inference scaling, etc.) on common benchmarks be explained by properties of code? First, we must understand why chain-of-thought (CoT) prompting helps over direct prompting. One hypothesis from Prystawski et al. [80]’s work provides theoretical and experimental evidence that intermediate steps (i.e., chainof-thought reasoning) reduce bias in transformers. They show that when training data has local structure (as textual data does), intermediate variables (CoT) can outperform direct prediction (no CoT). This suggests that CoT reasoning helps most when a model is asked to make inferences about concepts that do not co-occur in the training data, but which can be chained together through topics that do.
This may shed light on the variance in performance across different CoT patterns. Section 3.1 surveys works that formulate CoT in plan-based, structure-based, and modular arrangements. The results suggest that modular formats outperform structure-aware ones, which in turn outperform plan-based approaches.
Ira Ceka, Saurabh Pujar, Irene Manotas, Gail Kaiser, Baishakhi Ray, and Shyam Ramji
Table 4: Performance across the LiveCodeBench (LCB), CodeContests (test set), and $\mathbf { M } ^ { 3 }$ ToolEval. Default results are reported as 𝑝𝑎𝑠𝑠 $@ 1$ . Approaches marked with $\dagger$ indicate 𝑝𝑎𝑠𝑠 $@ 5$ , while those marked with $\ddagger$ use the $n @ k$ of $1 0 @ 1 k$ rate. $\mathbf { \boldsymbol { s } } ^ { * }$ results reflect performance on LCB v2.
Structure-aware CoT strategies are better than planning-based CoT strategies, especially for self-contained code-contest benchmarks like MBPP and HE benchmarks. Chain of Grounded Objectives (CGO) outperforms Self-Planning and ClarifyGPT with gpt-3.5 on MBPP-S; it is also better than Self-Planning on ${ \mathrm { M B P P + } }$ . This also holds true for Llama-3-8B-Instr, where CGO is better than Self-Planning. On MBPP and ${ \mathrm { M B P P + } }$ with gpt-4o-mini, SCoT is better than Self-Planning (Table 6).
We posit that because code has properties of structured syntax, the primitive structures invoked within the CoT are highly local in the training data. Structures (such as indents, branches, loop invariants, functions, etc) are seen countless times in the training corpus. The model’s ability to estimate probabilities (and thus its ability to arrive at a correct solution) become sharper by eliciting these localities. Modular structures may push this same principle further, which explains the next finding.
Modularity helps in CoT, as is evident when modular techniques dominate other structured and plan-based CoT approaches. MoT outperforms SCoT and Self-Planning with DS-R1 on MBPP and HE. This is also true for MBPP and ${ \mathrm { M B P P + } }$ with gpt-4o-mini. CodeChain (a modular approach) also outperforms SCoT on APPS overall with gpt-3.5 (Table 3, 6, 7).
Modularity improves upon structure-based CoT by providing ultra-localized scoping; with more clearly defined and specific functionality, modularity eliminates the chance of error propagating to subsequent steps. Additionally, text lacks the precision required for computational tasks, whereas structure and modularity are more precise and unambiguous. Still, there are other fascinating properties of code that can be leveraged: code exhibits deterministic output, executable nature, and error feedback. These properties can be leveraged to validate or verify a solution, which explains the next observation:
Ira Ceka, Saurabh Pujar, Irene Manotas, Gail Kaiser, Baishakhi Ray, and Shyam Ramji
Table 5: Performance on SWE-Bench Verified, and SWE-Bench Lite, and SWE-Bench. Performance is measured by resolved rate.
Execution-aware strategies dominate CoT-based strategies. 𝜇Fix and Self-Debugging surpass other CoT baselines (CGO, SCoT, Self-Plan, Clarify-GPT) on HE (gpt3.5). Revisiting Self-Debugging beats PlanSearch on ${ \mathrm { H E } } +$ (Claude-3.5). 𝜇Fix and Self-Debugging outperform ClarifyGPT on MBPP-ET (gpt-3.5), further reinforcing dominance of execution-based methods. On MBPP with gpt-3.5, Self-Debugging surpasses SCoT by a large margin. $\mu \mathrm { F i x }$ and Self-Debugging outperform UniCoder on HE. The findings hold true on the APPS benchmark, where $\mu \mathrm { { F i x } }$ outperforms CodeChain, SCoT, and Self-Planning with gpt-3.5. This is true for DeepSeek-Coder as well, where $\mu \mathrm { F i x }$ , SelfDebugging, and CYCLE models, which are smaller-sized parameter models but finetuned, outperform SCoT. (Tables 3, 6, 7).
We posit that execution may help because executing code can be used as a deterministic check. Any chain that violates the check can be discarded. Hence, bad chains are filtered out, so variance may collapse faster. However, even with reduced variance, LLMs can still exhibit issues, such as model rigidity. Because code is inherently deterministic (i.e. under certain assumptions, a given input consistently produces the same output), this can lead models to develop rigid generation patterns in training. For example, Twist et al. [94] show that LLMs exhibit a strong bias towards certain programming languages, like Python; Liu et al. [56] document the pervasiveness of repetition in LLM-based code generation, where models often reproduce patterns observed in training. Zhang et al. [122] demonstrate that LLMs favor certain libraries and APIs by default, reflecting the distribution of their training corpora. Furthermore, Pan et al. [74] show that LLMs struggle to generalize to the architectural design principles of given projects, leading to the generation of conflicting code. This phenomenon compels the integration of search in order to explore diverse trajectories, which explains the recent success of inference scaling techniques.
Approaches that integrate inference scaling outperform execution-dominant or CoT-dominant strategies. CodeTree outperforms Revisiting Self Debugging on ${ \mathrm { M B P P + } }$ with gpt-4o. ORPS outperforms MoT and other structure-based and plan-based approaches (like SCoT and Self-Planning) on MBPP with gpt-4o-mini. This is also true for MBPP with DeepSeekCoder; ORPS outperforms UniCoder by a large margin. REx with gpt-4 also claims to achieve the state-of-the art on APPS, with roughly $7 0 \%$ . $S ^ { \star }$ also beats PlanSearch on LCB with o1-mini and 4o-mini. (Tables 4, 6)
Due to the issues mentioned prior, methods that incorporate both exploration and feedback (as these search-based techniques do) have shown superior performance. These methods can actively counteract model rigidity by encouraging the model to deviate from its default generation paths, resulting in more diverse and contextually appropriate outputs. In fact, several works support the case for re-sampling, or exploring multiple and diverse paths through a combination of models.
Agentic approaches appear to dominate both execution-based and CoT strategies. PairCoder and AgileCoder significantly outperform ClarifyGPT with gpt4 on HE. PairCoder is better than CGO and Self-Planning on ${ \mathrm { M B P P + } }$ with gpt-3.5. Both PairCoder and Agile coder are better than SCoT on MBPP with gpt-3.5; both dominate Self-Debugging as well. With DeepSeek-coder on HE, Paircoder outperforms $\mu \mathrm { F i x }$ , Self-Debugging, and UniCoder; also with DeepSeek-Coder, PairCoder outperforms UniCoder on MBPP. This is also true for gpt-4 on MBPP-S, where PairCoder outperforms ClarifyGPT. (Tables 3, 6, 7)
Agentic approaches succeed by integrating chain-of-thought reasoning, execution-based validation, and sampling into a unified framework–thus leveraging code’s structured syntax, executable semantics, and error feedback all in one.
Agentic approaches that scale inference with search are highly competitive and can even outperform other strategies. CodeTree outperforms MoT, SCoT, SelfPlanning, and PlanSearch on ${ \mathrm { H E } } +$ and ${ \mathrm { M B P P + } }$ with gpt-4omini; CodeTree outperforms these strategies with gpt-4o as well. CodeTree also outperforms ORPS on MBPP with gpt-4o-mini. On $M ^ { 3 }$ ToolEval, ToC is better than CodeAct. Moreover, SWE-Search, which combines inference scaling in an agentic approach, dominates the leaderboard on SWE-Bench Lite. (Tables 4, 5, 6, 7).
Furthering the case for counteracting model rigidity, agents that integrate search to scale their inference achieve state-ofthe-art performance. ToC and SWE-Search in particular show that integrating diverse trajectories (either via multiple models or collaborative agents) and incorporating backtracking can lead to major gains. This reinforces the case for exploration. Indeed, SWE-Search tops the leaderboard, achieving $31 \%$ on SWE-Bench Lite (Table 5). We leave it to future work to undertake the validation and theoretical substantiation of the premises discussed here. | The rise of large language models (LLMs) has led to dramatic improvements
across a wide range of natural language tasks. These advancements have extended
into the domain of code, facilitating complex tasks such as code generation,
translation, summarization, and repair. However, their utility for real-world
deployment in-the-wild has only recently been studied, particularly on software
engineering (SWE) tasks such as GitHub issue resolution. In this study, we
examine the code reasoning techniques that underlie the ability to perform such
tasks, and examine the paradigms used to drive their performance. Our
contributions in this paper are: (1) the first dedicated survey on code
reasoning for code tasks, highlighting overarching strategies, hybrid and
agentic approaches; (2) a taxonomy of various techniques used to drive code
reasoning; (3) a comprehensive overview of performance on common benchmarks and
a showcase of new, under-explored benchmarks with high potential in SWE; (4) an
exploration on how core properties of code can be used to explain different
reasoning techniques; and (5) gaps and potentially under-explored areas for
future research. | [
"cs.SE",
"cs.AI"
] |
# I. INTRODUCTION
By 2050, global agricultural production must double to feed 10 billion people—a Herculean task exacerbated by climate-induced disruptions to pest populations. Rising temperatures accelerate insect reproduction cycles, amplifying crop destruction: the UN Food and Agriculture Organization estimates annual losses of $2 0 \%$ $\$ 70$ billion) due to pests. These losses are further compounded by inefficient pest treatment strategies, often disregarding Integrated Pest Management (IPM) recommendations. While cutting-edge solarpowered insect monitoring tools have modernized entomological studies and pest control efforts, many rural agricultural edge devices operate with limited computational resources. They are unavailable due to poor network connectivity, delaying near real-time data analysis. This leads to pesticide overuse, increasing growers’ input costs and jeopardizing ecological integrity through soil degradation, water contamination, and killing pollinators. With $1 5 0 +$ AgriTech interviews, the authors observed the need for resource-efficient and privacy-centric technologies, adaptability for heterogeneous devices, and delaying actionable insights by days [1], [2].
Federated Learning (FL) [3] emerged as a distributed machine learning (ML) by exchanging model gradients instead of raw data (Fig. 1 (a)). However, FL mandates full DNN model training over the local edge devices, straining devices with limited compute or intermittent connectivity—a mismatch for agricultural edge nodes governed by solar cycles and sub-1 Mbps bandwidth [4], leading to inefficiencies, infeasibility, and a lower technology adoption rate. Though
FL restricts the flow of raw data, gradient inversion attacks [5] can reconstruct sensitive farm data, eroding trust in FL’s privacy guarantees. To address FL’s limitations, Split Learning (SL) [6] is a lightweight alternative for leveraging client data that cannot be centralized due to bandwidth limitations, high communication and computational overheads, and privacy concerns. In SL, a DNN model, denoted as $\mathcal { M }$ , is split into two segments: $\mathcal { M } _ { C }$ for the client’s device and $\mathcal { M } _ { S }$ for the server, as presented in Fig. 1 (b). The client-side split, $\mathcal { M } _ { C }$ , contains the input layer upto the split point (a.k.a. cut layer), while the server-side split $\mathcal { M } _ { S }$ comprises the remaining layers. Given $N$ participating clients in the SL framework, each client conducts a forward propagation (e.g., feature extractors) on their local data $\mathcal { M } _ { C }$ upto the cut layer and sends intermediate activations (smashed data) to the server. Upon receiving the smashed data, the server completes the forward propagation and loss computation and commences backpropagation until the cut layer. Once the gradients are calculated, $\mathcal { M } _ { S }$ send them to the respective client to complete the backpropagation and update the model parameters. This completes one round, and the cycle persists until convergence.
While SL reduces client-side computing by $3 0 \mathrm { - } 6 0 \%$ than FL [7], its rigid same-split-for-all design inherently assumes homogeneous device capabilities—a flawed premise in agriculture. Powerful drones idle with shallow splits, while farmedge insect monitoring devices choke on deep ones, creating stragglers that delay global convergence. Static splits also ignore transient factors like battery drain or network congestion. These tailored challenges lead to powerful devices remaining underutilized, while weaker devices risk becoming performance bottlenecks, and the close coupling between the server and each device can exacerbate system fragility. Recent work [8] adapted splits per device; however, they rely on heuristic policies that lack theoretical guarantees and fail to scale with dynamic agriculture conditions. Given the increasing complexity of deep neural network (DNN) architectures, deterministically selecting an optimal, personalized cut layer (DNN-agnostic) for each resource-constrained edge device is challenging. These challenges are magnified when dynamic cut-layer identification and assignment are required across adaptive agricultural devices. Motivated by reinforcement learning’s (RL) ability to adapt to dynamic, uncertain environments via reward-driven exploration, our approach leverages RL to determine optimal cut-layer placements autonomously.
We employ a Q-learning agent that frames split selection as a finite Markov Decision Process (MDP), balancing computational load, latency, and accuracy. This adaptive framework overcomes agriculture-specific limitations, aligning with UN sustainable development goals and inherently protecting from adversarial model inversion attacks [5], offering a blueprint for dynamic SL irrespective of underlined DNN (as shown in Fig. 1 (c)) in applications such as smart healthcare and Industry 4.0. Overall, our solution effectively addresses device heterogeneity and dynamic resource management challenges. Our contributions are:
1) Our proposed RL-based dynamic SL framework, abbreviated as ReinDSplit, addresses the inherent limitations of the conventional SL, specifically accommodating heterogeneous devices. To our knowledge, ReinDSplit is the first step towards adaptive SL using RL.
2) We provided theoretical analysis demonstrating that, with Q-learning on a finite MDP, the probability of choosing an infeasible split approaches zero, ensuring stable local gradients, straggler-free convergence, and improved resource utilization of heterogeneous devices.
3) We conducted comprehensive experiments on ReinDSplit usability (accuracy) and split-point assignment trade-off with model performance and client load for three pest recognition datasets with three SOTA DNN architectures: ResNet18, GoogleNet, and MobileNetV2.
4) ReinDSplit outperforms traditional SL and is comparable to FL by delivering superior computational efficiency and model accuracy $( 7 1 - 9 4 \% )$ across IID and non-IID settings. MobileNetV2 achieves $9 4 . 3 1 \%$ (IID), demonstrating robust tolerance to heterogeneous distributions.
Organization: Related work in Section II and preliminaries in Section III. Our system model and proposed methodology are discussed in Section IV, and theoretical analysis and experimental results in Section VII and Section VII, respectively. Section VII concludes the paper with future work.
# II. RELATED WORK
# A. Pest Monitoring and Precision Agriculture
Advances in sensing technologies and ML are driving precision pest monitoring and IPM strategies. Sensor- and imagebased detection has demonstrated upto $9 7 \%$ accuracies [9]. Further refinement is illustrated by [10], where a Faster R-CNN model with a MobileNetV3 backbone achieved an average precision of $9 2 . 6 6 \%$ for insect detection under nonhomogeneous conditions. Optical remote sensing methods, including UAV and satellite platforms [11], expand the spatial scope for real-time pest or disease surveillance. In particular, [12] collected 10K field records using optical sensors in oilseed rape crops, surpassing $80 \%$ accuracy in classifying flying pests. UAVs are increasingly crucial in precision spraying and localized interventions [13]. Beyond high-resolution imaging capabilities, they can be tailored for different agronomic tasks, as shown by [14], which investigated fixedwing, single-rotor, and multi-rotor UAVs for targeted pest management. Real-time DNN frameworks have also emerged; [15] offers continuous orchard surveillance with minimal power demands, though bandwidth constraints persist in large farms [16]. The scope of detection tasks extends beyond pests, with [17] demonstrating anomaly segmentation in UAV images for weeds and other farmland disruptions.
# B. Split Learning
FL and SL have gained attention to safeguard data privacy and reduce computational burdens in agriculture-focused IoT scenarios. Leveraging SL’s capacity for partial computation offloading, [18] presents binarized local layers, cutting memory overhead and curtailing exposure of sensitive model intermediates with minimal performance decline. Further enhancements in distributed learning involve hybrid architectures. PPSFL [19] merges FL and SL with private group normalization, effectively handling data heterogeneity and resisting gradient inversion attempts. Likewise, a federated split learning method in [20] lowers the computational burden for client devices, retaining comparable accuracy to standard SL. Parallel SL designs have also been introduced for communication efficiency, as discussed in [21], where channel allocation and gradient aggregation significantly cut overall latency. Heterogeneous client scenarios motivate ring-based SL strategies, with [22] mitigating slower “straggler” clients.
# C. RL-based Resource Allocation Strategies
The interplay between device heterogeneity, fluctuating connectivity, and limited energy budgets in IoT networks necessitates robust resource allocation strategies. RL has emerged as a powerful tool for dynamic optimization, offering policy-driven adaptations at runtime. Concurrent Federated RL [23] exemplifies this, merging FL principles with RL agents to improve system-wide utility and expedite task completions in edge computing. The approach outperforms classical baselines by jointly addressing privacy preservation and rapid decision-making. Similarly, [24] adopts a deep RL framework for mobile edge computing, reporting $1 5 \%$ lower task completion times and a $20 \%$ reduction in resource demands compared to standard DQN approaches.
Clustering-based offloading further refines performance, as demonstrated by [25], which outperforms system cost outcomes via an RL-driven grouping of IoT users. Additional complexities arise when handling general task graphs, prompting the advanced DRL scheme of [26] to reduce energy-time costs by upto $9 9 . 1 \%$ of the theoretical optimum. DeepEdge [27] similarly harnesses a two-stage RL scheme to improve QoE, enhancing latency and success rates for edgebased IoT workloads. A multi-agent perspective is highlighted in [28], where IL-based Q-learning yields a $2 5 \%$ improvement in system costs by enabling distributed decision-making among selfish clients. Although these studies illustrate RL’s efficacy, concerns over high-dimensional state spaces and scalability persist in multi-farm or large-scale settings.
# III. REINFORCEMENT LEARNING OVERVIEW
1) Preliminaries: A Markov Decision Process (MDP) is characterized by a tuple $( S , A , P , R , T )$ , where $S$ denotes set of all possible states, $A$ denotes the finite set of all possible actions, $P : S \times A \to P ( S )$ is the state transition probability function; $\textstyle P ( s _ { t + 1 } \mid s _ { t } , a _ { t } )$ giving the probability of moving to state $s _ { t + 1 }$ from state $s _ { t }$ after acting $a _ { t }$ , $R : S \times A \times S \to \mathbb { R }$ is the reward function, where $R _ { t } \ = \ R ( s _ { t } , a _ { t } , s _ { t + 1 } )$ is the reward received by the agent upon transitioning from $s _ { t }$ to $s _ { t + 1 }$ via $a _ { t }$ , and $T$ is the terminal time. We seek a policy $\pi _ { \boldsymbol { \theta } }$ (parametrized by $\theta$ ) that maximizes the cumulative, possibly discounted, future rewards in this MDP. Formally, if $\gamma \in [ 0 , 1 ]$ is the discount factor, the expected return starting from state $s _ { t }$ and action $a _ { t }$ under policy $\pi _ { \boldsymbol { \theta } }$ is captured by the stateaction value function $Q ( s _ { t } , \underset { \infty } { a _ { t } } )$ , defined as
$$
Q ( s _ { t } , a _ { t } ) = \mathbb { E } \Big [ \sum _ { i = 0 } \gamma ^ { i } R _ { t + 1 + i } s _ { t } , a _ { t } \Big ]
$$
We aim to identify an optimal policy $\pi _ { \boldsymbol { \theta } } ^ { * }$ that yields the highest $Q$ -values possible, say $\pi _ { \theta } ^ { * } = \arg \operatorname* { m a x } _ { \theta } Q ( s _ { t } , a _ { t } )$ .
In RL, an agent learns purely from trial-and-error experience, guided by reward signals rather than labeled examples. At each step, the agent in state $s _ { t }$ takes an action $a _ { t } \in A$ . Upon executing $a _ { t }$ , it transitions to a new state $s _ { t + 1 }$ and obtains a scalar reward $R ( s _ { t } , a _ { t } , s _ { t + 1 } )$ . A policy $\pi$ maps each state to an action (or probability distribution over actions). The agent’s main objective is to find an optimal policy $\pi ^ { * }$ that yields the maximum expected discounted return. Mathematically,
$$
\pi ^ { * } ( s ) = \arg \operatorname* { m a x } _ { a \in A } \gamma \sum _ { s ^ { \prime } \in S } P ( s ^ { \prime } \mid s , a ) V ^ { * } ( s ^ { \prime } ) ,
$$
where $V ^ { * } ( s )$ is the optimal value function at state $s$ , satisfying
$$
V ^ { * } ( s ) = \operatorname* { m a x } _ { a \in A } \Big ( R ( s , a ) + \gamma \sum _ { s ^ { \prime } \in S } P ( s ^ { \prime } \mid s , a ) V ^ { * } ( s ^ { \prime } ) \Big )
$$
These value functions capture how “good” it is to be in a particular state and help evaluate different policies.
2) Deep $R L$ via $\boldsymbol { Q }$ -Learning: An approximate function is used to represent $Q$ , especially for high-dimensional state or action spaces. We approach includes Deep $\boldsymbol { \mathcal { Q } }$ -Network (DQN), which replaces the tabular $\mathrm { \Delta Q }$ -value storage with a neural network $Q _ { \theta } ( s , a )$ . In DQN, our network takes as input the current state, $s _ { t }$ , and the output layer provides an estimate $Q _ { \theta } ( s _ { t } , a )$ for each $a \in A$ .
Loss Function: At each time step $t$ , we observe a transition, $( s _ { t } , a _ { t } , R _ { t + 1 } , s _ { t + 1 } )$ and the target for Q-learning is given by
$$
y _ { t } = R _ { t + 1 } + \gamma \operatorname* { m a x } _ { a ^ { \prime } } Q _ { \theta ^ { - } } ( s _ { t + 1 } , a ^ { \prime } ) ,
$$
where $\theta ^ { - }$ represents the parameter set of a target network, which is periodically updated (and remains fixed between updates to stabilize training). We calculate the DQN loss as
$$
L ( \theta ) = { \Bigl ( } y _ { t } \ - \ Q _ { \theta } ( s _ { t } , a _ { t } ) { \Bigr ) } ^ { 2 } ,
$$
where $y _ { t }$ is the target from (4), and $Q _ { \theta } ( s _ { t } , a _ { t } )$ is the predicted Q-value from the DQN. The parameters $\theta$ are updated by minimizing $\textstyle \sum _ { t } L ( \theta )$ .
# A. Why DQNs for Our Framework?
Our framework operates in a discrete action space where each action designates a model cut layer under dynamic resource and time constraints. Because the state spaceencompassing device resource availability, time windows, and partial model outputs are large and complex, tabular Q-learning becomes impractical. Instead, DQNs leverage neural function approximators to estimate Q-values within this complex space. By framing split selection as a finite MDP, our approach exploits RL’s reward-driven exploration to adapt to uncertain environments. Leveraging replay, a target network, and an $\epsilon$ -greedy strategy, we balance exploration and exploitation, optimizing split assignments across devices.
# IV. OUR PROPOSED FRAMEWORK
This section discusses our system model, mathematical formulation, and proposed framework, ReinDSplit.
# A. System Model
We consider an agricultural region $\mathcal { R }$ , comprises $N$ geographically separated farms. In each location, a client or edge device $d _ { i } \in \mathcal { H } = \{ d _ { 1 } , d _ { 2 } , . . . , d _ { N } \}$ captures high-resolution images of insect pests, as presented in Fig. 2. Though spatially apart, these farms cultivate the same crops (like soybeans or corn) under analogous conditions, thus exhibiting a near- $. I I D$ distribution of pest species. Each $d _ { i }$ has limited computational resources $R _ { i }$ (e.g., CPU/Jetson Nano) and a varying active time window $T _ { i }$ due to solar battery life and power schedules. We assume an intermittent communication network exists (periodically slow and unreliable) for smashed data exchange. Contextually, we use device and client interchangeably.
A cloud server $s$ manages the $\mathrm { s L }$ framework, where a global DNN model $\mathcal { M }$ is partitioned into $K$ “client-server” submodel pairs denoted as:
$$
\Gamma = \Big \{ \big ( \mathcal { M } _ { C } ^ { 1 } , \mathcal { M } _ { S } ^ { 1 } \big ) , \big ( \mathcal { M } _ { C } ^ { 2 } , \mathcal { M } _ { S } ^ { 2 } \big ) , \dots , \big ( \mathcal { M } _ { C } ^ { K } , \mathcal { M } _ { S } ^ { K } \big ) \Big \} ,
$$
where each $\mathcal { M } _ { C } ^ { k }$ is computed locally on $d _ { i }$ and the complementary part $\mathcal { M } _ { S } ^ { k }$ executes on $S$ . The server is aware and can estimate each submodel’s minimum requirements $R _ { \mathrm { r e q u i r e d } } ( \mathcal { M } _ { C } ^ { k } )$ and $T _ { \mathrm { r e q u i r e d } } ( \mathcal { M } _ { C } ^ { k } )$ , along with local farm details, such as $R _ { i } , T _ { i }$ and dataset. By selecting an appropriate split point (or cut layer) $k$ for each $d _ { i }$ , the system aims to balance classification accuracy with heterogeneous computational and time constraints. If $\mathcal { M } _ { C } ^ { k }$ demands more time than expected $T _ { i }$ , the device becomes a straggler, leading to incomplete local training. Alternatively, our proposed framework aims to maximize aggregated accuracy as:
$$
\begin{array} { r l } { \underset { \varphi } { \operatorname* { m a x } } } & { \displaystyle \sum _ { i = 1 } ^ { N } \mathrm { A c c } \big ( \varphi ( i ) \big ) } \\ { \mathrm { s u b j e c t ~ t o } } & { R _ { \mathrm { r e q u i r e d } } \big ( \mathcal { M } _ { C } ^ { \varphi ( i ) } \big ) \leq R _ { i } , \quad \forall i = 1 , \dots , N , } \\ & { T _ { \mathrm { r e q u i r e d } } \big ( \mathcal { M } _ { C } ^ { \varphi ( i ) } \big ) \leq T _ { i } , \quad \forall i = 1 , \dots , N , } \\ & { \varphi ( i ) \in \{ 1 , \dots , K \} , \quad \forall i = 1 , \dots , N . } \end{array}
$$
where, $\varphi ( i ) = k$ is an allocation function that decides the split model-pair $\left( \mathcal { M } _ { C } ^ { k } , \mathcal { M } _ { S } ^ { k } \right)$ to be assigned to the farm $d _ { i }$ and $\operatorname { A c c } ( \varphi ( i ) )$ denotes the expected pest classification accuracy. In this paper, we constitute only two constraints; however, in our future work, we will formulate additional constraints such as battery budget, memory, bandwidth, etc., and objectives (e.g., minimizing average training time, balancing device load, or weighting accuracy per device).
Fig. 2: Overview of our proposed ReinDSplit-based pest recognition system. Client 2’s device is our in-lab-developed automated insect monitoring prototype and will be deployed for real-time field validation in our future work.
# B. Reinforcement-based Dynamic SL (ReinDSplit)
ReinDSplit develops an adaptive policy for partial model allocation that considers dynamic resource and time constraints, precluding raw data sharing. Each client $d _ { i } \in \mathcal { H }$ is viewed as an agent in an MDP, where $t ^ { t h }$ state $s _ { t }$ encodes (i) local resources $R _ { i }$ , (ii) time availability $T _ { i }$ , and (iii) partial model parameters for the client-side split $\mathcal { M } _ { C }$ . An action $\boldsymbol { a } _ { t }$ determines the current split point. The reward $R _ { t + 1 }$ derives from a performance objective (Eq. 6) once the partial forward pass and backward propagation complete with $\mathcal { M } _ { C }$ and $\mathcal { M } _ { S }$ .
By learning a Q-function mapping states to split-layer actions, ReinDSplit adaptively selects $\mathcal { M } _ { C } ^ { k } , \mathcal { M } _ { S } ^ { k }$ for each round. This balances efficiency (lightweight partial forward passes on constrained devices) and performance (server-side layers benefit from aggregated gradient signals). Moreover, limited gradient transmission to the server and raw data always remaining local increases protection from adversarial attacks such as data reconstruction, thus enhancing privacy [5]. This unifies RLs’ multi-agent perspective with adaptive model partitioning of SL to orchestrate ReinDSplit in resource- and privacy-critical scenarios.
# C. Mathematical Formulation
Let the overall training be divided into discrete rounds $t =$ $1 , 2 , \ldots$ We aim to learn a policy that adaptively assigns the split index $\varphi ( i )$ with constraints $R _ { i }$ and $T _ { i }$ (ref. Eq. (6)).
$\boldsymbol { { \mathit { 1 } } }$ ) State Space: At round $t$ , the state of the device $d _ { i }$ is:
$$
s _ { i } ^ { t } = \big ( R _ { i } ^ { t } , T _ { i } ^ { t } , \mathcal { P } _ { i } ^ { t } \big ) ,
$$
where $R _ { i } ^ { t }$ and $\it { T _ { i } ^ { t } }$ represent the available computational resources and time window, respectively, for device $d _ { i }$ , dedicated to local training in round $t$ , and $\mathcal { P } _ { i } ^ { t }$ encapsulates partial model performance metric for the client-side model.
After each round, the environment (ReinDSplit plus the device’s local conditions) transitions $s _ { i } ^ { t } \ \to \ s _ { i } ^ { t + 1 }$ based on the chosen action $a _ { i } ^ { t }$ and the resource–time consumption of training, as highlighted in Fig. 2.
2) Action Space: At each round $t$ , device $d _ { i }$ selects an action $a _ { i } ^ { t }$ from the finite set $\mathcal { A } _ { i } = \{ 1 , 2 , . . . , K \}$ , where each integer $k \in \{ 1 , \ldots , K \}$ indicates choosing the split pair $\left( \mathcal { M } _ { C } ^ { k } , \mathcal { M } _ { S } ^ { k } \right)$ for local processing. Here, $a _ { i } ^ { t } = k$ determines how many layers are executed on the client versus the server, thus dictating the local resource–time burden.
3) Reward Function: Upon taking action $a _ { i } ^ { t }$ in state $s _ { i } ^ { t }$ , the agent receives an immediate reward $r _ { i } ^ { t }$ . We design $r _ { i } ^ { t }$ to balance classification performance and resource–time feasibility, aligning with the objective formulated in (6):
$$
r _ { i } ^ { t } = \left\{ \begin{array} { l l } { \alpha \operatorname { A c c } \big ( \varphi ( i ) \big ) - \beta \left( \operatorname* { m a x } \{ 0 , ~ R _ { \mathrm { r e q u i r e d } } ( \mathcal { M } _ { C } ^ { \varphi ( i ) } ) - R _ { i } ^ { t } \} \right. } \\ { \qquad + \left. \operatorname* { m a x } \{ 0 , ~ T _ { \mathrm { r e q u i r e d } } ( \mathcal { M } _ { C } ^ { \varphi ( i ) } ) - T _ { i } ^ { t } \} \right) , } & { \quad \mathrm { i f ~ f e a s i b l e } , } \\ { - \gamma \left( \mathrm { p e n a l t y } \right) , } & { \quad \mathrm { o t h e r w i s e } } \end{array} \right.
$$
where $\alpha , \beta$ , and $\gamma$ are nonnegative weighting parameters. The term $\operatorname { A c c } ( \varphi ( i ) )$ captures the classification accuracy for the chosen split, while the penalty terms reflect deficits in resources or time. Infeasible actions incur a direct penalty to discourage unworkable split assignments.
4) RL Objective: Each device $d _ { i }$ seeks a policy $\pi _ { i } \colon s _ { i } ^ { t } \mapsto$ $a _ { i } ^ { t }$ that maximizes its discounted cumulative reward:
$$
\operatorname* { m a x } _ { \pi _ { i } } \quad \mathbb { E } _ { \pi _ { i } } \bigg [ \sum _ { t = 0 } ^ { \infty } \delta ^ { t } r _ { i } ^ { t } \bigg ] ,
$$
where $\delta \in [ 0 , 1 ]$ is discount factor. The Q-function $Q _ { i } ^ { \pi } ( s , a )$ is approximated via a DQN, updated iteratively to converge to an optimal policy $\pi _ { i } ^ { * }$ . This occurs for all devices in $\mathcal { H }$ .
By penalizing or rewarding local splitting decisions, ReinDSplit allocates deeper model segments to nodes with ample computational resources, while resource-constrained devices offload heavier workloads to a centralized server, eliminating conventional SL strategies’ “same-split-fits-all” limitation for heterogeneous scenarios with various applications such as the pest recognition system. Additionally, ReinDSplit maintains raw image data onsite, preserving privacy and enabling large-scale deployment across geographically dispersed farms. We provide the pseudocode of ReinDSplit in Algorithm 1 as Appendix.
# V. THEORETICAL ANALYSIS
This section develops a theoretical foundation for straggler mitigation and convergence of our ReinDSplit framework.
Definition 1 (Straggler Effect in ReinDSplit). Let $\mathcal { H } =$ $\{ d _ { 1 } , d _ { 2 } , \dots , d _ { N } \}$ be $N$ devices, and each $d _ { i }$ selects a split index $k \in \{ 1 , \ldots , K \}$ using its $R L$ policy $\pi _ { i }$ . We define the straggler effect as the probability that, in a training round $t$ ,
$$
\exists d _ { i } s u c h t h a t \Delta _ { i } ^ { k } < 0 \quad a n d \quad a _ { i } ^ { t } = k ,
$$
where $\Delta _ { i } ^ { k }$ is a local resource surplus for a device $d _ { i }$ under split $k$ . Equivalently, a straggler arises if a device selects $a$ split that exceeds its resource/time availability, delaying the global update or even incomplete local training.
Lemma 1 (Bound on Straggler Probability). Suppose each device $d _ { i }$ executes a $\boldsymbol { Q }$ -learning over a finite state-action space $S _ { i } \times \{ 1 , \ldots , K \}$ . Then the probability that $d _ { i }$ selects an infeasible action $k \notin \mathcal { F } _ { i }$ (i.e., $\Delta _ { i } ^ { k } < 0 .$ ) $t \to 0$ as $t \to \infty$ :
$$
\operatorname* { l i m } _ { t \infty } \mathrm { P r } [ a _ { i } ^ { t } \notin \mathcal { F } _ { i } ] = 0
$$
Theorem 1 (Diminishing Straggler Effect). Let $\pi _ { i } ^ { * }$ be the optimal policy for the device $d _ { i }$ in our proposed ReinDSplit framework. Suppose Pr[Straggler at round $t ]$ is the probability that, at time step $t _ { : }$ , at least one device $d _ { i } \in \{ d _ { 1 } , \ldots , d _ { N } \}$ selects an action/split $k$ outside its feasibility set $\mathcal { F } _ { i }$ . Mathematically,
$$
\begin{array} { r l } & { \operatorname { \it ~ \beta ~ } ^ { a u t y , } } \\ & { \operatorname* { P r } \big [ S t r a g g l e r ~ a t ~ r o u n d ~ t \big ] ~ = ~ \operatorname* { P r } \Big [ \bigcup _ { i = 1 } ^ { N } \{ a _ { i } ^ { t } \notin \mathcal { F } _ { i } \} \Big ] . } \end{array}
$$
Then, under $\boldsymbol { Q }$ -learning convergence:
$$
\operatorname * { l i m } _ { t \infty } ~ \mathrm { P r } \big [ S t r a g g l e r ~ a t ~ r o u n d ~ t \big ] ~ = ~ 0 .
$$
Definition 2 (ReinDSplit Convergence). Global convergence is achieved when, across repeated training rounds, each device $d _ { i }$ takes action $k \in \{ 1 , \ldots , K \}$ following its learned policy $\pi _ { i }$ , executes local forward/backward passes on $\mathcal { M } _ { C } ^ { k }$ , and transmits the resulting smashed gradients to the server for updates of $\mathcal { M } _ { S } ^ { k }$ . Convergence occurs if the local (clientside) and global (server-side) model parameters stabilize in expectation, ensuring no unbounded variance in performance metrics (accuracy) over time.
Lemma 2 (Stability of Local Updates). If each device $d _ { i }$ consistently selects actions $k \in \mathcal { F } _ { i }$ (its feasibility set), then local updates remain bounded. Formally, for any feasible split $k \in \mathcal { F } _ { i }$ , the gradients of the partial model $\mathcal { M } _ { C } ^ { k }$ satisfy
$$
\lvert | \nabla \mathcal { M } _ { C } ^ { k } \rvert | \ \leq \ M _ { \mathrm { g r a d } } ,
$$
where $M _ { \mathrm { g r a d } }$ is a device-independent constant determined by batch size and network architecture factors.
Theorem 2 (Global Convergence of ReinDSplit). Suppose a finite MDP models each device $d _ { i }$ with a corresponding $\boldsymbol { \mathcal { Q } }$ -learning routine that converges to the optimal policy $\pi _ { i } ^ { * }$ ; local gradients remain bounded as per Lemma 2; and a central server periodically aggregates partial updates from all devices. Then, our proposed ReinDSplit algorithm converges in expectation to a stable partition of model parameters $\{ \mathcal { M } _ { C } ^ { k ^ { * } } , \mathcal { M } _ { S } ^ { k ^ { * } } \}$ across devices, thereby maximizing total accuracy subject to the given resource/time constraints.
# VI. EXPERIMENTAL ANALYSIS
To validate ReinDSplit, we implemented all experiments in Python 3 using PyTorch, utilizing 2 Nvidia V100 GPU nodes, 8 CPU cores, and $4 0 ~ \mathrm { G B }$ of system memory. We performed our experiments on three DNN architectures: ResNet18 (RN), GoogleNet (GN), and MobileNetV2 (MN), and three pest datasets, namely Economic Crops (EC) [29], Field Crops (FC) [29] (we adopted EC and FCs’ classes as given in Table 1 of [29]), and Kaggle’s Agriculture Pests (KAP) [30] for clients $\mathit { \Theta } = \ 1 0 \$ . For classes, please refer to Table I in the Appendix. We partitioned each dataset into train $( 7 5 \% )$ , validation $( 1 5 \% )$ , and test $( 1 0 \% )$ , and images are resized to (224,224).To simulate the dynamic split actions in vertical SL, we partition each model into 5 sub-models for uniformly comparing computational loads between client and server when selecting different cut layers. For different split (or cut) layers of MobileNetV2, refer to Table II in the Appendix.
Fig. 3: Radar charts comparing normalized evaluation metrics for (a) EC, (b) FC, and (c) KAP datasets. Higher values closer to the outer edge indicat better performance, while the uniformity of the polygon shape reflects balanced model behavior across all metrics.
Hyperparameter Tuning and Implementation: We tuned hyperparameters with 20 Optuna trials: learning rate $\mathrm { ( l r = 1 e { - } } 4$ , 1e-2), weight decay $( = 1 \mathrm { e } { - } 6$ , 1e-3), discount factor $( \gamma = [ 0 . 9 5 , 0 . 9 9 , 0 . 9 9 9 ] )$ , batch size $\langle \{ 3 2 , 6 4 \} \rangle$ ), and target network update frequency. Each trial runs for 50 training episodes, with 75 steps per episode. We mitigate oscillations in Q-learning with a replay buffer to batch updates of the Qnetwork, synchronizing a target network every 500 or 1000 step, depending on Optuna’s suggestion. We assigned a subset of the training dataset for each client via class-based shards to employ the non-IID distribution. We considered the crossentropy loss function and the AdamW optimizer. We validated ReinDSplits’ performance with accuracy, precision, recall, F1- score, and Matthews Correlation Coefficient (MCC).
1) Heterogeneous Devices Simulation: To emulate farming environments, we simulate $N = 5$ virtual clients, each with an assigned computational capacity and time constraint drawn from a continuous range [0.5, 7.5]). These states vary stochastically across training rounds to reflect dynamic resource availability, and each client can become unavailable with $10 \%$ probability. The agent’s action space thus consists of 5 possible split points, where choosing a higher split index implies more local layers.
2) Dynamic Allocation: Our Q-learning framework maintains a state vector compute capacity, time constraint for each client, and the agent selects one of the five split indices at each round. Our vanilla Q-network is a 2-layered, fully connected neural network with ReLU activation. Specifically, the first layer maps the 2-dimensional state vector $\boldsymbol { s } _ { i } ^ { t }$ to a 128-dimensional hidden representation, and the second layer outputs Q-values for each of the five possible splits. We deploy a $\epsilon$ -greedy exploration strategy that decays over training episodes with the reward function defined in Eq. 8.
# A. Recognition Analysis
To visualize all evaluation metrics and DNN architectures, we aggregated the mean values for each metric in the IID and non-IID settings, as the radar presentations show in Fig. 3. Next, we applied the min-max normalization approach, mapping all metrics to the [0.01, 1] range to prevent zero values. This normalization highlights relative performance differences without skewing results toward high/low ranges.
EC: MobileNetV2 achieved the highest maximum accuracy of $8 7 . 6 1 \%$ (IID and $8 6 . 3 1 \%$ (non-IID); thus, its observed range can be viewed as $7 9 . 1 9 \% \pm \ 8 . 4 2 \%$ . ResNet and GoogleNet followed closely with peak accuracies around $8 6 . 7 4 \%$ (IID) and $8 6 . 3 7 \%$ (IID). However, their non-IID performance dropped slightly to near $8 4 . 9 8 \%$ and $8 6 . 3 1 \%$ , respectively. Across additional metrics (precision, recall, F1, MCC), all models consistently maintained values above 0.75, signifying robust performance with resource-diverse devices. In comparison, traditional SL (with MobileNetV2) achieved maximum accuracies of $81 \%$ (IID) and $7 9 \%$ (non-IID) with precision, recall, and F1 scores around 0.76 (IID) and 0.74 (non-IID), while FL reached $90 \%$ (IID) and $8 8 \%$ (non-IID).
FC: For this scenario, maximum accuracy spanned the interval $[ 7 9 . 2 4 \%$ , $8 2 . 7 8 \%]$ , with MobileNetV2 topping at $\approx ~ 8 2 . 7 8 \%$ (IID) and $8 2 . 2 4 \%$ (non-IID). ResNet and GoogleNet followed suit, reaching $[ 7 9 . 2 4 \%$ , $8 1 . 5 3 \%]$ range interval. Beyond accuracy, MobileNetV2’s precision, recall, and F1 lay in [0.82, 0.88] under both IID and non-IID conditions, edging out the competing models by a small but consistent margin. Hence, MobileNetV2 retained its advantage in recognizing pests despite varying data distributions.
Fig. 4: Comparison of average split-point frequency (left y-axis) and validation accuracy (right y-axis) over 50 episodes for the FC dataset under (a) IID and (b) non-IID distributions. Each colored marker traces how often a given split point is selected, and the dashed line reflects the evolving mean accuracy.
SL attained approximately $7 6 . 8 \%$ (IID) and $7 6 . 2 \%$ (nonIID), compared to $8 2 . 7 8 \%$ (IID) and $8 2 . 2 4 \%$ (non-IID) for ReinDSplit and $8 5 . 0 \%$ for FL.
KAP: Here, MobileNetV2 again attained the top accuracy of roughly $9 4 . 3 1 \%$ (IID), slightly dipping to $9 4 . 0 8 \%$ (non-IID). Meanwhile, GoogleNet recorded a maximum near $9 3 . 6 2 \%$ , with a modest decline to about $9 3 . 2 0 \%$ in non-IID. ResNet’s peak lay around $9 3 . 1 7 \%$ . When considering non-accuracy metrics, MobileNetV2’s F1 ranged from 0.90 to 0.94, and its MCC consistently exceeded 0.93. Such stable, high metrics highlight MobileNetV2’s capability to handle large, heterogeneous datasets without significant performance degradation. Moreover, SL reported accuracy of $8 8 . 3 \%$ (IID) and $8 8 . 1 \%$ (non-IID), and for FL, $9 7 . 0 \%$ (IID) and $9 6 . 5 \%$ (non-IID), with F1 and MCC values following the same trend.
# B. Impact of Split Points on Accuracy
We analyzed the trade-off of MobileNetV2 model splitpoint (SP) assignments to clients with validation accuracy for the FC dataset, as illustrated in Figure 4. In early episodes, SP2 often surges above 100 out of 120 possible selections, reflecting an initial strategy that balances computation and capacity. In the IID setting, SP2 dominates for the first 10 episodes, driving a $40 \%$ accuracy range by episode 30 (ref. Figure 4 (a)). However, SP3 progressively overtakes SP2 mid-training, stabilizing near 70–80 selections while accuracy plateaus at around $78 \% - 8 0 \%$ . In contrast, the non-IID setting triggers significant accuracy fluctuations—between $60 \%$ and $78 \%$ —as the model navigates heterogeneous distributions (Figure 4 (b)). SP3 and SP4 each show 50-point frequency swings between episodes 20 and 40, aligning with shifts in the loss landscape from uneven client splits. Meanwhile, SP1 and SP5 remain near zero frequency in both data distribution settings, indicating minimal performance gains despite occasional spikes (over 20) late in non-IID training. Finally, although both scenarios converge near $7 5 \% - 8 0 \%$ accuracy, non-IID demonstrates greater volatility in split-point assignments, highlighting the need for adaptive partitioning when data distributions are non-uniform.
C. Trade-Offs Between Client Load, Reward, and Accuracy
In Fig. 5 (a) and (b), we examine how average reward (0.35–0.70) and average client load (0.0–0.60) correlate with classification accuracy $( 8 6 \% - 9 5 \% )$ in ReinDSplit for the KAP dataset. MN IID (purple “ $\times \prime \prime$ ) occupies the upperright quadrant in both subplots, achieving $0 . 6 5 \mathrm { - } 0 . 7 0$ reward at $9 2 \% - 9 5 \%$ accuracy and often surpassing $94 \%$ accuracy at loads above 0.50, implying uniform data distributions leverage additional client computation effectively. In contrast, non-IID configurations—such as RN non-IID (orange squares)—cluster around 0.40–0.60 reward or below 0.3 load, with accuracies of $8 8 \% - 9 2 \%$ , reflecting the constraints imposed by skewed data partitions. GN IID (green diamonds) strikes a balance in mid-range reward (0.50–0.60) or load (0.2–0.5), frequently exceeding $90 \%$ accuracy. Moreover, MN non-IID (pink plus signs) extends across moderate load levels (0.1–0.4) and reward values (0.45–0.65) while still reaching accuracy above $90 \%$ , highlighting that architecture choice can partially offset heterogeneous data’s impact. We observed that allocating higher client loads boosts performance for IID scenarios. In contrast, non-IID settings require more adaptive strategies to maintain competitive accuracy. | To empower precision agriculture through distributed machine learning (DML),
split learning (SL) has emerged as a promising paradigm, partitioning deep
neural networks (DNNs) between edge devices and servers to reduce computational
burdens and preserve data privacy. However, conventional SL frameworks'
one-split-fits-all strategy is a critical limitation in agricultural ecosystems
where edge insect monitoring devices exhibit vast heterogeneity in
computational power, energy constraints, and connectivity. This leads to
straggler bottlenecks, inefficient resource utilization, and compromised model
performance. Bridging this gap, we introduce ReinDSplit, a novel reinforcement
learning (RL)-driven framework that dynamically tailors DNN split points for
each device, optimizing efficiency without sacrificing accuracy. Specifically,
a Q-learning agent acts as an adaptive orchestrator, balancing workloads and
latency thresholds across devices to mitigate computational starvation or
overload. By framing split layer selection as a finite-state Markov decision
process, ReinDSplit convergence ensures that highly constrained devices
contribute meaningfully to model training over time. Evaluated on three insect
classification datasets using ResNet18, GoogleNet, and MobileNetV2, ReinDSplit
achieves 94.31% accuracy with MobileNetV2. Beyond agriculture, ReinDSplit
pioneers a paradigm shift in SL by harmonizing RL for resource efficiency,
privacy, and scalability in heterogeneous environments. | [
"cs.LG",
"cs.DC",
"cs.ET"
] |
# 1 Introduction
AI Agents have recently proven themselves as a competitive way of scaling test-time compute, especially in SE (Chowdhury et al., 2024). A crucial yet underexplored component of AI agents is their memory, which allows them to dynamically adapt their behavior based on prior experiences. Early approaches, such as ReAct (Yao et al., 2023b), rely on the agent’s immediate trajectory or short-term memory for decision-making. Reflexion (Shinn et al., 2023) extends this by introducing long-term memory in the form of self-reflections on past failed task attempts, enabling agents to improve their reasoning and planning on a single task instance through In-Context Learning (ICL). While this yields performance gains on the current task instance, Reflexion discards these self-reflections after task completion. This results in inefficient use of computational resources and loss of valuable cross-task-instance learning opportunities. Zhao et al. (2024) address this limitation through Experiential Learning (EL), which is learning from past experiences across task instances. Their approach ExpeL achieves promising results on HotpotQA (Yang et al., 2018), WebShop (Yao et al., 2023a), and Alfworld (Shridhar et al., 2021). To better align with existing terminology, we name the memory consisting of knowledge extracted with EL “CTIM”. Our work investigates whether CTIM generalizes to the more complex2 domain of SE. We choose SE because we expect EL to be particularly valuable for uncovering the structure of a repository, reducing the number of turns taken exploring the codebase.
To adapt EL to SE we extend it to a MoEs inspired Knowledge Distillation (KD) approach that simultaneously captures high-level SE best practices and repository-specific details (e.g., project structure). We experimentally evaluate this approach by augmenting AutoCodeRover (Zhang et al., 2024) with CTIM, which we name “CTIMRover”, and comparing the results of CTIM-Rover with those of the AutoCodeRover on a subset of SWE-bench Verified. We find that our adapted CTIM does not generalize to SE and instead degrades performance in all configurations compared to AutoCodeRover. Our detailed qualitative analysis identifies noisy CTIM items as culprits and we propose the use of embedding-based retrieval methods to provide relevant, task-similar CTIMs items. The potential of this approach in the SE domain was recently demonstrated by (Su et al., 2025) who provided relevant sub-trajectories for ICL at each agent turn.
Figure 1: CTIM-Rover Overview. Figure inspired by ExpeL (Zhao et al., 2024). CTIM-Rover first gathers new experiences on the train set of SWE-bench Verified which we introduce in Section 3 (details in Appendix A). Then, it combines these experiences with existing experiences of AutoCodeRover (Zhang et al., 2024) on SWE-bench Lite (Jimenez et al., 2023). Next, it distills high-level and repository-level knowledge from these experiences. During evaluation, it recalls a past experience and conditions on the distilled knowledge. Key departures from ExpeL or AutoCodeRover in blue: (A) We extend AutoCodeRover with Reflexion (Shinn et al., 2023), allowing the agent to retry an instance up to three times while learning from its mistakes through self-reflection. (B) Compared to ExpeL, we also source experiences from past successful trajectories outside our system. (C) We introduce a novel domain-specific Knowledge Distillation (KD) phase (Figure 2) that extracts repository-level insights (e.g., common bug patterns).
# 2 Related Work
# 2.1 Agentic Reasoning Frameworks
A core element of popular agentic reasoning frameworks (Yao et al., 2023b; Shinn et al., 2023; Wang et al., 2024) is the agent’s trajectory or short-term memory, consisting of its past actions, reasoning and environment observations. Shinn et al. (2023) introduce a long-term memory consisting of selfreflections over the short-term memory of unsuccessful previous attempts. However, after concluding a task instance, existing reasoning frameworks used in SE agents do not further use the short- or long-term memory. Our work addresses this key limitation by adapting ExpeL (Zhao et al., 2024) to the SE domain.
# 2.2 SE Agents
SWE-agent (Yang et al., 2024) was the first openly available SE agent and leverages the ReAct reasoning framework (Yao et al., 2023b). The agent’s basic search tooling combined with its interleaved bug localization and patch generation approach offers flexibility, but results in long and expensive trajectories. AutoCodeRover (Zhang et al., 2024) on the other hand, explicitly structures the task into two distinct phases: bug localization and patch generation. Additionally, it provides sophisticated search tooling during localization and constrains the patch generation phase to a maximum of three retry attempts. This ensures shorter, cost-efficient trajectories and a guaranteed termination shortly after the patch generation step. A key limitation of this approach is that the agent cannot gather additional context once it enters the patch generation phase. However, current SE agents are not yet capable of recovering from early mistakes, and their performance stagnates at later turns (Yang et al., 2025). Furthermore, neither of these agents employ CTIM. Thus, our work expands the cost-efficient AutoCodeRover with CTIM.
# 2.3 Concurrent Work
Lingam et al. (2024) perform self-reflection on the same task instance while prompting for a diverse set of self-reflections and additionally enhance the context with exemplar trajectories from other task instances. This approach demonstrates performance gains on programming benchmarks with comparatively short trajectories (e.g., HumanEval (Chen et al., 2021)). Especially the latter setup is closely related to CTIM-Rover with an exemplar trajectory. However, we evaluate on SWEbench (Jimenez et al., 2023) which more closely resembles real SE tasks. Instead of abstracting from the trajectory by constructing a CTIM, (Su et al., 2025) directly retrieve synthetic sub-trajectories at each step of the agent and achieve strong performance on SWE-bench. Furthermore, we provide the full CTIM with the user prompt at the start of an agent’s trajectory instead of select subset at each turn.
# 3 Dataset
We use SWE-bench Verified (Chowdhury et al., 2024) without samples from the pylint, astropy and pydata/xarray repositories due to environment setup issues 3 as basis for our experiments. For details see Section 6. For our experiments, we rely on SWE-bench Verified, opposed to SWEbench (Jimenez et al., 2023), as it guarantees that samples are theoretically solvable (Chowdhury et al., 2024). For the collection of past successful trajectories (Section 3.1) we use 401 samples from this benchmark and for the evaluation 45 samples.
# 3.1 Systematic Collection of Past Successful Trajectories
To construct a high quality CTIM, we require a diverse and representative set of successful past trajectories. These are past experiences on SWEbench in which the agent solved an instance. This section details our systematic approach to collecting these trajectories.
To generate as many successful past trajectories as possible, we extend the baseline AutoCodeRover (Zhang et al., 2024) implementation with self-reflection capabilities. Following Shinn et al. (2023), we retry an instance up to three times and allow self-reflections to inform each subsequent attempt. While AutoCodeRover allows up to three patch generation attempts, this does not entail a complete retry on the full trajectory, nor a self-reflection between the patch generation attempts. During training we reduce the patch generation retries of AutoCodeRover from three to two to amortize some of the additional cost incurred by Reflexion retries. With this setup we gather the trajectories of 183 successfully solved instances. To further increase our training set, we supplement the collected trajectories with 53 successful AutoCodeRover trajectories from SWE-bench Lite. Because CTIM-Rover’s trajectories only differ from vanilla AutoCodeRover trajectories by the addition of self-reflections, and both SWE-bench Verified and SWE-bench Lite are subsets of SWEbench we consider this operation valid with respect to our data distribution. We use these 236 past successful trajectories to construct our CTIM. For details on their distribution see Appendix D.1.
# 4 Experiments
To adapt EL to SE, we extend the CTIM with a MoE (Jacobs et al., 1991) inspired repositorylevel CTIM (Section 4.1) and investigate ICL with successful, task-similar exemplar trajectories (Section 4.2). For distilling knowledge from trajectories, we use the reasoning model o1 (OpenAI, 2024b) because we suspect that its capabilities are beneficial when identifying pivotal agent decisions in complex SE agent trajectories (i.e., cause-effect relationships). We use GPT-4o (OpenAI, 2024a) to power the agent during training trajectory collection and the final evaluations due to budget constraints.
# 4.1 Cross-Task-Instance Memory (CTIM)
Our approach shares the core principle of using knowledge extracted from past successful trajectories to guide the agent on future instance with ExpeL (Zhao et al., 2024). We provide a highlevel system overview of in Figure 1. To adapt this approach to SE, we extract repository-level knowledge following and conditioned on general SE knowledge in a two-phase approach detailed below (Figure 2).
Repository-Level Knowledge Distillation Our approach re-uses the KD methodology (extracting knowledge from sets of successful trajectories from distinct instances and tuples of successful and failing attempts in the same instance) and operations (add, edit, upvote or downvote4) introduced by Zhao et al. (2024) with the following modifications. First, we double the initial importance value of CTIM items, because we expect longer intervals between instances for which a CTIM item is applicable. This is motivated by the limited state space of ExpeL’s environments, compared to the complexity of real world software repositories. Furthermore, some of our trajectories contain self-reflections. We expect these trajectories to produce especially high-quality CTIM items when extracting knowledge from tuples of successful and failing attempts in the same instance as they already contain the insights that lead to an eventual resolution. After the first phase of general CTIM construction, we build a repository-specific CTIM by constraining all instances shown to the distilling Large Language Model (LLM) (see Section 4) to be from the same repository. Finally, we limit the maximum size of the CTIM to $c ( n ) = \lceil { \sqrt { n } } \rceil$ , where $n$ represents the number of available successful trajectories for constructing this CTIM. With this we aim to iteratively refine the CTIM to contain a concise set of high-quality insights and avoid degrading the agent’s performance with noisy knowledge. For prompts see Appendix D.2, for sample CTIM items Appendix D.3.
Figure 2: CITM-Rover Knowledge Distillation (KD). Key departure from ExpeL (Zhao et al., 2024) in blue. Top: (1) Distill generally applicable SE knowledge from pairs of successful trajectories from different task instances and (2) tuples of a successful task instance and its self-reflection retries. Bottom: (3) Use the generally applicable knowledge and past experience to distill repository-level knowledge from pairs of successful trajectories from different task instances within the same repository and (4) tuples of a successful task instance and its self-reflection retries for a given repository.
Table 1: Success rates $( \% )$ on our test set across CTIM-Rover configurations and repositories. Values in parentheses indicate the number of samples in our test set per repository.
Using the repository-level knowledge, we expect the agent will more efficiently explore its environment by re-using knowledge relating to previously explored areas of its environment. This knowledge may provide insights on (1) the structure of the project, (2) entry points or data flow and architectural patterns, (3) coding conventions encountered, (4) common failure modes relating to the application domain of the software (e.g., failure modes for image processing in OpenCV), or (5) common bugs that the agent encountered in that past.
# 4.2 Exemplar retrieval
In addition to providing the CTIM for ICL, we investigate if ICL with the most task-similar past successful trajectory improves performance. For this, we construct a Milvus (Wang et al., 2021a) index consisting of problem statement embeddings, using Code-T5 (Wang et al., 2021b) base as the embedding model. This model’s size allows local use and it is trained for language and code, which our problem statements consist of. During evaluation, we retrieve the most task-similar past successful trajectory based on cosine similarity scores with a $90 \%$ threshold. This ensures an exemplar is only shown if a relevant one is available $( \approx 6 2 \%$ of samples).
# 5 Results
We evaluate CTIM-Rover’s performance across the configurations listed in Table 1. CTIM
Rover achieves only a $40 \%$ success rate, which is two percent points worse than our baseline AutoCodeRover. Surprisingly, “Exemplar only” configuration matches this performance. The “CTIM only” configuration unexpectedly degraded the performance to just $31 \%$ , 11 percent points less than the baseline. Seeing how poorly CTIM-Rover performed in the “Repo-level CTIM only” configuration, we partially attribute the performance degradation in the “CTIM only” configuration, to the repository-specific CTIM. Moreover, we observe a performance degradation even for the “django” repository, which our train set is heavily skewed towards (Figures 4 and 5). We expected instances in this repository to disproportionally benefit from the additional repository-level knowledge due to the reasons discussed in Section 4.1. Surprisingly the performance is somewhat stable compared to the baseline, even for underrepresented repositories (e.g., Pytest). This suggests source of the observed performance degradation may relate to the CTIM usage and quality rather than the quantity. We hypothesize that (1) providing all CTIM items may introduce unexpected noise because we do not filter these items for relevance regarding the instance’s context, and (2) our CTIM optimization constraint leads to an overly smooth, uninformative and thus noisy CTIM. To diagnose the reasons for the poor performance, we next perform a detailed qualitative investigation of two randomly chosen samples.
# 5.1 Qualitative Performance Degradation Analysis
We first consider “django__django-13933”, a sample that our baseline solves, but CTIM-Rover with “Repo-level CTIM only” does not. Initially, both systems invoke the correct API returning to_python, the function that needs a patch. However, our system decides to further investigate the clean function, which is also returned by the API, and does not further investigate to_python. This indicates an unexpected bias towards the tokens constituting “clean”. In the repository-level CTIM for “django” we notice that the item in Figure 3 contains the word clean. Upon removing this item from the CTIM and retrying, our system correctly identifies the to_python function as the location for the patch and solves the sample.
Next, we focus on “django__django-15987”, a sample that both AutoCodeRover and CTIM-Rover with “Repo-level CTIM only” solved, but CTIMRover failed to solve in the “CTIM only” configu
# Problematic “django” CTIM Item
[...] Ensure to separate resolution from the final redirect to keep path_info clean while preserving the prefix in the final URL, preventing forced [...]
ration. The problem statement of this sample explicitly mentions the constant FIXTURE_DIRS and AutoCodeRover correctly searches the repository for this constant. However, CTIM-Rover with the “CTIM only” configuration does not. We notice that our CTIM does not refer to any constants and suspect that this biases our system towards lower snake-case names. Upon adding the arbitrary, capitalized item “GRANDMA LIKES PASTA” to the CTIM and retrying, our system again solves the sample. This suggests noisy CTIM biases CTIMRover toward suboptimal initial steps rather than helping it skip initial exploration turns and furthermore hypothesize that lengthy exemplar trajectories likely cause similar issues. | We introduce CTIM-Rover, an AI agent for Software Engineering (SE) built on
top of AutoCodeRover (Zhang et al., 2024) that extends agentic reasoning
frameworks with an episodic memory, more specifically, a general and
repository-level Cross-Task-Instance Memory (CTIM). While existing open-source
SE agents mostly rely on ReAct (Yao et al., 2023b), Reflexion (Shinn et al.,
2023), or Code-Act (Wang et al., 2024), all of these reasoning and planning
frameworks inefficiently discard their long-term memory after a single task
instance. As repository-level understanding is pivotal for identifying all
locations requiring a patch for fixing a bug, we hypothesize that SE is
particularly well positioned to benefit from CTIM. For this, we build on the
Experiential Learning (EL) approach ExpeL (Zhao et al., 2024), proposing a
Mixture-Of-Experts (MoEs) inspired approach to create both a general-purpose
and repository-level CTIM. We find that CTIM-Rover does not outperform
AutoCodeRover in any configuration and thus conclude that neither ExpeL nor
DoT-Bank (Lingam et al., 2024) scale to real-world SE problems. Our analysis
indicates noise introduced by distracting CTIM items or exemplar trajectories
as the likely source of the performance degradation. | [
"cs.SE",
"cs.AI"
] |
introduction of "shortcuts"—instructional transitions learned from historically successful trajectories—which allows to bypass redundant reasoning agents and expedite the collective problem-solving process. Experiments for software development tasks demonstrate significant advantages over existing methods. Specifically, compared to the state-of-the-art MAS ChatDev, our method achieves an average reduction of $5 0 . 8 5 \%$ in token usage, and improves the overall code quality by $1 0 . 0 6 \%$ .
# 1 Introduction
In recent years, Large Language Models (LLMs) have achieved remarkable success in various domains, including text generation, code synthesis, and long context comprehension [1, 2, 3]. However, the inherent limitations of standalone LLMs become apparent when they confront complex tasks that extend beyond conversational interactions, often exhibiting behaviors that are not sufficiently robust or adaptive [4]. Research in autonomous agents recently has improved LLMs by empowering them with features such as contextual memory [5], multi-step planning [6] and utilization of external tools [7].
Although these enhanced agents represent a significant leap forward, the increasing complexity of many challenges often surpasses the capabilities of any agent. This necessitates a further evolution towards collaborative approaches, providing a strong motivation for the development of MAS.
MAS collaborate through mechanisms such as role assignment, task decomposition, and iterative communication [5, 8, 9], forming a chat chain between agents and thus achieving sophisticated goals that would be intractable for a single agent. MAS evidently offers advantages in its superior modularity, allowing for specialized agent roles; enhanced scalability, enabling the distribution of tasks across numerous agents; and increased robustness, providing resilience through redundancy and collective problem solving. These benefits have led to notable advancements in complex scenarios such as collaborative software development [10, 11, 9], graphical user interface (GUI) automation [12], social simulation [5, 13, 14], game playing [15, 16, 17, 18] and scientific research [19, 20].
Figure 1: A schematic representation of the executing process, including reference chain and inference chain. The reference chain is based on historically excellent trajectories, while the inference chain is the execution process of the current task.
However, MAS often confront unawareness [21] of resources such as substantial token consumption and excessive time usage, which directly incurs inefficiency of system. As the scale of tasks expands and the number of participating agents increases, the frequency and complexity of agent interactions correspondingly increase, exacerbating operational overhead. Thus, effectively managing and reducing the operational overhead, while simultaneously enhancing resource efficiency, becomes imperative for the MAS. To address these limitations, we propose Co-Saving, a resource-aware multiagent collaboration that leverages experiential knowledge to enhance both operational efficiency and solution quality. Our key innovation lies in introducing the concept of "shortcuts"—instructional transitions mined from historically successful trajectories. The shortcut serves as learned "fast track", enabling to effectively bypass redundant reasoning agents and accelerate problem-solving processes, particularly in familiar task contexts.
As the interaction process of agents in the MAS proceeds, a chat chain is accordingly formed, where the nodes correspond to the solutions generated by agents and the edges represent the instructions in the interaction process of agents. To fully utilize shortcuts to advance the current task execution process, a comprehensive evaluation for shortcuts is designed, involving both effectiveness and efficiency, and shortcuts filtering is implemented accordingly, which is shown schematically in Figure 1. A force termination mechanism is also integrated to prevent resource exhaustion.
Experiments conducted on the SRDD dataset [9] for software development tasks. Compared to baselines that including single-agent framework (e.g., GPT-Engineer [11]) and existing multi-agent systems (e.g., MetaGPT [22], ChatDev [9]), our method achieves higher quality evaluated by colearning [23] with lower cost. Specifically, compared to ChatDev, Co-Saving achieves an average reduction of $5 0 . 8 5 \%$ in token consumption, along with a $1 0 . 0 6 \%$ improvement of code overall quality.
# 2 Method
In task-solving scenarios, particularly when addressing newly assigned tasks, it is often challenging to accurately estimate their inherent complexity or the resources required for successful completion by a multi-agent system (e.g., time, token consumption). To enhance the monitoring and management of task progress, we propose a strategy that involves retrieving reference tasks from historical records. These reference tasks function as a form of memory, guiding the agent in its current task execution.
To leverage these references effectively, experiential knowledge is extracted from a repository of past tasks and integrated into the task-solving process. However, not all prior experiences are directly transferable or beneficial to the task at hand. Consequently, a critical step in this strategy is the evaluation and selection of relevant experiences, aimed at optimizing task execution efficiency.
Figure 2: Overview of reference chain and inference chain to represent the shortcut filtering process. Being selected, evaluated and applied, shortcuts guide the current task to be completed in multiple steps.
# 2.1 Shortcut Formalization
We introduce a type of instruction, termed a "shortcut", which connects two nodes within a reasoning chain by bypassing certain intermediate reasoning steps. This design aims to reduce the overall length of the reasoning chain, thereby enhancing reasoning efficiency. Figure 2 shows a illustration of shortcut filtering. To validate the effectiveness of this approach, it is essential to conduct a comprehensive and quantitative evaluation of the shortcut mechanism.
To enable a more rigorous representation and analysis of the multi-agent collaboration process, we abstract each complete task execution as a directed graph. During the interaction, an instructor issues a series of instructions $( I = \{ i _ { 1 } , i _ { 2 } , \cdots , i _ { n } \} )$ , and an assistant generates corresponding solutions as responses. Each instruction includes comments or feedback on the preceding solution, while each solution represents a complete software code snippet. Accordingly, the entire collaboration process can be formally represented by a directed graph $G = ( N , E )$ , as defined below.
$$
\begin{array} { l } { { N = \{ n _ { j } | j = 0 , 1 , \cdots , n \} } } \\ { { E = \{ ( n _ { j } , i _ { j + 1 } , n _ { j + 1 } ) | n _ { j } , n _ { j + 1 } \in N , i _ { j + 1 } \in I \} } } \end{array}
$$
Here, $N$ denotes the set of nodes, each corresponding to a solution state, with $n _ { 0 }$ representing the initial state (typically an empty solution). $E$ represents the set of the edges corresponding to the instructions. Each edge connects two nodes and represents the transition from one solution $s _ { j }$ to the modified one $s _ { j + 1 }$ , guided by the instruction $i _ { j + 1 }$ .
Here, $N$ denotes the set of nodes, each corresponding to a solution state, with $n _ { 0 }$ representing the initial state (typically an empty solution). $E$ denotes the set of directed edges, where each edge represents an instruction guiding the transition from one solution $s _ { j }$ to its subsequent modification $s _ { j + 1 }$ , based on the instruction $i _ { j + 1 }$ .
To enhance task completion efficiency, we aim for agents to achieve equivalent outcomes with fewer reasoning steps. For example, a solution that originally evolves through two steps (from $s _ { 0 }$ to $s _ { 1 }$ to $s _ { 2 }$ ) could be optimized into a single-step transition (from $s _ { 0 }$ directly to $s _ { 2 }$ ). To this end, we introduce the concept of a shortcut, which is also modeled as a directed edge in graph $G$ . A shortcut connects two non-adjacent nodes, always pointing forward in the interaction sequence, effectively bypassing intermediate reasoning steps while preserving the correctness of the final solution.
Let $S$ denote the set of all shortcuts, formally defined as follows:
$$
S = \{ ( n _ { i } , n _ { j } ) | n _ { i } , n _ { j } \in N , i < j \}
$$
We extract shortcuts from all tasks in the training set and store them in the form of instructions, serving as experiential knowledge. Subsequently, these shortcuts are incorporated into the agent’s memory, allowing the agent to leverage prior experiences to enhance task-solving performance.
# 2.2 Shortcut Filtering
Not all extracted shortcuts are effective and efficient in improving solution generation or reducing resource consumption for a given task. Therefore, evaluating and selecting appropriate shortcuts is essential. We heuristically score shortcuts across multiple dimensions and ultimately derive a comprehensive metric to assess their overall utility.
Throughout the task execution process, we continuously monitor the current resource consumption, including time and token usage. When considering a shortcut, agents are guided to refer to its content and provide feedback accordingly, facilitating the optimization of candidate solutions. Shortcuts whose estimated resource consumption exceeds the remaining available resources are discarded from consideration. Only the feasible subset of shortcuts is retained for further evaluation. This selection process can be formalized as follows:
$$
S \gets \{ s | s \in S , t _ { s } < t _ { r } , \tau _ { s } < \tau _ { r } ) \}
$$
where $t _ { s }$ and $\tau _ { s }$ denote the time and tokens required to generate the shortcut, respectively, while $t _ { r }$ and $\tau _ { r }$ represent the currently remaining time and tokens.
Value The contribution of a shortcut is primarily reflected in the transition it facilitates between two solutions—specifically, the transition from one node to another in the solution graph. For a given solution denoted by $n _ { j }$ located at a specific node, we define its score as follows:
$$
w ( n _ { j } ) = \mathrm { s i m } ( n _ { j } , \mathrm { t a s k } ) \times \mathrm { s i m } ( n _ { j } , s _ { | N | } ) \times \left[ \left[ s _ { j } \right] \right]
$$
Here, $s _ { | N | }$ denotes the solution at the final node in the graph, representing the ultimate goal. The variable task refers to the original software development requirement expressed in natural language. The two similarity terms are computed as the cosine similarity between the embedding vectors of the corresponding texts or code. The indicator function $\mathbb { I } \mathbb { \ }$ is binary: it equals 1 if the code corresponding to $s _ { j }$ can be successfully compiled using an external compiler, and 0 otherwise.
Based on this node-level score, the value of a shortcut $( n _ { i } , n _ { j } )$ is defined as:
$$
v ( n _ { i } , n _ { j } ) = w ( n _ { j } ) - w ( n _ { i } )
$$
This value quantifies the incremental benefit that the shortcut brings to the software development process by enabling a more effective and efficient transition between solutions.
Cost Considering task-solving in multi-agent systems, the primary cost components are of two distinct types: time and tokens. These represent different dimensions of resource consumption and exhibit distinct distribution patterns within the dataset. To enable a unified evaluation, we normalize their raw values into percentile ranks based on their empirical distributions in the dataset. By integrating these normalized values, we derive a composite metric referred to as cost.
For a given shortcut $s _ { 0 }$ , let its normalized time and token consumption be $t _ { 0 }$ and $\tau _ { 0 }$ , respectively. Denote by $T$ the set of normalized time values for all shortcuts in the dataset, denoted as $s$ , and by $\tau$ the corresponding set of normalized token values.
We define the relative rankings of $s _ { 0 }$ in terms of time and tokens as follows:
$$
\alpha = \frac { | \{ t < t _ { 0 } | t \in T \} | } { | S | } , \beta = \frac { | \{ \tau < \tau _ { 0 } | \tau \in T \} | } { | S | }
$$
The composite cost is then computed using the harmonic mean of $\alpha$ and $\beta$ :
$$
C = F _ { \gamma } ( \alpha , \beta ) = \frac { 2 \alpha \beta } { \alpha + \beta }
$$
This formulation balances the trade-off between time and token efficiency, where $\gamma$ is the emergency factor that will be introduced in the following section.
Emergency Factor The value and cost metrics represent two distinct dimensions in evaluating task execution: value reflects the improvement in solution quality, while cost measures the efficiency of task completion. At different stages of task execution, the relative importance of these two aspects may vary. For instance, during the early stages—when resources are still abundant—the primary focus is typically on achieving high-quality solutions. Conversely, as resources approach depletion, the emphasis shifts toward completing the task promptly and within budget.
To accommodate these dynamic shifts in priority, we introduce the emergency factor $\gamma$ , which regulates the relative weighting of value and cost throughout the task execution process. Unlike value and cost—which are determined solely by the characteristics of the shortcuts and the dataset—the emergency factor is explicitly linked to the user-defined resource budget, rendering it inherently dynamic and adaptive.
Let $t$ and $\tau$ denote the allocated budgets for time and tokens, respectively, and let $t _ { \mathrm { u } }$ and $\tau _ { \mathrm { u } }$ represent the corresponding amounts consumed thus far. The emergency factor $\gamma$ is then defined as follows:
$$
\begin{array} { l } { \displaystyle \gamma _ { t } : = \frac { t _ { \mathrm { u } } } { t } , \gamma _ { \tau } : = \frac { \tau _ { \mathrm { u } } } { \tau } . } \\ { \displaystyle \gamma = F _ { 1 } ( \gamma _ { t } , \gamma _ { \tau } ) = \frac { 2 \gamma _ { t } \gamma _ { \tau } } { \gamma _ { t } + \gamma _ { \tau } } . } \end{array}
$$
# 3 Experiments
Baselines To evaluate the effectiveness of our method, we select a diverse set of representative LLMdriven software engineering methods and pure LLMs to facilitate a comprehensive multidimensional comparison:
GPT-3.5-Turbo [24], GPT-4 [25], LLaMA 3 70B [26]and are widely adopted foundation models that serve as baselines for pure LLM performance, covering a range of capabilities from efficient instruction-following to strong multimodal reasoning and open-source adaptability. GPT-Engineer [11] exemplifies a single-agent approach and serves as a foundational framework in this domain. Its key strength lies in its ability to interpret natural language requirements and autonomously perform development tasks such as code generation and execution through single step reasoning.
• ReAct [27] integrates reasoning and acting within LLMs by jointly generating reasoning traces and environment-interacting actions. Unlike approaches that separate thought and execution, ReAct enables LLMs to iteratively refine their understanding and update the environment through interleaved reasoning and action steps.
MetaGPT [22] adopts a MAS design, introducing a novel role-assignment mechanism where agents are assigned specific responsibilities. These agents collaborate through a standardized communication protocol to accomplish software engineering tasks.
ChatDev [9] presents a comprehensive multi-agent collaboration framework that decomposes the software development lifecycle into distinct phases, including demand analysis, code implementation, code review, and system testing. Within this framework, agents engage in multi-turn dialogues to iteratively propose instructions and solutions, thereby enhancing the quality and robustness of the software development process.
Datasets We use a subset of the SRDD [9] as our experimental corpus, containing diverse software development requirements. The dataset is organized into five primary categories: Education, Work, Life, Game, and Creation, and further divided into 40 fine-grained subcategories. We partition it into a training set for shortcut extraction and a test set for evaluation and data collection.
Metrics Our primary research objective is to enhance both the quality and efficiency of task completion in MAS, using software development as the application context. Accordingly, we evaluate task outcomes—specifically code generation—along two key dimensions: quality and efficiency.
For quality assessment, we adopt a comprehensive evaluation framework inspired by co-learning [23], which integrates multiple dimensions into a unified metric for holistic evaluation. Efficiency is measured by the Budgeted Completion Rate (BCR), defined as the proportion of tasks completed within the specified resource constraints.
Completeness: Measures whether the generated code provides a structurally complete implementation of the software requirement. It is quantified as the proportion of source files that do not contain placeholders such as "TODO".
Executability: Assesses the ability of the generated software to compile and run successfully in a real operating system environment. It is calculated as the ratio of programs that compile and execute without errors.
Consistency: Evaluates the semantic alignment between the generated code and the original natural language requirement, computed as the cosine similarity between their respective embedding vectors.
Granularity: Assesses the level of detail in the generated code. Given the inherent challenges in objectively quantifying code granularity and completeness—especially across tasks of varying complexity—we adopt the average number of lines of code per task as a practical proxy. A higher value indicates greater code detail.
Quality: A comprehensive metric obtained by integrating completeness, executability, consistency, and granularity. Specifically, it is defined as the product of these four metrics, serving as an overall indicator of code quality. Budgeted Completion Rate (BCR): Measures the proportion of tasks completed within the predefined resource budget (time and tokens). It reflects resource efficiency without considering the quality of the generated solution; thus, even low-quality code produced quickly is counted under this metric.
Implementation Details The software development process is divided into multiple phases, including demand analysis, language selection, code completion, code review, and system testing. Our work primarily focuses on phases directly related to code generation. For these tasks, we adopt GPT-3.5-Turbo as the base model. For node evaluation, metric consistency computation, and reference task retrieval, we employ text-embedding-ada-002 as the semantic embedder, due to its strong performance in both textual and code-related embeddings. Python 3.9.19 serves as the external feedback environment, enabling compilation, execution, and assessment of generated code.Throughout the experiments, we monitor agent interactions and implicitly construct the interaction graph. The number of edges in the graph corresponds to the number of interaction rounds. To prevent excessive interactions, once the current interaction graph reaches or exceeds the number of edges in the reference task graph, we forcibly terminate the task.
Table 1: overall performance of selected baseline and our Co-Saving. The highest scores are formatted in bold and the second-highest scores are underlined.
# 3.1 Overall Performance
As shown in Table 1, our proposed approach (denoted as Co-Saving1) significantly outperforms all baselines in terms of Quality and surpasses other multi-agent baselines in BCR. These results indicate that Co-Saving effectively accelerates the reasoning trajectory toward generating high-quality solutions.
As single-agent frameworks, GPT-Engineer and ReAct typically do not decompose or subdivide tasks based on user instructions. Instead, they perform code generation through a one-shot reasoning process. Consequently, they exhibit low execution time and resource consumption. The same observation holds for pure LLM-based paradigms. However, for more complex software development tasks, these approaches often fail to produce functionally complete code. In many cases, they define interfaces or modules related to complex requirements but leave them partially or entirely unimplemented. This limitation artificially inflates the Executability metric, as syntactically correct but semantically incomplete code can still compile and run. Such shortcomings are reflected in the relatively low Granularity scores, which indicate insufficient implementation detail.
In contrast, ChatDev adopts a multi-stage reasoning paradigm that iteratively refines solutions, leading to more complete implementations. However, this iterative process incurs higher resource consumption, resulting in a lower BCR. MetaGPT achieves a BCR between GPT-Engineer and ChatDev. It leverages multi-agent collaboration through role-based coordination to perform multistep reasoning, but still struggles to generate logically coherent code for complex tasks, leading to a relatively lower Executability score.
For the Completeness metric, ChatDev slightly outperforms Co-Saving. We hypothesize that this advantage stems from Co-Saving’s resource-awareness and dynamic execution control. When encountering tasks that exceed the available resource budget, Co-Saving may opt to terminate reasoning prematurely, prioritizing efficiency over completeness. In contrast, ChatDev lacks such resource sensitivity and continues execution regardless of task complexity, achieving higher completeness at the expense of increased resource usage.
Additionally, Consistency scores across all four experimental settings show only minor differences, with Co-Saving achieving a modest improvement. This result may reflect the limitations of current embedding models in capturing fine-grained semantic distinctions between code and textual requirements. Consequently, these models are insufficiently sensitive to subtle inconsistencies, highlighting the need for more precise evaluation methods to better assess code-text alignment.
# 3.2 Ablation Study
In the Method section, we introduced key components of our approach: shortcut selection, cost design, and the emergency factor. To validate the effectiveness of each component, we design corresponding ablation studies. The results of the full model and the ablation variants are summarized in Table 2.
Table 2: Ablation study on main design in Co-Saving, \ denote the removing operation, the three ablations remove selection, cost, emergency factor $( \gamma )$ respectively.
As we can see, removing the cost-based shortcut selection mechanism results in all candidate shortcuts being retained for evaluation, including those that significantly exceed the available resource budget. Consequently, this variant exhibits a substantially lower BCR compared to other configurations. In the second ablation, where cost is removed from the value-cost evaluation metric (i.e., only value is considered), the system achieves relatively good performance in Executability and Granularity. However, the lack of resource awareness makes it difficult to complete tasks within time constraints, leading to lower Completeness and a reduced BCR. In the third ablation, the emergency factor is excluded. Without this dynamic adjustment, the system continues to prioritize high-value shortcuts even under resource-limited conditions. Although the BCR remains relatively high due to the forced termination mechanism, both Completeness and Granularity are lower compared to the full Co-Saving configuration, indicating suboptimal task outcomes.
Figure 3: Distribution of path length, time cost and number of tokens. experiments with Co-Saving and without Co-Saving are indicated in red and blue color, respectively, as shown in legend.
# 3.3 Resource Distribution Shift
To further evaluate the effectiveness of Co-Saving, we conducted a comparative study between software development MAS with and without Co-Saving. Specifically, we analyzed the distribution of path lengths—defined as the number of edges in the execution graph, reflecting the number of reasoning iterations—on the same dataset. Additionally, we examined the distribution of resource consumption, including execution time and token usage. The experimental results are presented in Figure 3.
The inclusion of the Co-Saving algorithm results in a significant reduction in the number of reasoning iterations required for task execution. Additionally, both total execution time and token consumption are notably decreased. These findings demonstrate that Co-Saving effectively streamlines the multiagent reasoning process, accelerating task execution and enhancing overall development efficiency.
This improvement is largely attributed to Co-Saving’s ability to accurately assess and utilize shortcuts. By extracting precise and efficient instructions from reference tasks, Co-Saving enables agents to make more informed decisions, thereby reducing the occurrence of inefficient or ineffective actions.
# 3.4 Case Study
In order to illustrate how Co-Saving operates within a MAS, we present a case study of a specific task. Using ChatDev as the underlying software development framework, we select the task "Photo Defogger" as an example. At the initial stage, the system retrieves the reference task "Background Blur Editor" from the training dataset. This reference task forms an execution graph comprising three rounds of reasoning.
For the current task, after the programmer generates node $n _ { 0 }$ in the Code Complete stage, our system evaluates the shortcuts $( n _ { 0 } , n _ { 1 } )$ , $( n _ { 0 } , n _ { 2 } )$ , and $( n _ { 0 } , n _ { 3 } )$ to select the optimal path. Eventually, $( n _ { 0 } , n _ { 2 } )$ is chosen.In the reference task, the transition from $n _ { 0 }$ to $n _ { 1 }$ involves fixing a function to prevent file overwrite issues and adding necessary import statements. For the current task, given the programmer’s initial code and the shortcut $( n _ { 0 } , n _ { 1 } )$ as input, the code reviewer generates an instruction to adjust function details to avoid file overwrites. Based on this instruction, the programmer produces a new solution, corresponding to node $n _ { 2 }$ in the reference task. It is worth noting that the shortcut $( n _ { 0 } , n _ { 2 } )$ is not a simple merge of the edges $( n _ { 0 } , i _ { 1 } , n _ { 1 } )$ and $( n _ { 1 } , i _ { 2 } , n _ { 2 } )$ , but is related to $n _ { 0 }$ and $n _ { 2 }$ , containing more complete and detailed information about how to transition from the source to the target. For instance, a shortcut says "To transition from the initial code version to the final version, follow these instructions: Modules and Classes: 1. In the game.py file, add the following import statement at the top... Data Structure: 1. In the player.py file, add the following attribute to the Player class... Main Program Flow: 1. In the game.py file, modify the take_turn method as follows... Exception Handling...". Without the shortcut input, the code reviewer could simply output short, abbreviated feedback.
Next, the shortcut originating from $n _ { 2 }$ in the reference task—specifically, $( n _ { 2 } , n _ { 3 } )$ —is considered. After evaluation, this shortcut is selected for code review, leading to the generation of another solution in the subsequent code modification stage. At this point, the number of reasoning steps in the current task reaches the predefined limit (matching the reference task’s path length), prompting termination of further inference.The execution processes of both the current and reference tasks are illustrated in Figure 2. Ultimately, Co-Saving successfully generates an executable program with a correct GUI interface and essential functions within three iterations. In contrast, ChatDev requires more iterations to produce a comparable solution, incurring higher token consumption.
# 4 Related Work
Understanding and processing natural language remains a central challenge in artificial intelligence. LLMs [2, 1, 3, 28, 26, 29, 30, 31, 32, 24, 33, 34, 35], empowered by large-scale pretraining and parameter-rich architectures, have achieved remarkable advancements in this area. With the rapid development of LLMs, there is increasing interest in building autonomous agents [36, 15, 5, 13, 37, 4, 11] that leverage LLMs for domain-specific tasks. These agents combine LLMs’ reasoning and language understanding capabilities with external tools [7, 38, 39, 40, 41], context memory management [5, 42], and task decomposition and planning strategies [11, 43, 44, 6], enabling them to tackle increasingly complex problems [45, 36, 46, 47, 48, 49, 50]. In parallel, techniques such as self-evolving [51], self-instruct [52], and other enhancement methods [53, 54, 55, 56, 57, 58, 59] have been proposed to further improve agent capabilities.Beyond single-agent research, MAS have emerged as a critical area of study [22, 14, 60, 61, 43, 62, 63, 49, 64]. Unlike single-agent frameworks, which attempt to solve complex problems independently, MAS introduce greater variability and design flexibility. This includes assigning distinct roles and identities to different agents, designing workflows for decomposing complex tasks into subtasks, and establishing communication protocols, information exchange pathways, and coordination structures to facilitate collaborative task execution.
Recent studies have explored how the number and structure of agents influence the performance and scalability of MAS [57]. As agent count and task complexity increase, interaction frequency and resource consumption also grow. This highlights key challenges in enhancing resource utilization, minimizing redundant communication, and designing efficient collaboration mechanisms. For instance, AgentDropout [65] improves communication efficiency by pruning redundant agents and interactions in multi-round dialogues, enhancing token efficiency and task performance. BTP (BudgetConstrained Tool Learning with Planning) [66] formulates budget-aware tool selection strategies to maximize utility under resource constraints. TimeArena [21] provides a simulated environment with complex temporal dynamics, revealing that current LLMs lack robust temporal reasoning, especially in multitasking or concurrent scenarios—underscoring the need for more temporally-aware agent designs. | Recent advancements in Large Language Models (LLMs) and autonomous agents
have demonstrated remarkable capabilities across various domains. However,
standalone agents frequently encounter limitations when handling complex tasks
that demand extensive interactions and substantial computational resources.
Although Multi-Agent Systems (MAS) alleviate some of these limitations through
collaborative mechanisms like task decomposition, iterative communication, and
role specialization, they typically remain resource-unaware, incurring
significant inefficiencies due to high token consumption and excessive
execution time. To address these limitations, we propose a resource-aware
multi-agent system -- Co-Saving (meaning that multiple agents collaboratively
engage in resource-saving activities), which leverages experiential knowledge
to enhance operational efficiency and solution quality. Our key innovation is
the introduction of "shortcuts" -- instructional transitions learned from
historically successful trajectories -- which allows to bypass redundant
reasoning agents and expedite the collective problem-solving process.
Experiments for software development tasks demonstrate significant advantages
over existing methods. Specifically, compared to the state-of-the-art MAS
ChatDev, our method achieves an average reduction of 50.85% in token usage, and
improves the overall code quality by 10.06%. | [
"cs.CL",
"cs.AI",
"cs.MA",
"cs.SE"
] |
# Introduction
Chest X-ray (CXR) imaging remains a cornerstone of thoracic diagnostics, enabling rapid detection of critical conditions such as pneumonia, pneumothorax, and cardiomegaly. Despite its ubiquity, clinical interpretation of CXRs still largely relies on manual reading by radiologists, which is subject to inter-observer variability, time constraints, and the growing volume of imaging studies. These limitations underscore the urgent need for effective foundation models equipped with high explainability and context-aware reasoning, capable of enhancing clinical decision-making with greater speed, accuracy, and transparency.
While recent advances in automated CXR foundation models show promise, existing approaches still fall short in several critical areas: $\textcircled{1}$ Narrow Pathology Coverage. Most existing foundation models have narrow pathology coverage and act as specialized expert systems, often showing inconsistent performance across different pathologies. For instance, they often excel in detecting pathologies like pleural effusion (with performance up to 0.783), but fail to generalize to others such as enlarged cardiomediastinum or lung lesions. $\textcircled{2}$ Limited Clinical Applicability. Clinical applicability is often limited by inadequate integration of visual information and a lack of interactivity with clinical environments. Some foundation models are unable to integrate visual information such as lesion localization or anatomical context with clinical reasoning, which limits their diagnostic effectiveness in complex cases. Although recent advances in multi-modal large language models (LLMs) show promise in lesion detection and report generation, these models remain disconnected from real-world clinical workflows. They cannot interact with external systems, revise their reasoning based on new evidence, or incorporate contextual information. This lack of interactivity limits their practical use in clinical settings, where adaptability and real-time decisionmaking are critical. To address these aforementioned two limitations, one promising solution lies in leveraging the complementary strengths of small vision models and large language models. Small vision models have usually demonstrated strong performance in pathology and lesion detection, recognition, and classification tasks, proving effective in specialized visual domains. In contrast, although LLMs often fall short in visual pathology detection, they offer advanced reasoning and contextual understanding. Combining their complementary strengths can improve visual integration and enable dynamic interactivity, supporting effective interaction with clinical environments, continuous updating based on new evidence, and real-time incorporation of contextual information.
Recent advances in large reasoning models (LRMs), such as DeepSeek-R1 and OpenAIO1, have demonstrated strong capabilities in reasoning and contextual understanding, highlighting their strong potential for applications in medical AI. Specifically, these models excel at synthesizing multi-source textual data, resolving contradictions, and generating logically coherent conclusions. However, their reliance on text-only paradigms restricts their capacity to interpret visual information, which is essential in radiology and directly informs clinical decisions. For instance, distinguishing pneumonia from atelectasis requires not only detecting lung opacity but also correlating its spatial distribution with clinical indicators. Therefore, how to bridge the gap and develop a unified framework that integrates both visual and textual reasoning has emerged as a key challenge. A deeper challenge involves converting visual findings into anatomically accurate clinical descriptions, which demands both detailed visual understanding and clinical expertise. Overcoming this requires a flexible framework that enables dynamic interaction between the reasoning agent, the environment, and the data. Such rich interaction enables the model not only to interpret findings accurately, but also to iteratively refine its understanding through continuous engagement with both contextual and visual information.
Figure 1: Illustrative diagram of the RadFabric framework
To address these challenges, we introduce RadFabric, a multimodal reasoning framework that unifies visual and textual reasoning and supports effective interaction with clinical environments for a comprehensive interpretation of CXR. As shown in Fig. 1, the proposed RadFabric contains four parts: The CXR Agent, which employs small, highly effective vision models for precise pathology detection and generates interpretable Grad-CAM maps that highlight regions of interest (e.g., fracture sites, pleural effusion). These specialized models excel in detecting and localizing pathologies with accuracy, addressing the limitations of large language models in direct visual analysis. The Anatomical Interpretation Agent, which anchors these visual findings to segmented anatomical structures (e.g., left lung, diaphragm), transforming heatmaps into precise clinical descriptions (e.g., “effusion localized to the left costophrenic angle”). By integrating these specialized vision models, RadFabric significantly enhances the diagnostic performance of the overall system. The Report Agent, which utilizes multimodal models (e.g., Qwen2-VL-7b) to generate structured clinical reports. And the Reasoning Agent, which integrates visual maps, anatomical context, and textual reports, is interactive, and explicitly trainable to produce step-bystep reasoning before generating the final diagnosis. This process enhances interpretability, as the reasoning trajectory itself provides a transparent, clinically meaningful rationale for each diagnosis. The proposed RadFabric integrates visual and textual reasoning through a modular, multi-agent architecture that enables dynamic interaction with both data and environment. By decoupling the roles of model, data, and environment, it promotes flexibility and scalability. Specialized agents, such as lightweight CXR Agent, serve as tools for a central reasoning agent, and can be independently updated to enhance performance over time. This design allows the system to iteratively refine its understanding, leading to more accurate, interpretable, and clinically grounded decisions. Empirically, RadFabric achieves near-perfect fracture detection (1.000 vs. 0.096–0.269 in legacy systems) and significantly improves lung lesion identification (0.850 vs. 0.176–0.197), setting a new standard for reliable and actionable CXR interpretation.
This study presents the development, validation, and clinical evaluation of RadFabric. Subsequent sections detail its methodology, benchmark performance, and implications for AI-driven radiology. By unifying visual and textual reasoning—and integrating specialized models to enhance performance—RadFabric demonstrates the potential for robust, multimodal diagnostic systems in medical imaging.
# Results of RadFabric with Frozen Reasoning Agent
Traditional CXR Agents (1-7) demonstrate modest diagnostic capabilities with overall performance scores ranging from 0.229 to 0.527, and substantial variability across different pathologies. These agents generally perform better on conditions like Pleural Effusion (0.375-0.783) and Edema (0.312-0.609), while struggling with Fracture detection (0.096-0.269) and Lung Lesion identification (0.176-0.197). Notably, most traditional agents exhibit significant coverage gaps, with many unable to detect certain pathologies entirely, suggesting specialized rather than comprehensive diagnostic utility.
In contrast, the novel RadFabric agents (RadFabric-o1 and RadFabric-R1) represent a significant advancement with overall performance scores of 0.799 and 0.739 respectively, substantially outperforming all traditional counterparts. These agents provide comprehensive coverage across all 14 pathologies, with RadFabric-o1 achieving perfect scores (1.000) for Enlarged Cardiomediastinum and Fracture detection—conditions where traditional agents either perform poorly or lack capabilities entirely. This marked improvement in both performance and pathology coverage suggests that newer RadFabric technology offers considerably more reliable and versatile diagnostic support for clinical chest X-ray interpretation.
Table 1: Chest X-Ray (CXR) Agents and their pathology coverage (Acc).
Figure 2: Comparative results from our RadFabric system, CXR agents, and established report generation methods (CAMMAL and CheXagent) for chest x-ray image 1.
In Table 2, the results demonstrate the strengths and limitations of different methods in identifying lung opacity and pneumonia in chest X-rays. While the CXR agent (classification model) shows strong predictive capabilities for lesions, as seen in cases such as lung opacity (e.g., CXR Agent#3: 0.7804) and pneumonia (e.g., CXR Agent#3: 0.8529), report generation models like CAMMAL and CheXAgent may occasionally fail to explicitly mention these findings. For instance, CAMMAL noted ”hazy opacities at the lung bases” but attributed them to epicardial fat, while CheXAgent reported negative findings for the lungs. This highlights the potential for complementary use, where the classification model can detect lesions that report generators might overlook. RadFabric, our proposed method, integrates multiple CXR agents and report generation models, enabling a more robust analysis. By leveraging diverse perspectives, RadFabric minimizes the likelihood of missing lesions, achieving predictions that closely align with the ground truth labels (e.g., lung opacity: 0.7804, pneumonia: 0.7665). This integration underscores its potential for improving diagnostic accuracy.
As displayed in Fig. 3, the results also highlight the variability and potential biases of individual models, emphasizing the importance of integrating multiple perspectives. For example, while CXR Agent#2 and CXR Agent#3 provided high scores for lung opacity (0.861 and 0.7746, respectively), they differed significantly in their pneumonia predictions, with CXR Agent#2 failing to provide a score and CXR Agent#3 predicting 0.6436. Similarly, CXR Agent#7 showed a high pneumonia prediction (0.9656) but did not provide a lung opacity score. This inconsistency across models indicates that relying on a single agent may lead to incomplete or skewed results. Additionally, the report generation models, CAMMAL and CheXAgent, not only differ in the findings they report but also in how they interpret the clinical context. CAMMAL mentioned both ”pulmonary edema” and ”a right-sided pneumonia,” showing a broader diagnostic scope, while CheXAgent focused on ”mild pulmonary edema” and omitted any mention of pneumonia. This discrepancy indicates that report generation models are subject to interpretive limitations and may miss critical findings depending on the phrasing or contextual emphasis.
RadFabric addresses these challenges by combining the strengths of both classification models and report generators. Its ability to aggregate and reconcile outputs ensures a more balanced and complete understanding of potential abnormalities. For instance, RadFabric captures the high lung opacity score (0.861) from key agents like CXR Agent#2 while maintaining sensitivity to pneumonia findings (0.6081) by incorporating information from agents and contextual cues from report generators. This multi-faceted approach reduces reliance on any single model’s performance and mitigates the risk of diagnostic gaps, making RadFabric a more reliable and robust solution for clinical applications.
# Results of RadFabric with Trainable Reasoning Agent
When we train the reasoning agent with the GRPO strategy, the overall accuracy achieves 0.897, which surpasses the RadFabric with the frozen reasoning agent by a large margin of $9 . 8 \%$ . This suggests the trainable reasoning agent can learn and adapt to the nuanced requirements of clinical diagnosis, improving its ability to synthesize multimodal data for more precise and contextually relevant conclusions. The Guided Reward Policy Optimization (GRPO) strategy further enhances the agent’s capacity to prioritize clinically significant reasoning pathways, ensuring that the generated diagnoses are both evidence-based and aligned with real-world medical expectations. This adaptability is particularly significant in complex cases where subtle or overlapping pathologies may otherwise be misinterpreted or overlooked. For instance, the trainable reasoning agent demon
# Input CXR Image
# <think>
八 1. For Atelectasis, the highest-ranked model is Chexpert with a score of 0.3857, followed by JFHealthcare with 0.6386 and Torchxrayvision_all with 0.8503. The highest score is from Torchxrayvision_all.
2. For Cardiomegaly, the highest-ranked model is Chexpert with a score of 0.0285, followed by JFHealthcare with 0.4693 and Torchxrayvision_all with 0.7747. The highest score is from Torchxrayvision_all.
3. For Consolidation, the highest-ranked model is Chexpert with a score of 0.1023, followed by JFHealthcare with 0.8629 and Torchxrayvision_all with 0.7348. The highest score is from JFHealthcare.
4. For Edema, the highest-ranked model is Chexpert with a score of 0.3867, followed by JFHealthcare with 0.7159 and Torchxrayvision_all with 0.6482. The highest score is from JFHealthcare.
5. For Enlarged Cardiomediastinum, the highest-ranked model is Torchxrayvision_all with a score of 0.5783, followed by Torchxrayvision_mimic with 0.6209. The highest score is from Torchxrayvision_all.
6. For Fracture, the highest-ranked model is Torchxrayvision_pc with a score of 0.5464, followed by Torchxrayvision_chex with 0.7484. The highest score is from Torchxrayvision_chex.
7. For Lung Lesion, the highest-ranked model is Torchxrayvision_all with a score of 0.5989, followed by Torchxrayvision_chex with 0.7427. The highest score is from Torchxrayvision_chex.
8. For Lung Opacity, the highest-ranked model is Torchxrayvision_all with a score of 0.8354, followed by Torchxrayvision_chex with 0.9507. The highest score is from Torchxrayvision_chex.
9. For No Finding, since there are multiple findings, the probability is low.
10. For Pleural Effusion, the highest-ranked model is Chexpert with a score of 0.3005, followed by JFHealthcare with 0.7408 and Torchxrayvision_all with 0.8891. The highest score is from Torchxrayvision_all.
11. For Pleural Other, no model supports this, so the probability is 0.
12. For Pneumonia, the highest-ranked model is Torchxrayvision_pc with a score of 0.6681, followed by Torchxrayvision_all with 0.6453. The highest score is from Torchxrayvision_pc.
13. For Pneumothorax, the highest-ranked model is Torchxrayvision_all with a score of 0.5357, followed by Torchxrayvision_chex with 0.5220. The highest score is from Torchxrayvision_all.
14. For Support Devices, no model supports this, so the probability is 0. </think> \boxed{[(Atelectasis, 0), (Cardiomegaly, 0), (Consolidation, 0), (Edema, 0), (Enlarged Cardiomediastinum, 0), (Fracture, 0), (Lung Lesion, 0), (Lung Opacity, 0), (No Finding, 0), (Pleural Effusion,
1), (Pleural Other, 0), (Pneumonia, 0), (Pneumothorax, 0), (Support Devices, 0)]} Ground-truth Answer: (Atelectasis, 1), (Cardiomegaly, 0), (Consolidation, 0), (Edema, 0), (Enlarged Cardiomediastinum, 0), (Fracture, 0), (Lung Lesion, 0), (Lung Opacity, 0), (No Finding, 0), (Pleural Effusion, 1), (Pleural Other, 0), (Pneumonia, 0), (Pneumothorax, 0), (Support Devices, 0)
strates superior performance in distinguishing conditions with similar visual manifestations, such as pneumonia versus atelectasis. By leveraging cross-modal attention mechanisms and iterative learning, the agent refines its understanding of spatial patterns in visual data (e.g., Grad-CAM heatmaps) and correlates them with textual inputs like patient history or symptom descriptions. This capability not only improves diagnostic accuracy but also enhances transparency, as the reasoning process can be traced back to specific visual and textual evidence.
In Fig. 4, the visual result generated by the RadFabric system with a trainable reasoning agent demonstrates both the strengths and current limitations of multi-agent, multimodal CXR analysis. As shown in the ¡think¿ reasoning trace, the system leverages multiple specialized models—such as Chexpert, JFHealthcare, and various Torchxrayvision variants—to independently assess a wide spectrum of pathologies. For most conditions, the model correctly identifies the absence of findings, and it successfully detects pleural effusion, in agreement with the ground-truth label. Notably, the agent assigns the highest probability for pleural effusion based on the Torchxrayvision all model, which aligns with the reference standard. However, the system fails to recognize the presence of atelectasis, despite high scores from several component models (e.g., Torchxrayvision all: 0.8503), ultimately outputting a negative prediction for this pathology. This discrepancy highlights a challenge in model aggregation and decision fusion, where high individual model confidence does not always translate into a positive final prediction—potentially due to conservative thresholding or conflicting evidence among agents. The visual evidence, likely reflected in the Grad-CAM heatmaps, supports the model’s high confidence for pleural effusion, suggesting robust localization and anatomical grounding for this finding. Overall, the result exemplifies RadFabric’s ability to synthesize multi-source data and generate interpretable outputs, yet also underscores the importance of further optimizing integration strategies to reduce false negatives, particularly in cases of co-existing pathologies such as atelectasis.
# Method
# Overview
To address the need for faster, more accurate, and transparent chest X-ray (CXR) diagnosis, we developed RadFabric. This multi-agent system functions as an explainable, context-aware foundation model that integrates visual analysis with clinical reasoning to assist or automate radiological interpretation. The analytical workflow of RadFabric is managed by our distinct agents. The process begins with two parallel inputs: 1) the CXR Agent Group provides an initial diagnosis and visual map of potential disease areas, and 2) the Report Agent Group creates a structured clinical report. These outputs are then processed by the Anatomical Interpretation Agent, which analyzes the spatial location of the visual findings and translates them into precise anatomical terminology. In the final stage, the Reasoning Agent integrates all preceding information, including the diagnosis, report, and anatomical analysis, to produce a comprehensive assessment of chest pathologies through higher-order synthesis. The following subsections describe the four components in detail.
# CXR Agent Group
The CXR Agent Group consists of eight specialized agents 1–4, each trained on distinct datasets to detect specific pathologies, as detailed in Table 2. When presented with a CXR image, each agent independently performs two critical functions. First, it generates a textual diagnostic hypothesis (e.g., ”cardiomegaly” or ”atelectasis”). Second, it produces a corresponding visual interpretation map using Gradient-weighted Class Activation Mapping (Grad-CAM) 5 to localize the image regions that informed its finding. This dual-output design provides both a clinical assessment and its visual evidence.
The collective outputs from all agents–the set of textual hypotheses and their associated visual maps–are then aggregated. This parallelized analysis ensures comprehensive coverage across a wide range of chest abnormalities, with deliberate overlap between agents enhancing detection robustness. This aggregated, multimodal information is then forwarded to the next stage of the RadFabric pipeline, establishing a rich foundation of text and visual evidence for higher-order diagnostic reasoning.
# Report Agent Group
The Report Agent Group employs two specialized multimodal models—ChexAgent 6 and Qwen2- VL-7b 7—to generate comprehensive clinical reports from chest radiographs. This dual-agent approach is a deliberate design choice to enhance the system’s robustness and interpretive depth. By having each VLM independently analyze the image, the system benefits from complementary perspectives, as the models may highlight different abnormalities or interpret the same findings with varying clinical emphasis.
For a given input image, each agent produces a detailed clinical report that documents relevant observations, potential pathologies, and a preliminary interpretation. These two reports are then aggregated and passed to the subsequent stages of the RadFabric pipeline. There, they serve as critical narrative inputs for the final diagnostic synthesis, ensuring the system’s ultimate assessment is informed by both the comprehensive radiological reporting from this group and the targeted pathology detection described in the preceding section.
# Anatomical Interpretation Agent
The Anatomical Interpretation Agent contextualizes visual findings by mapping highlighted disease regions from the CXR Agent Group to their precise locations within the chest radiograph. This process anchors the abstract visual markers to a standardized anatomical framework, thereby enhancing their diagnostic value.
Specifically, given a CXR image, the agent first performs anatomical segmentation, dividing the radiograph into key structural regions including the esophagus, left lung, right lung, and diaphragmatic surfaces. This segmentation establishes a standardized anatomical framework that serves as a reference map for subsequent analysis. The agent then employs a spatial correlation algorithm to analyze the relationship between GradCAM-highlighted regions from the CXR Agent Group and the segmented anatomical structures. This analysis quantifies the degree of overlap and spatial positioning of potential pathological areas relative to specific anatomical landmarks.
Based on these spatial correlations, the agent generates precise anatomical descriptions in clinical language. For example, if Grad-CAM highlights indicating “pleural effusion” predominantly overlap with the left lung segment, the agent produces the statement: “The effusion is localized to the left lower lung field, with associated blunting of the costophrenic angle.” This step effectively translates the visual evidence into clinically meaningful spatial information. By providing this anatomical precision, the agent enhances the interpretability of the visual findings and facilitates more accurate clinical reasoning in the subsequent stages of the diagnostic pipeline. This anatomical grounding is particularly valuable for conditions where location significantly influences differential diagnosis and treatment planning.
# Reasoning Agent
The Reasoning Agent represents the culmination of our RadFabric system, synthesizing inputs from all previous agents to perform sophisticated clinical reasoning and generate comprehensive diagnostic assessments. This agent integrates initial diagnosis results, anatomical context, and preliminary clinical interpretations into a cohesive diagnostic framework.
We leverage advanced large reasoning models—specifically OpenAI o1 8 or DeepSeekR1 9—as the foundation for our reasoning agent due to their exceptional capabilities in complex logical inference, medical knowledge integration, and clinical decision-making. These models have demonstrated superior performance in connecting disparate pieces of evidence and resolving potential contradictions between different information sources. In addition to OpenAI o1 and DeepSeek-R1, we can use other open-source multi-modal large language models (MLLMs) as our reasoning model.
The reasoning process follows a structured multi-step approach: First, the agent aggregates inputs from all preceding components, including initial diagnosis results from the CXR Agent Group, anatomical correlations from the Anatomical Interpretation Agent, and structured reports from the Report Agent Group. This aggregation creates a comprehensive information package with textual evidence. Second, the agent conducts a systematic cross-validation of findings across initial diagnosis results from different CXR agents, identifying consistencies and resolving apparent contradictions. Finally, the agent generates a comprehensive assessment for each potential pathology, assigning confidence levels based on the strength and consistency of supporting evidence.
To enable the reasoning agent to develop robust and interpretable reasoning capabilities, we employ Guided Reward Policy Optimization (GRPO) as our core training strategy. Under this approach, the model is trained using structured prompts that encourage a ”think-then-answer” reasoning pattern: the agent first explicitly articulates its reasoning process—enclosed within delimiters for clarity—and then presents its final diagnostic conclusions in a standardized, easily extractable format. GRPO provides reward signals that incentivize both accurate predictions and adherence to the required format, promoting not only diagnostic correctness but also transparency in the agent’s logical pathway. This explicit separation of reasoning and conclusion ensures that each diagnostic output is accompanied by a clear, step-by-step explanation, facilitating interpretability and enabling thorough downstream evaluation of both format adherence and clinical accuracy.
This reasoning-centric approach enhances diagnostic transparency and enables explainable AI by providing clinicians with not only the final diagnostic conclusions but also the logical pathway through which these conclusions were reached. By maintaining a clear chain of evidence from visual findings to anatomical context to clinical reasoning, the RadFabric system offers interpretable and clinically sound chest X-ray analysis that can supplement radiological expertise in clinical settings.
Table 2: Chest X-Ray (CXR) Agents and their pathology coverage.
# Implementation Details
The RadFabric framework is implemented on the MCP server, which utilizes the MCP protocol to communicate with various MCP servers. All components of the framework—including CXR agents, report agents, the anatomical interpretation agent, and the reasoning agent—are developed and deployed on the MCP server. An MCP client is designed to interact with the server, enabling the processing of chest X-ray images and generating diagnostic predictions. The reasoning agent in our RadFabric system is trained using a reinforcement learning approach built on the EasyR1 framework, employing Generative Reward-conditioned Policy Optimization (GRPO) to enhance both diagnostic accuracy and interpretability. The base model, Qwen2.5-14B-Instruct, is fine-tuned for chest X-ray analysis within this framework. During training, carefully designed system prompts guide the agent to follow a structured ”think-then-answer” reasoning pattern, where the model first explicitly articulates its step-by-step reasoning (enclosed in delimiter tags), followed by presenting final disease probability predictions inside a box block. The GRPO algorithm optimizes the model by providing reward signals that incentivize both accurate predictions and strict adherence to the specified output format. Training is conducted for up to 3 epochs on 8 A100 GPUs, with a batch size of 512 and a learning rate of 1.0e-6. The evaluation framework assesses performance by checking for format adherence using regular expression pattern matching and by comparing disease probability predictions against ground truth labels.
1. T. Dai, R. Zhang, F. Hong, J. Yao, Y. Zhang, and Y. Wang, “Unichest: Conquer-and-divide pre-training for multi-source chest x-ray classification,” IEEE Transactions on Medical Imaging, 2024.
2. J. P. Cohen, J. D. Viviano, P. Bertin, P. Morrison, P. Torabian, M. Guarrera, M. P. Lungren, A. Chaudhari, R. Brooks, M. Hashir et al., “Torchxrayvision: A library of chest $\mathbf { X }$ -ray datasets and models,” in International Conference on Medical Imaging with Deep Learning. PMLR, 2022, pp. 231–249.
3. D. Banik, “Robust stochastic gradient descent with momentum based framework for enhanced chest x-ray image diagnosis,” Multimedia Tools and Applications, pp. 1–24, 2024.
4. P. Rajpurkar, J. Irvin, K. Zhu, B. Yang, H. Mehta, T. Duan, D. Ding, A. Bagul, C. Langlotz, K. Shpanskaya et al., “Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning,” arXiv preprint arXiv:1711.05225, 2017.
5. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-cam: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 618–626.
6. Z. Chen, M. Varma, J.-B. Delbrouck, M. Paschali, L. Blankemeier, D. Van Veen, J. M. J. Valanarasu, A. Youssef, J. P. Cohen, E. P. Reis et al., “Chexagent: Towards a foundation model for chest $\mathbf { \boldsymbol { x } }$ -ray interpretation,” in AAAI 2024 Spring Symposium on Clinical Foundation Models.
7. S. Bai, K. Chen, X. Liu, J. Wang, W. Ge, S. Song, K. Dang, P. Wang, S. Wang, J. Tang, H. Zhong, Y. Zhu, M. Yang, Z. Li, J. Wan, P. Wang, W. Ding, Z. Fu, Y. Xu, J. Ye, X. Zhang, T. Xie, Z. Cheng, H. Zhang, Z. Yang, H. Xu, and J. Lin, “Qwen2.5-vl technical report,” arXiv preprint arXiv:2502.13923, 2025.
8. A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney et al., “Openai o1 system card,” arXiv preprint arXiv:2412.16720, 2024.
9. DeepSeek-AI, “Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning,” 2025. [Online]. Available: https://arxiv.org/abs/2501.12948 | Chest X ray (CXR) imaging remains a critical diagnostic tool for thoracic
conditions, but current automated systems face limitations in pathology
coverage, diagnostic accuracy, and integration of visual and textual reasoning.
To address these gaps, we propose RadFabric, a multi agent, multimodal
reasoning framework that unifies visual and textual analysis for comprehensive
CXR interpretation. RadFabric is built on the Model Context Protocol (MCP),
enabling modularity, interoperability, and scalability for seamless integration
of new diagnostic agents. The system employs specialized CXR agents for
pathology detection, an Anatomical Interpretation Agent to map visual findings
to precise anatomical structures, and a Reasoning Agent powered by large
multimodal reasoning models to synthesize visual, anatomical, and clinical data
into transparent and evidence based diagnoses. RadFabric achieves significant
performance improvements, with near-perfect detection of challenging
pathologies like fractures (1.000 accuracy) and superior overall diagnostic
accuracy (0.799) compared to traditional systems (0.229 to 0.527). By
integrating cross modal feature alignment and preference-driven reasoning,
RadFabric advances AI-driven radiology toward transparent, anatomically
precise, and clinically actionable CXR analysis. | [
"cs.CV",
"cs.CL"
] |
# I. Introduction
R hRaEsCbEeNenTLpYr, poesresdo, asluiczhe f fdedrearteatd laeranrinig P(FL) with meta-learning. It tailors learning processes to provide personalized models for individual clients while benefiting from the global perception offered by FL, thereby capturing both generalization and personalization in the models. PFL can strike a balance between personalized models and the global model, e.g., via a global-regularized multi-task framework [1]. It can provide customized services for applications with heterogeneous local data distributions or tasks, e.g., intelligent Internet of Things networks with geographically dispersed clients [2]–[4]. A popular PFL technique is called
Ditto, which was developed to adapt to the heterogeneity in FL settings by simultaneously learning a global model and distinct personal models for multiple agents [1].
While preserving personal data locally throughout its training process, similar to FL, PFL can still suffer from privacy leakage due to the incorporation of FL. Differential privacy (DP) [5] can be potentially applied to protect the privacy of Ditto. In each round, every client trains two models separately, including its local model and personalized model, based on its local dataset and the global model broadcast by the server in the last round. With DP, the clients perturb their local models by adding noise before uploading them to the server, where the perturbed local models are aggregated to update the global model needed for the clients to train their local models and personalized models further. This privacy-preserving PFL model, namely DP-Ditto, is a new framework. It is important to carefully balance convergence, privacy, and fairness in DP-Ditto. However, the impact of privacy preservation, i.e., the incorporation of DP, on the convergence and fairness of the personalized models has not been investigated in the literature. Let alone a PFL design with a balanced consideration between convergence, privacy, and fairness.
Although privacy and fairness have been studied separately in the contexts of both FL and PFL, e.g., [6]–[21], they have never been considered jointly in PFL, such as Ditto [1]. Their interplay has been overlooked. The majority of the existing studies, e.g., [21]–[30], have focused on the model accuracy of PFL. Some other existing works, e.g., [1], [31], [32], have attempted to improve the performance distribution fairness of FL. None of these studies has addressed the influence of DP on the fairness and accuracy of PFL.
This paper studies the trade-off between privacy guarantee, model convergence, and performance distribution fairness of privacy-preserving PFL, more specifically, DP-Ditto. We analyze the convergence upper bound of the personalized learning (PL) and accordingly optimize the aggregation number of FL given a privacy budget. We also analyze the fairness of PL in DP-Ditto on a class of linear problems, revealing the possibility of maximizing the fairness of privacypreserving PFL given a privacy budget and aggregation number. To the best of our knowledge, this is the first work that investigates the trade-off among the privacy, convergence, and fairness of PFL, and optimizes PFL for convergence under the constraints of performance distribution fairness and privacy requirements. The major contributions of this paper are summarized as follows:
• We propose a new privacy-preserving PFL framework, i.e., DP-Ditto, by incorporating an $( \epsilon , \delta )$ -DP perturbation mechanism into Ditto. This extension is non-trivial and necessitates a delicate balance between convergence, privacy, and fairness. A convergence upper bound of DP-Ditto is derived, capturing the impact of DP on the convergence of personalized models. The number of global aggregations is identified by minimizing the convergence upper bound. We analyze the fairness of DP-Ditto on a class of linear problems to reveal the conditional existence and uniqueness of the optimal setting balancing convergence and fairness, given a privacy requirement.
Extensive experiments validate our convergence and fairness analysis of DP-Ditto and the obtained optimal FL aggregation number and weighting coefficients of FL versus PL. Three image classification tasks are performed using deep neural network (DNN), multi-class linear regression (MLR), and convolutional neural network (CNN) on the Federated MNIST, Federated FMNIST, and Federated CIFAR10 datasets. DP-Ditto can outperform its benchmarks, i.e., the DP-perturbed FedAMP [24], pFedMe [22], APPLE [25], and FedALA [26], by $9 9 . 9 8 \%$ , $3 2 . 7 1 \%$ , $9 7 . 0 4 \%$ , and $9 9 . 7 2 \%$ , respectively, in fairness and $5 9 . 0 6 \%$ , $9 . 6 6 \%$ , $2 8 . 6 7 \%$ , and 64.31% in accuracy.
The rest of this paper is structured as follows. Section II presents a review of related works. Section III outlines the system and threat models of DP-Ditto and analyzes its privacy and DP noise variance. In Section IV, the convergence upper bound of DP-Ditto is established, and the optimal number of FL global aggregations is obtained accordingly. In Section V, we analyze the fairness of PL on a class of linear regression problems to demonstrate the feasibility of fairness maximization. The experimental results are discussed in Section VI. The conclusions are given in Section VII.
# II. Related Work
# A. Personalization
PFL frameworks have been explored to combat statistical heterogeneity through transfer learning (TL) [28], metalearning [21], [27], and other forms of multitask learning (MTL) [22]–[26]. None of these has addressed the fairness among the participants of PFL. TL conveys knowledge from an originating domain to a destination domain. TL-based FL enhances personalization by diminishing domain discrepancy of the global and local models [33]. FedMD [28] is an FL structure grounded in TL and knowledge distillation (KD), enabling clients to formulate autonomous models utilizing their individual private data. Preceding the FL training and KD, TL is implemented by employing a model previously trained on a publicly available dataset.
Meta-learning finds utility in FL in enhancing the global model for rapid personalization. In [27], a variation of FedAvg, named Per-FedAvg, was introduced, leveraging the Model-Agnostic Meta-Learning (MAML). It acquired a proficient initial global model that is effective on a novel heterogeneous task and can be achieved through only a few gradient descent steps. You et al. [29] further proposed a Semi-Synchronous Personalized FederatedAveraging (PerFedS) mechanism based on MAML, where the server sends a meta-model to a set of UEs participating in the global updating and the stragglers in each round. In another meta-learning-based PFL framework [21], a privacy budget allocation scheme based on R´enyi DP composition theory was designed to address information leakage arising from two-stage gradient descent.
MTL trains a model to simultaneously execute several related tasks. By considering an FL client as a task, there is the opportunity to comprehend the interdependence among the clients manifested by their diverse local data. In [22], pFedMe employing Moreau envelopes as the regularized loss functions for clients was recommended to disentangle the optimization of personalized models from learning the global model. The global model is obtained by aggregating the local models updated based on the personalized models of the clients. Each client’s personalized model maintains a bounded distance from the global model. In [23], FedProx was formulated by incorporating a proximal term into the local subproblem. Consequently, the contrast was delineated between the global and local models to ease the modulation of the influence of local updates. In [30], a federated multitask learning (FMTL) framework was developed, where the server broadcasts a set of global models aggregated based on the local models of different clusters of clients, and each client selects one of the global models for its local model updating.
Huang et al. [24] integrated PFL with supplementary terms and employed a federated attentive message passing (FedAMP) strategy to mitigate the impact of diverse data. Consequently, the convergence of the FedAMP was guaranteed. A protocol named APPLE [25] was proposed to improve the personalized model of each client based on the others’ models. Clients obtain the personalized models locally by aggregating the core models of other clients downloaded from the server. The aggregation weights and the core models are locally learned from the personalized model by adding a proximal term to the local objectives. Instead of overwriting the old local model with the downloaded global model, FedALA [26] aggregates the downloaded global model and the old local model for local model initialization.
These existing PFL frameworks [21]–[28] have focused primarily on model accuracy. None of these has taken the fairness of the personalized models into consideration.
# B. Privacy
Existing studies [9]–[13] have explored ways to integrate privacy techniques into FL to provide a demonstrable assurance of safeguarding privacy. However, little to no consideration has been given to the personalization of learning models and their fairness. In [9], a DP-based framework was suggested to avert privacy leakage by introducing noise to obfuscate the local model parameters. In [10], three local DP (LDP) techniques were devised to uphold privacy, where LDP was incorporated into FL to forecast traffic status, mitigate privacy risks, and diminish communication overhead in crowdsourcing scenarios. The authors of [11] suggested FL with LDP, wherein LDP-based perturbation was applied during model uploading, adhering to individual privacy budgets. Liu et al. [17] proposed a transceiver protocol to maximize the convergence rate under privacy constraints in a MIMO-based DP FL system, where a server performs over-the-air model aggregation and parallel private information extraction from the uploaded local gradients with a DP mechanism.
In [12], DP noises were adaptively added to local model parameters to preserve user privacy during FL. The amplitude of DP noises was adaptively adjusted to balance preserving privacy and facilitating convergence. FedDual [13] was designed to preserve user privacy by adding DP noises locally and aggregating asynchronously via a gossip protocol. Noise-cutting was adopted to alleviate the impact of the DP noise on the global model. Hu et al. [14] proposed privacy-preserving PFL using the Gaussian mechanism, which provides a privacy guarantee by adding Gaussian noise to the uploaded local updates. In [15], the Gaussian mechanism was considered in a mean-regularized MTL framework, and the accuracy was analyzed for singleround FL using a Bayesian framework. In [21], the allocation of a privacy budget was considered for meta-learning-based PFL. In [18], differentially private federated MTL (DPFML) was designed for human digital twin systems by integrating DPFML and a computational-efficient blockchain-enabled validation process.
These existing works [9]–[15], [21] have given no consideration to fairness among the participants in FL, especially in the presence of statistical heterogeneity.
# C. Fairness
Some existing studies, e.g., [1], [31], [32], have attempted to improve performance distribution fairness, i.e., by mitigating the variability in model accuracy among different clients. Yet, none has taken user privacy into account. In [31], $q \cdot$ - FFL was proposed to achieve a more uniform accuracy distribution across clients. A parameter $q$ was used to reweight the aggregation loss by assigning bigger weights to clients undergoing more significant losses. In [32], Fed$\mathbf { M G D A + }$ was suggested to enhance the robustness of the model while upholding fairness with positive intentions. A multi-objective problem was structured to diminish the loss functions across all clients. It was tackled by employing Pareto-steady resolutions to pinpoint a collective descent direction suitable for all the chosen clients. Li et al. [1] designed a scalable federated MTL framework Ditto, which simultaneously learns personalized and global models in a global-regularized framework. Regularization was introduced to bring the personalized models in proximity to the optimal global model. The optimal weighting coefficient of Ditto was designed in terms of fairness and robustness. These studies [1], [31], [32] have overlooked privacy risks or failed to address the influence of DP on fairness.
# III. Framework of PFL
# A. PFL
PFL consists of a server and $N$ clients. $\mathbb { N }$ denotes the set of clients. $\mathcal { D } _ { n }$ denotes the local dataset at client ${ \boldsymbol { n } } \in { \mathbb { N } } , \ { \mathcal { D } }$ is the collection of all data samples. $\begin{array} { r } { \left| \mathcal { D } \right| = \sum _ { n = 1 } ^ { N } \left| \mathcal { D } _ { n } \right| } \end{array}$ is the total size of all data samples, $\left| \cdot \right|$ stands for cardinality. Like Ditto, PFL has both global and personal objectives for FL and PL, respectively. At the server, the global objective is to learn a global model with the minimum global training loss:
$$
\operatorname* { m i n } _ { \omega } F ( F _ { 1 } ( \omega ) , \cdot \cdot \cdot , F _ { N } ( \omega ) ) ,
$$
where $\boldsymbol \omega \in \mathbb { R } ^ { d }$ is the model parameter with $d$ elements, $F _ { n } ( \cdot )$ is the local loss function of client ${ \boldsymbol { n } } \in \mathbb { N }$ , and $F ( \cdot , \cdot \cdot \cdot , \cdot )$ is the global loss function:
$$
F ( F _ { 1 } ( \omega ) , \cdot \cdot \cdot , F _ { N } ( \omega ) ) = \sum _ { n = 1 } ^ { N } p _ { n } F _ { n } ( \omega ) ,
$$
where $p _ { n } \triangleq \left| \mathcal { D } _ { n } \right| / \left| \mathcal { D } \right|$ with $\begin{array} { r } { \sum _ { n = 1 } ^ { N } p _ { n } = 1 } \end{array}$ . We assume the size of each client’s local dataset is the same, i.e., $\begin{array} { r } { p _ { n } = \frac { 1 } { N } } \end{array}$
To capture both generalization and personalization as in Ditto, for client $n$ , we encourage its personalized model to be close to the optimal global model, i.e.,
$$
\operatorname* { m i n } _ { \pmb { \varpi } _ { n } } f _ { n } ( \pmb { \varpi } _ { n } ; \boldsymbol { \omega } ^ { * } ) = \left( 1 - \frac { \lambda } { 2 } \right) F _ { n } ( \pmb { \varpi } _ { n } ) + \frac { \lambda } { 2 } \parallel \pmb { \varpi } _ { n } - \boldsymbol { \omega } ^ { * } \parallel ^ { 2 }
$$
$$
\mathrm { s . t . } \ \omega ^ { * } = \underset { \omega } { \arg \operatorname* { m i n } } \frac { 1 } { N } \sum _ { n = 1 } ^ { N } F _ { n } \left( \omega \right) ,
$$
where $f _ { n } ( \cdot )$ is the loss function of the personalized model; $\lambda \in [ 0 , 2 ]$ is a weighting coefficient that controls the tradeoff between the global and local models. When $\lambda = 0$ , PFL trains a local model for each client based on its local datasets. When $\lambda = 2$ , the personal objective becomes obtaining an optimal global model with no personalization. Let $\boldsymbol { { u } } _ { n } ^ { * }$ and $\varpi _ { n } ^ { * }$ be the optimal local model based on the local data and the optimal personalized model, i.e.,
$$
\begin{array} { r } { \pmb { u } _ { n } ^ { * } = \underset { \pmb { u } _ { n } } { \arg \operatorname* { m i n } } F _ { n } ( \pmb { u } _ { n } ) ; \pmb { \varpi } _ { n } ^ { * } = \underset { \pmb { \varpi } _ { n } } { \arg \operatorname* { m i n } } f _ { n } ( \pmb { \varpi } _ { n } ; \pmb { \omega } ^ { * } ) . } \end{array}
$$
According to (1)–(3), a local model $\mathbf { \delta } _ { \mathbf { u } _ { n } }$ is trained for global aggregation. A personalized model $\varpi _ { n }$ is adjusted according to local training and the global model at each client $n$ . The training of the global model and that of the personalized models are assumed to be synchronized, i.e., at each round $t + 1$ . Client $n$ updates its personalized model $\varpi _ { n } ^ { t + 1 }$ based on the global model $\omega ^ { t }$ updated at the $t$ -th round.
# B. Threat Model
The server may attempt to recover the training datasets or infer the private features based on the models uploaded by the clients. There may be external attackers who intend to breach the privacy of the clients. Although the clients train models locally, the local models that the clients share with the server can be analyzed to potentially compromise their privacy under inference attacks during learning [34] and model-inversion attacks during testing [35]. The private information can be recovered by the attackers.
Typical privacy-preserving methods for FL include homomorphic encryption, secure multi-party computation, and DP [36]. While preventing the server from deciphering local models, homomorphic encryption requires all devices to use the same private key and cannot stop them from eavesdropping on each other. Secure multi-party computation enables clients to collaboratively compute an arbitrary functionality, but requires multiple interactions in a learning process, e.g., public key sharing among clients for key agreement, at the expense of high computation and communication overhead [37]. Typically, homomorphic encryption and secure multi-party computation are computationally expensive and need a trusted third party for key agreement [38]. To this end, DP is employed to preserve the privacy of PFL in this paper.
TABLE 1. Notation and definitions
# C. FL With DP
Consider the threat model described in Section III–B. The risk of privacy breaches arises from uploading FL local models to the server for FL global model aggregation. To
preserve data privacy from the uploaded local models, a Gaussian DP mechanism can be used to guarantee $( \epsilon , \delta )$ -DP by adding artificial Gaussian noises [39]. Let $\mathbf { z } _ { n } ^ { t } \sim \mathcal { N } ( 0 , \sigma _ { u } ^ { 2 } )$
coonmthmeugnliocbatailonmrooduelnda. eisgatthieon $n$ Tishe iamdpd $t$ ievde $\begin{array} { r } { \mathbf { z } ^ { t } = \sum _ { n = 1 } ^ { N } \mathbf { z } _ { n } ^ { \dot { t } } } \end{array}$ $t$ noise of each client is independent and identically distributed
(i.i.d.). Each element in $\mathbf { z } ^ { t }$ follows $\mathcal { N } ( 0 , \sigma _ { z } ^ { 2 } )$ with $\sigma _ { z } ^ { 2 } = N \sigma _ { u } ^ { 2 }$ .
Note that the DP noise is only added when a client uploads its local model for global model aggregation. Before uploading its local model in round $t$ , each client $n$ clips its local model to prevent gradient explosion, as given by
$$
{ \pmb u } _ { n } ^ { t + 1 } = { \pmb u } _ { n } ^ { t + 1 } / \operatorname* { m a x } ( 1 , \frac { \parallel { \pmb u } _ { n } ^ { t + 1 } \parallel } { C } ) ,
$$
where $C$ is the pre-determined clipping threshold to ensure $\Vert { \textbf { \em u } } _ { n } \parallel \leq C$ [12].
To guarantee $( \epsilon , \delta )$ -DP with respect to the data used in the training of the local model, the standard deviation of $\mathbf { z } _ { n } ^ { t }$ from the Gaussian mechanism should satisfy $\sigma _ { u } \ =$ $\frac { \Delta s \sqrt { 2 q T \ln ( 1 / \delta ) } } { \epsilon }$ [9], where $T$ is the maximum number of communication rounds and $q$ is the sampling ratio. Assume that all clients train and upload their local models at each communication round, i.e., $q = 1 / N$ . $\Delta s$ is the sensitivity of client $n$ ’s local training process, which captures the magnitude that a sample can change the training model in the worst case, as given by
$$
\Delta s = \operatorname* { m a x } _ { \mathscr { D } _ { n } , \mathscr { D } _ { n } ^ { \prime } } \left\| \pmb { u } _ { n } ( \mathscr { D } _ { n } ) - \pmb { u } _ { n } ( \mathscr { D } _ { n } ^ { \prime } ) \right\| ,
$$
where ${ \pmb u } _ { n } ( { \pmb D } _ { n } )$ and ${ \bf { u } } _ { n } ( \mathcal { D } _ { n } ^ { \prime } )$ are the local models obtained from the datasets $\mathcal { D } _ { n }$ and $\mathcal { D } _ { n } ^ { \prime }$ , respectively. Here, $\mathcal { D } _ { n } = \mathcal { D } \cup _ { s }$ and $\mathcal { D } _ { n } ^ { \prime } = \mathcal { D } \cup s ^ { \prime }$ are two adjacent datasets with the same size and differ by one sample, i.e., $s \in { \mathcal { D } } _ { n }$ , $s ^ { \prime } \in \mathcal { D } _ { n } ^ { \prime }$ , and $s \neq s ^ { \prime }$ . Considering the local model training from $\mathcal { D } _ { n }$ and $\mathcal { D } _ { n } ^ { \prime }$ , we have [9]
$$
\begin{array} { r l } & { \displaystyle \Delta s = \frac { \operatorname* { m a x } _ { \theta } } { D _ { n } , D _ { n } ^ { \prime } } \left\| \frac { 1 } { | \mathcal { D } _ { n } | } \sum _ { s ^ { \prime } \in \mathcal { P } _ { n } } \arg \operatorname* { m i n } _ { \omega } F _ { n } ( \omega , \mathcal { D } _ { n } ) \right. } \\ & { \quad \left. \quad - \frac { 1 } { | \mathcal { D } _ { n } ^ { \prime } | } \sum _ { s ^ { \prime \prime } \in \mathcal { P } _ { n } ^ { \prime } } \arg \operatorname* { m i n } _ { \omega } F _ { n } ( \omega , \mathcal { D } _ { n } ^ { \prime } ) \right\| } \\ & { \displaystyle \quad \operatorname* { m a x } _ { \theta , s ^ { \prime } } \| \arg \operatorname* { m i n } _ { \omega } F _ { n } ( \omega , s ) - \arg \operatorname* { m i n } _ { \omega } F _ { n } ( \omega , s ^ { \prime } ) \| } \\ & { \quad \quad = \frac { \operatorname* { m a x } _ { \theta } } { | \mathcal { D } _ { n } | } , } \end{array}
$$
Clearly, $\Delta s$ only depends on the size of datasets $\mid { \mathcal { D } } _ { n } \mid$ and the clipping threshold $C$ .
According to (6), the standard deviations of the DP noise $\sigma _ { u }$ per client and $\mathbf { z } ^ { t }$ are given by
$$
\sigma _ { u } = \frac { \Delta s \sqrt { 2 T N \ln \left( 1 / \delta \right) } } { \epsilon N } ; \sigma _ { z } = \frac { \Delta s \sqrt { 2 T \ln \left( 1 / \delta \right) } } { \epsilon } .
$$
The operations of PFL are summarized in Algorithm 1 and illustrated in Fig. 1. At each round $t$ , client $n$ executes local training, and updates its local model $\boldsymbol { \omega } _ { n } ^ { t + 1 }$ and personalized model $\varpi _ { n } ^ { t + 1 }$ . The learning rates of the local and personalized models are $\eta _ { \mathrm { G } }$ and $\eta _ { \mathrm { L } }$ , respectively. The noisy local model $\underset { \pmb { x } } { \sim } { } ^ { t + 1 }$ $\overset { \sim } { \boldsymbol { u } } _ { n } ^ { \phantom { \star } }$ after clipping and DP perturbation is uploaded by client $n$ to the server for updating the global model $\widetilde { \boldsymbol { \omega } } ^ { t + 1 }$
# IV. Convergence of Privacy-Preserving PFL
This section establishes the convergence upper bound of PFL and optimizes the number $T$ of aggregation rounds to minimize the convergence upper bound of PL. The following assumptions facilitate the convergence analysis of PFL.
# Assumption 1. $\forall n \in \mathbb { N }$ ,
$F _ { n } \left( \cdot \right)$ is $\mu$ -strongly convex $\ [ 4 0 ] - [ 4 2 ]$ and $L$ -smooth [41]–[43], i.e., $\begin{array} { r } { F \left( \omega \right) - F \left( \omega ^ { * } \right) \leq \frac { 1 } { 2 \mu } \parallel \nabla F \left( \omega \right) \parallel ^ { 2 } } \end{array}$
# Algorithm 1 Privacy-Preserving PFL
Input: $T , \lambda , \omega ^ { 0 }$ , $\{ \varpi _ { n } ^ { 0 } \} _ { n \in \mathbb { N } }$ , $N$ , $\eta _ { \mathrm { G } }$ , ηL, ϵ and $\delta$ .
Output: $\omega ^ { T }$ , $\{ \Lleftarrow \infty _ { n } ^ { T } \} _ { n \in \mathbb { N } }$ .
1: for $t = \{ 0 , \cdots , T - 1 \}$ do
2: $/ /$ Local training process for global model;
3: for ${ \boldsymbol { n } } \in \mathbb { N }$ do
4: Obtain $\omega ^ { t }$ and let ${ \boldsymbol u } _ { n } ^ { t } = \boldsymbol { \omega } ^ { t }$ ;
5: Update the local model: ${ \mathbf { \boldsymbol { \mathsf { u } } } } _ { n } ^ { t + 1 } = { \mathbf { \boldsymbol { \mathsf { u } } } } _ { n } ^ { t } - \eta _ { \mathrm { G } } \nabla F _ { n } ( { \mathbf { \boldsymbol { \mathsf { u } } } } _ { n } ^ { t } )$ . 6: Clip the local model: utn+1
$\begin{array} { r } { \mathbf u _ { n } ^ { t + 1 } / \operatorname* { m a x } ( 1 , \frac { \| \mathbf u _ { n } ^ { t + 1 } \| } { C } ) } \end{array}$
7: Add noise and upload: $\widetilde { \pmb { u } } _ { n } ^ { t + 1 } = { \pmb { u } } _ { n } ^ { t + 1 } + { \pmb { \mathrm { z } } } _ { n } ^ { t + 1 }$
8: $/ /$ Local training process for personalized model; 9: Update personalized model $\varpi _ { n } ^ { t + 1 }$ :
10: $\begin{array} { r } { \varpi _ { n } ^ { t + 1 } = \varpi _ { n } ^ { t } - \eta _ { \mathrm { L } } \big ( \left( 1 - \frac { \lambda } { 2 } \right) \nabla \bar { F } _ { n } ( \varpi _ { n } ^ { t } ) + \lambda ( \varpi _ { n } ^ { t } - } \end{array}$ $\omega ^ { t } )$ );
11: end for
12: $/ /$ Global model aggregating process;
13: Update the global model: $\begin{array} { r } { \tilde { \omega } ^ { \tilde { t } + \mathrm { i } } = \frac { 1 } { N } \sum _ { n = 1 } ^ { N } \tilde { \boldsymbol { u } } _ { n } ^ { t + 1 } } \end{array}$ ; and $\| ~ \nabla F ( \omega ) - \nabla F ( \omega ^ { \prime } ) ~ \| \leq L ~ \| ~ \omega - \omega ^ { \prime } ~ \|$ . Here, $\mu$ and $L$ are constants;
The global learning rate $\begin{array} { r } { \eta _ { \mathrm { G } } \le \frac { 2 } { L } } \end{array}$ , and $\textstyle \mu > { \frac { 2 - 2 \lambda } { 2 - \lambda } }$ ;
$\bullet \mathbb { E } \left[ \| \nabla F _ { n } ( \omega ^ { t } ) \| ^ { 2 } \right] \leq G _ { 0 } ^ { 2 }$ with $G _ { 0 }$ being a constant.; $\parallel \boldsymbol { \mathbf { u } } _ { n } ^ { * } - \boldsymbol { \omega } ^ { * } \parallel \leq M$ , where $M$ is a constant.
FIGURE 1. The diagram of PFL: In each round, every client trains its local model and its personalized model based on its local dataset and the global model broadcast by the server in the last round. Then, the clients perturb and upload their local models to the server, and the server aggregates the perturbed local models into the global model and broadcasts the global model.
# A. Convergence Analysis
# 1) Convergence of FL
The convergence upper bound of $\mathrm { F L }$ with DP has been established in the literature [9, Eq. (16)]
$$
\mathbb { E } \left[ F \left( \stackrel { \sim } { \omega } ^ { T } \right) - F \left( \omega ^ { * } \right) \right] \leq \varepsilon _ { \mathrm { G } } ^ { T } \Psi _ { 1 } + \left( 1 - \varepsilon _ { \mathrm { G } } ^ { T } \right) \frac { \varphi T } { N \epsilon ^ { 2 } } ,
$$
where, for conciseness, $\varepsilon _ { \mathrm { G } } = 1 - 2 \mu \eta _ { \mathrm { G } } + \mu \eta _ { \mathrm { G } } ^ { 2 } L$ , $\Psi _ { 1 } =$ $F \left( \omega ^ { 0 } \right) - F \left( \omega ^ { * } \right)$ , and = L2∆s2 ln(1/δ) . Clearly, the DP noise increases the convergence upper bound with $\begin{array} { r } { \varphi = \frac { L ^ { 2 } \sigma _ { z } ^ { 2 } } { 2 T } } \end{array}$
# 2) Convergence of PL
Under Assumption 1, the convergence upper bound of PL is established in the following.
Lemma 1. Given the $P L$ rate $\eta _ { \mathrm { L } }$ and the weighting coefficient $\lambda ,$ under Assumption 1, the expected difference between the personalized model $\widetilde { \boldsymbol { \varpi } } _ { n } ^ { t + 1 }$ and the optimum $\varpi _ { n } ^ { * }$ at the end of the $t$ -th communication round is upper-bounded by
$$
\begin{array} { r l } & { \mathbb { E } \left[ \Vert \widetilde { \textbf { \varpi } } _ { n } ^ { t + 1 } - \pmb { \varpi } _ { n } ^ { * } \Vert ^ { 2 } \right] \leq \varepsilon _ { \mathrm { L } } \mathbb { E } \left[ \Vert \widetilde { \textbf { \varpi } } _ { n } ^ { t } - \pmb { \varpi } _ { n } ^ { * } \Vert ^ { 2 } \right] + } \\ & { \left( \eta _ { \mathrm { L } } ^ { 2 } + \eta _ { \mathrm { L } } ^ { 2 } \lambda ^ { 2 } \right) G + \frac { 4 \eta _ { \mathrm { L } } ^ { 2 } \lambda ^ { 2 } + 2 \eta _ { \mathrm { L } } \lambda ^ { 2 } } { \mu } \mathbb { E } \left[ F \left( \widetilde { \omega } ^ { t } \right) - F ( \omega ^ { * } ) \right] , } \end{array}
$$
Proof: See Appendix B.
Theorem 1. Under Assumption 1, the convergence upper bound of $P L$ after $T$ aggregation rounds is given as follows: When $\varepsilon _ { \mathrm { L } } \neq \varepsilon _ { \mathrm { G } }$ ,
$$
\begin{array} { r l } & { \mathbb { E } \left[ \| \widetilde { \mathbf { \varpi } } _ { n } ^ { T } - \boldsymbol { \varpi } _ { n } ^ { * } \| ^ { 2 } \right] \leq \varepsilon _ { \mathrm { L } } ^ { T } \Psi _ { 2 } + \left( 1 + \lambda ^ { 2 } \right) \eta _ { \mathrm { L } } ^ { 2 } G \frac { \varepsilon _ { \mathrm { L } } ^ { T } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } } \\ & { + \frac { \left( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } \right) \lambda ^ { 2 } } { \mu } \Bigg [ \frac { \varepsilon _ { \mathrm { L } } ^ { T } - \varepsilon _ { \mathrm { G } } ^ { T } } { \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } } \Psi _ { 1 } + \left( \frac { \varepsilon _ { \mathrm { L } } ^ { T - 1 } - \varepsilon _ { \mathrm { G } } ^ { T - 1 } } { \varepsilon _ { \mathrm { G } } } - \frac { \varepsilon _ { \mathrm { L } } ^ { T - 1 } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } \right) \frac { \varphi _ { \mathrm { L } } T } { \varepsilon _ { \mathrm { G } } - 1 } \Bigg ] ; } \end{array}
$$
When εL = εG,
$$
\begin{array} { r l } & { \mathbb { E } \left[ \| \widetilde { \boldsymbol { \varpi } } _ { n } ^ { T } - \boldsymbol { \varpi } _ { n } ^ { * } \| ^ { 2 } \right] \leq \varepsilon _ { \mathrm { L } } ^ { T } \Psi _ { 2 } + \left( 1 + \lambda ^ { 2 } \right) \eta _ { \mathrm { L } } ^ { 2 } G \frac { \varepsilon _ { \mathrm { L } } ^ { T } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } } \\ & { + \frac { \left( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } \right) \lambda ^ { 2 } } { \mu } \Bigg [ T \varepsilon _ { \mathrm { L } } ^ { T - 1 } \Psi _ { 1 } + \left( ( T - 1 ) \varepsilon _ { \mathrm { L } } ^ { T - 1 } - \frac { \varepsilon _ { \mathrm { L } } ^ { T - 1 } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } \right) \frac { \varphi _ { \mathrm { L } } T } { \varepsilon _ { \mathrm { L } } - 1 } \Bigg ] , } \end{array}
$$
where, for conciseness, $\Psi _ { 2 } = \parallel \varpi _ { n } ^ { 0 } - \varpi _ { n } ^ { * } \parallel ^ { 2 }$ .
Proof:
# See Appendix C.
where $\begin{array} { r } { G \triangleq \Big ( \big ( 1 - \frac { \lambda } { 2 } \big ) G _ { 0 } + \lambda \Big ( \frac { G _ { 0 } } { \mu } + M \Big ) \Big ) ^ { 2 } } \end{array}$ , and $\varepsilon _ { \mathrm { L } } = 1 -$ $\eta _ { \mathrm { L } } \left( \left( 1 - \frac { \lambda } { 2 } \right) \mu + \dot { \lambda } \right) + \eta _ { \mathrm { L } }$ . Clearly, $\varepsilon _ { \mathrm { L } }$ increases with $\lambda$ when $\mu > 2$ .
Proof:
See Appendix A.
Lemma 1 can be verified in two special cases. One is that PFL is only executed locally $( \lambda = 0 )$ ). By Lemma 1, we have
$$
\begin{array} { r } { \mathbb { E } \left[ \| \widetilde { \pmb { \varpi } } _ { n } ^ { t + 1 } - { \pmb { \varpi } } _ { n } ^ { * } \| ^ { 2 } \right] \leq \varepsilon _ { \mathrm { L } } \mathbb { E } \left[ \| \widetilde { \pmb { \varpi } } _ { n } ^ { t } - { \pmb { \varpi } } _ { n } ^ { * } \| ^ { 2 } \right] + \eta _ { \mathrm { L } } ^ { 2 } G , } \end{array}
$$
which depends only on the $\mathrm { F L }$ local training. The obtained personalized models are obviously unaffected by the DP noise.
Another special case involves no $\mathrm { P L }$ , i.e., $\lambda = 2$ . Then, $\pmb { \varpi } _ { n } ^ { * } = \underset { \pmb { \varpi } _ { n } } { \arg \operatorname* { m i n } } \parallel \pmb { \varpi } _ { n } - \omega ^ { * } \parallel ^ { 2 }$ , i.e., $\varpi _ { n } ^ { * } = \omega ^ { * }$ . Hence,
$$
\begin{array} { r l } & { \mathbb { E } \bigg [ \| \widetilde { \mathbf { \varpi } } \widetilde { \pmb { \varpi } } _ { n } ^ { t + 1 } - { \boldsymbol { \omega } } ^ { * } \| ^ { 2 } \bigg ] \leq \varepsilon _ { \mathrm { L } } \mathbb { E } \left[ \| \widetilde { \mathbf { \varpi } } \widetilde { \pmb { \omega } } _ { n } ^ { t } - { \boldsymbol { \omega } } ^ { * } \| ^ { 2 } \right] + 5 \eta _ { \mathrm { L } } ^ { 2 } G } \\ & { \quad + \frac { 1 6 \eta _ { \mathrm { L } } ^ { 2 } + 8 \eta _ { \mathrm { L } } } { \mu } \mathbb { E } \left[ F \left( \widetilde { \pmb { \omega } } ^ { t } \right) - F \left( { \boldsymbol { \omega } } ^ { * } \right) \right] } \end{array}
$$
$$
\leq ( \varepsilon _ { \mathrm { L } } + 5 \eta _ { \mathrm { L } } ^ { 2 } ) ( \frac { G _ { 0 } } { \mu } + M ) ^ { 2 } + \frac { 1 6 \eta _ { \mathrm { L } } ^ { 2 } + 8 \eta _ { \mathrm { L } } } { \mu } \mathbb { E } \Big [ F \Big ( \widetilde { \omega } ^ { t } \Big ) - F ( \omega ^ { * } ) \Big ] ,
$$
where (12b) is obtained by substituting (39) into (12a). As revealed in (12b), the convergence of PL depends only on the $\mathrm { F L }$ and hence the DP in PFL.
Lemma 2. Given $\begin{array} { r } { F \left( \omega \right) - F \left( \omega ^ { * } \right) \leq \frac { 1 } { 2 \mu } \parallel \nabla F \left( \omega \right) \parallel ^ { 2 } } \end{array}$ under Assumption $^ { 1 }$ , the expectation of the difference between the $F L$ model in the $t$ -th communication round, i.e., $\widetilde { \boldsymbol { \omega } } ^ { t + 1 }$ , and the optimal global model $\omega ^ { \ast }$ is upper bounded by
$\begin{array} { r } { \mathbb { E } \left[ F ( \widetilde { \pmb { \omega } } ^ { t + 1 } ) - F \left( \pmb { \omega } ^ { * } \right) \right] \leq \varepsilon _ { \mathrm { G } } \mathbb { E } \left[ F ( \widetilde { \pmb { \omega } } ^ { t } ) - F \left( \pmb { \omega } ^ { * } \right) \right] + \varphi _ { \mathrm { L } } T , } \end{array}$ where $\begin{array} { r } { \varphi _ { \mathrm { L } } = \sigma _ { z } ^ { 2 } \frac { d L } { 2 T N ^ { 2 } } = \frac { \Delta s ^ { 2 } L d \ln ( 1 / \delta ) } { N ^ { 2 } \epsilon ^ { 2 } } } \end{array}$ ∆s2Ld ln(1/δ) based on (8).
A trade-off between convergence and privacy is revealed in (14) and (15): As the DP noise variance $\sigma _ { z } ^ { 2 }$ increases, the convergence upper bound of $\mathrm { P L }$ increases since the taenrdm $\begin{array} { r } { \sum _ { x = 0 } ^ { t - 1 } \sum _ { y = 0 } ^ { t - 1 - x } \varepsilon _ { \mathrm { L } } ^ { x } \varepsilon _ { \mathrm { G } } ^ { y } > 0 } \end{array}$ oisne ihse pbosuitnivd.e sMinocre $\varphi _ { \mathrm { { L } } } > 0$ convergence upper bound of PL depends on $\lambda$ . A larger $\lambda$ leads to a more significant impact of the DP noise on the convergence of $\mathrm { P L }$ when $\mu \geq 2$ . The reason is that, when $\mu > 2$ , $\varepsilon _ { \mathrm { L } } = 1 + \eta _ { \mathrm { L } } - \eta _ { \mathrm { L } } \left[ \mu + ( 1 - \textstyle { \frac { \mu } { 2 } } ) \lambda \right]$ increases with $\lambda$ (see Lemma 1); when $\mu = 2$ , $\varepsilon _ { \mathrm { L } }$ is independent of $\lambda$ , but the impact of DP still grows with $\lambda$ due to the coefficient $\lambda ^ { 2 }$ in (43). Note that Theorem 1 holds under DP noises with other distributions, since the DP noise has no impact on Lemma 1 while the RHS of (13) in Lemma 2 depends only on the mean and variance $\sigma _ { z } ^ { 2 }$ of the DP noise.
On the other hand, the impact of DP on the convergence of PL may not always intensify with $\lambda$ when $\mu < 2$ , because the sign of the first derivative of $\begin{array} { r } { \lambda ^ { 2 } \left( \frac { \varepsilon _ { \mathrm { L } } ^ { t } - \varepsilon _ { \mathrm { G } } ^ { t } } { \frac { \varepsilon _ { \mathrm { L } } } { \varepsilon _ { \mathrm { G } } } - 1 } - \frac { \varepsilon _ { \mathrm { L } } ^ { t } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } \right) } \end{array}$ in (43b) and $\begin{array} { r } { \lambda ^ { 2 } \left( t \varepsilon _ { \mathrm { L } } ^ { t } - \frac { \varepsilon _ { \mathrm { L } } ^ { t } - 1 } { \varepsilon _ { \mathrm { L } } - 1 } \right) } \end{array}$ in (44b) with respect to $\lambda$ depends on $\eta _ { \mathrm { L } } , \varepsilon _ { \mathrm { G } }$ , and $t$ when $\mu < 2$ .
# B. Optimization for Convergence
Let $h ( T , \lambda )$ denote the convergence upper bound of $\mathrm { P L }$ after the $T$ aggregation rounds of $\mathrm { F L }$ , given the privacy budget.
1) When $\varepsilon _ { \mathrm { L } } \neq \varepsilon _ { \mathrm { G } }$
$h ( T , \lambda )$ is given by the right-hand side (RHS) of (14). After
reorganization, we have
$$
\begin{array} { r l } & { \displaystyle h ( T , \lambda ) = \left( \Psi _ { 2 } + \frac { \beta \Psi _ { 1 } } { \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } } - \eta _ { \mathrm { L } } ^ { 2 } \left( 1 + \lambda ^ { 2 } \right) \frac { G } { 1 - \varepsilon _ { \mathrm { L } } } - \frac { \beta \varphi _ { \mathrm { L } } } { \left( 1 - \varepsilon _ { \mathrm { L } } \right) \left( \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } \right) } T \right) \varepsilon _ { \mathrm { L } } ^ { T } + } \\ & { \quad \quad \Big ( - \frac { \beta \Psi _ { 1 } } { \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } } + \frac { \beta \varphi _ { \mathrm { L } } } { \left( 1 - \varepsilon _ { \mathrm { G } } \right) \left( \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } \right) } T \Big ) \varepsilon _ { \mathrm { G } } ^ { T } + } \\ & { \quad \quad \quad \frac { \beta \varphi _ { \mathrm { L } } } { \left( 1 - \varepsilon _ { \mathrm { G } } \right) \left( 1 - \varepsilon _ { \mathrm { L } } \right) } T + \eta _ { \mathrm { L } } ^ { 2 } \left( 1 + \lambda ^ { 2 } \right) \frac { G } { 1 - \varepsilon _ { \mathrm { L } } } \qquad ( 1 6 \lambda ^ { 2 } ) \frac { \left( \beta \varphi _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } \right) } { \left( 1 - \varepsilon _ { \mathrm { G } } \right) \left( 1 - \varepsilon _ { \mathrm { G } } \right) } } \end{array}
$$
$$
= ( H _ { 1 } + H _ { 2 } T ) \varepsilon _ { \mathrm { L } } ^ { T } + ( H _ { 3 } + H _ { 4 } T ) \varepsilon _ { \mathrm { G } } ^ { T } + H _ { 5 } T + H _ { 6 } \ ,
$$
where 4η2Lλ2+2ηLλ2 , $\begin{array} { r } { { \cal H } _ { 1 } \ = \ \Psi _ { 2 } + \frac { \beta \Psi _ { 1 } } { \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } } \ - \ \eta _ { \mathrm { L } } ^ { 2 } \big ( 1 { + } \lambda ^ { 2 } \big ) \frac { G } { 1 - \varepsilon _ { \mathrm { L } } } } \end{array}$ : $\begin{array} { r } { H _ { \mathrm { 2 } } ~ = ~ - \frac { \beta \varphi _ { \mathrm { L } } ^ { \ r } } { ( 1 - \varepsilon _ { \mathrm { L } } ) ( \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } ) } } \end{array}$ $\begin{array} { r } { H _ { \mathrm { 3 } } ~ = ~ - \frac { \beta \Psi _ { \mathrm { 1 } } } { \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } } } \end{array}$ $\begin{array} { r } { H _ { 4 } ^ { \mathrm { ~ - ~ } } = \frac { \beta \varphi _ { \mathrm { L } } } { ( 1 - \varepsilon _ { \mathrm { G } } ) ( \varepsilon _ { \mathrm { L } } - \varepsilon _ { \mathrm { G } } ) } } \end{array}$ , $\begin{array} { r } { H _ { 5 } = \frac { \beta \varphi _ { \mathrm { L } } } { ( 1 - \varepsilon _ { \mathrm { G } } ) ( 1 - \varepsilon _ { \mathrm { L } } ) } > 0 } \end{array}$ , and $\begin{array} { r } { H _ { 6 } = \eta _ { \mathrm { L } } ^ { 2 } \left( 1 + \lambda ^ { 2 } \right) \frac { G } { 1 - \varepsilon _ { \mathrm { L } } } > 0 } \end{array}$ .
Let $H _ { 0 }$ denote the lower bound of $( H _ { 1 } + \bar { H _ { 2 } } T ) \varepsilon _ { \mathrm { L } } ^ { T } + ( H _ { 3 } +$ $H _ { 4 } T ) \varepsilon _ { \mathrm { { G } } } ^ { T } + H _ { 6 }$ in (16b). Then, $h ( T , \lambda )$ has a linear lower bound, denoted by $h _ { \mathrm { L o w } } ( T )$ ; that is
$$
\begin{array} { r } { h ( T , \lambda ) \geq h _ { \mathrm { L o w } } ( T ) = H _ { 0 } + H _ { 5 } T , ~ } \\ { \mathrm { w i t h } \left( H _ { 1 } + H _ { 2 } T \right) \varepsilon _ { \mathrm { L } } ^ { T } + ( H _ { 3 } + H _ { 4 } T ) \varepsilon _ { \mathrm { G } } ^ { T } + H _ { 6 } \geq H _ { 0 } , ~ } \end{array}
$$
1) When $\varepsilon _ { \mathrm { L } } > \varepsilon _ { \mathrm { G } }$ , $H _ { 2 } < 0$ . Then, $H _ { 2 } T \varepsilon _ { \mathrm { { L } } } ^ { T }$ first decreases and then increases in $T$ . There exists $\operatorname* { m i n } _ { T } ( H _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } )$ . Also, $H _ { 3 } < 0$ . $H _ { 3 } \varepsilon _ { \mathrm { G } } ^ { T } \geq H _ { 3 }$ since $H _ { 3 } \varepsilon _ { \mathrm { G } } ^ { T }$ is an increasing function of $T$ . $H _ { 4 } > 0$ ; thus, $H _ { 4 } T \varepsilon _ { \mathrm { { G } } } ^ { T } \geq 0$ . Considering two possible cases concerning $H _ { 1 } , H _ { 0 }$ is given by a) If $H _ { 1 } \geq 0$ , then $H _ { 0 } = \operatorname* { m i n } _ { T } ( H _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + H _ { 3 } + H _ { 6 }$ ; b) If $H _ { 1 } < 0$ , then $H _ { 0 } = H _ { 1 } + \operatorname* { m i n } _ { T } ( H _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + H _ { 3 } +$ $H _ { 6 }$ .
2) When $\varepsilon _ { \mathrm { L } } < \varepsilon _ { \mathrm { G } }$ , $H _ { 2 } > 0$ . Hence, $H _ { 2 } T \varepsilon _ { \mathrm { { L } } } ^ { T } \geq 0$ . Also, $H _ { 3 } \ > \ 0$ and hence $H _ { 3 } \varepsilon _ { \mathrm { G } } ^ { T } > 0$ . Moreover, $H _ { 4 } ~ < ~ 0$ and thus $H _ { 4 } T \varepsilon _ { \mathrm { G } } ^ { T }$ first decreases and then increases with respect to $T$ , with the existence of $\operatorname* { m i n } _ { T } ( H _ { 4 } T \varepsilon _ { \mathrm { L o w } } ^ { T } )$ confirmed. Considering two possible cases concerning $H _ { 1 } , H _ { 0 }$ can be given by a) If $H _ { 1 } \geq 0$ , then $H _ { 0 } = \operatorname* { m i n } _ { T } ( H _ { 4 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + H _ { 6 }$ ; b) If $H _ { 1 } < 0$ , then $H _ { 0 } = H _ { 1 } + \operatorname* { m i n } _ { T } ( H _ { 4 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + H _ { 6 } ,$
As illustrated in Fig. 2, given $\lambda$ , the optimal $T$ , denoted by $T ^ { * }$ , which minimizes $h ( T , \lambda )$ , is within $( 0 , T ^ { \prime } )$ , where $T ^ { \prime }$ satisfies $h _ { \mathrm { L o w } } ( T ^ { \prime } ) = h ( 0 , \lambda )$ , i.e., $\begin{array} { r } { T ^ { \prime } = \frac { H _ { 1 } + H _ { 3 } + H _ { 6 } - H _ { 0 } } { H _ { 5 } } } \end{array}$ . This is because $h ( T , \lambda ) > h _ { \mathrm { L o w } } ( T ^ { \prime } ) = h ( 0 , \lambda ) \ge \mathbf { \bar { \mathfrak { h } } } ( T ^ { * } , \lambda )$ , $\forall T >$ $T ^ { \prime }$ . As a result, $T ^ { * }$ can be obtained effectively using a onedimensional search with a step size of 1 within $( 0 , T ^ { \prime } )$ .
2) When $\varepsilon _ { \mathrm { L } } = \varepsilon _ { \mathrm { G } }$ $h ( T , \lambda )$ is given by the RHS of (15):
$$
\begin{array} { r l } & { h ( T , \lambda ) = \Big ( - \frac { \big ( 1 + \lambda ^ { 2 } \big ) \eta _ { \mathrm { L } } ^ { 2 } G } { 1 - \varepsilon _ { \mathrm { L } } } + \Psi _ { 2 } + \frac { \big ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } \big ) \lambda ^ { 2 } } { \varepsilon _ { \mathrm { L } } \mu } \cdot } \\ & { \qquad \frac { \Psi _ { 1 } \big ( 1 - \varepsilon _ { \mathrm { L } } \big ) ^ { 2 } - \varepsilon _ { \mathrm { L } } \varphi _ { \mathrm { L } } } { \big ( 1 - \varepsilon _ { \mathrm { L } } \big ) ^ { 2 } } T - \frac { \big ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } \big ) \lambda ^ { 2 } \varphi _ { \mathrm { L } } } { \varepsilon _ { \mathrm { L } } \mu \big ( 1 - \varepsilon _ { \mathrm { L } } \big ) } T ^ { 2 } \Big ) \varepsilon _ { \mathrm { L } } ^ { T } } \\ & { \qquad - \frac { \big ( 1 + \lambda ^ { 2 } \big ) \eta _ { \mathrm { L } } ^ { 2 } G } { 1 - \varepsilon _ { \mathrm { L } } } - \frac { \big ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } \big ) \lambda ^ { 2 } } { \mu \big ( 1 - \varepsilon _ { \mathrm { L } } \big ) } \qquad ( 1 8 \mathfrak { L } } \\ & { \qquad = \big ( \mathcal { H } _ { 1 } + \mathcal { H } _ { 2 } T + \mathcal { H } _ { 3 } T ^ { 2 } \big ) \varepsilon _ { \mathrm { L } } ^ { T } + \mathcal { H } _ { 4 } , } \end{array}
$$
where, for the brevity of notation, $\begin{array} { r } { \mathcal { H } _ { 1 } = - \frac { \left( 1 + \lambda ^ { 2 } \right) \eta _ { \mathrm { L } } ^ { 2 } G } { 1 - \varepsilon _ { \mathrm { L } } } + \Psi _ { 2 } } \end{array}$ $\begin{array} { r } { \mathcal { H } _ { 2 } = \frac { ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } ) \lambda ^ { 2 } } { \varepsilon _ { \mathrm { L } } \mu } ( \Psi _ { 1 } - \frac { \varepsilon _ { \mathrm { L } } \varphi _ { \mathrm { L } } } { ( 1 - \varepsilon _ { \mathrm { L } } ) ^ { 2 } } ) } \end{array}$ , $\begin{array} { r } { \mathrm { \mathcal { H } _ { 3 } } = - \frac { ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \overline { { \eta } } _ { \mathrm { L } } ) \lambda ^ { 2 } \varphi _ { \mathrm { L } } } { \varepsilon _ { \mathrm { L } } \mu ( 1 - \varepsilon _ { \mathrm { L } } ) } < } \end{array}$ $\begin{array} { r } { \mathcal { H } _ { 4 } = - \frac { \left( 1 + \lambda ^ { 2 } \right) \eta _ { \mathrm { L } } ^ { 2 } G } { 1 - \varepsilon _ { \mathrm { L } } } - \frac { ( 4 \eta _ { \mathrm { L } } ^ { 2 } + 2 \eta _ { \mathrm { L } } ) \lambda ^ { 2 } } { \mu ( 1 - \varepsilon _ { \mathrm { L } } ) } < 0 } \end{array}$ .
Let $\mathcal { H } _ { 0 }$ denote the lower bound of $\dot { ( \mathcal { H } _ { 1 } + \mathcal { H } _ { 2 } T ) } \varepsilon _ { \mathrm { L } } ^ { T } + \mathcal { H } _ { 4 }$ in (18b). Then, the lower bound of $h ( T , \lambda )$ is given by
$$
h ( T , \lambda ) \geq h _ { \mathrm { L o w } } ( T ) = \mathcal { H } _ { 0 } + \mathcal { H } _ { 3 } T ^ { 2 } \varepsilon _ { \mathrm { L } } ^ { T } ,
$$
FIGURE 2. Comparison between $h ( T , \lambda )$ and its lower bound $h _ { \mathrm { L o w } } ( T )$ , where the simulation parameters are as specified for the MLR model trained on the MNIST dataset, i.e., $\lambda = 0 . 1$ , $\epsilon = 1 0 0$ , and $\delta = 0 . 0 1$ ; see Section VI-B.
Since $\mathcal { H } _ { 3 } < 0$ , $\mathcal { H } _ { 3 } T ^ { 2 } \varepsilon _ { \mathrm { L } } ^ { T }$ first decreases and then increases.
1) When $\mathcal { H } _ { 2 } ~ < ~ 0 \$ , $\mathcal { H } _ { 2 } T \varepsilon _ { \mathrm { L } } ^ { T }$ first decreases and then increases with respect to $T$ . Then, $\operatorname* { m i n } _ { T } ( \mathcal { H } _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } )$ exists. Considering two possible cases concerning $\mathcal { H } _ { 1 } , \mathcal { H } _ { 0 }$ is given by a) If $\mathcal { H } _ { 1 } \geq 0$ , then $\mathcal { H } _ { 0 } = \operatorname* { m i n } _ { T } ( \mathcal { H } _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + \mathcal { H } _ { 4 }$ ; b) If $\mathcal { H } _ { 1 } < 0$ , then $\mathcal { H } _ { 0 } = \mathcal { H } _ { 1 } + \operatorname* { m i n } _ { T } ( \mathcal { H } _ { 2 } T \varepsilon _ { \mathrm { L o w } } ^ { T } ) + \mathcal { H } _ { 4 } .$ .
2) When $\mathcal { H } _ { 2 } ~ \geq ~ 0$ , $\mathcal { H } _ { 2 } T \varepsilon _ { \mathrm { L } } ^ { T } \ \ge \ 0$ . Considering the two possible cases concerning $\mathcal { H } _ { 1 } , \mathcal { H } _ { 0 }$ is given by a) If $\mathcal { H } _ { 1 } \geq 0$ , then $\mathcal { H } _ { 0 } = \mathcal { H } _ { 4 }$ ; b) If $\mathcal { H } _ { 1 } < 0$ , then $\mathcal { H } _ { 0 } = \mathcal { H } _ { 1 } + \mathcal { H } _ { 4 }$ .
Similarly, the optimal $T ^ { * }$ under a given $\lambda$ can be obtained using a one-dimensional search with a step size of 1 within $( 0 , T ^ { \prime \prime } )$ , where $T ^ { \prime \prime }$ satisfies $h _ { \mathrm { L o w } } ( T ^ { \prime \prime } ) = h ( 0 , \lambda )$ .
In both cases, clearly, the convergence of PL depends heavily on that of $\mathrm { F L }$ and, in turn, the DP in PFL.
# V. Fairness Analysis of Privacy-Preserving PFL
In this section, we analyze the fairness of PL under privacy constraints and uncover an opportunity to maximize the fairness by optimizing $\lambda$ . We focus on the fairness of performance distribution to measure the degree of uniformity in performance across the clients. Several fairness metrics have been adopted in FL, including distance (e.g., cosine distance [44] and Euclidean distance [45]), variance [1], risk difference (e.g., demographic parity [46] and equal opportunity [47]), and Jain’s fairness index (JFI) [48]. Among these metrics, variance and JFI are suitable for measuring the fairness of performance distribution for FL/PFL. This is because distance metrics are suitable for the contribution fairness of participants, while risk difference metrics are adequate for group fairness to eliminate prejudice toward some specific groups. While variance and JFI are typically negatively correlated, variance is more sensitive to outliers. For this reason, we adopt variance as the metric of fairness to encourage uniformly distributed performance across the
PL models. The definition of fairness measured by variance is provided as follows.
Definition 1 (Fairness [1]). For a group of personalized models $\{ \varpi _ { n } \} _ { n \in \mathbb { N } } ,$ , the performance distribution fairness is measured by
$$
\varrho ( \pmb { \varpi } _ { n } ) = \mathrm { v a r } _ { N } \left[ F _ { n } \left( \pmb { \varpi } _ { n } \left( \lambda \right) \right) \right] ,
$$
where $\operatorname { v a r } _ { N } \left[ F _ { n } \left( \varpi _ { n } \left( \lambda \right) \right) \right]$ is the variance across the local training losses of the personalized models of all clients. $A$ set of models $\{ \varpi _ { n } ^ { \prime } \} _ { n \in \mathbb { N } }$ is fairer than another set $\{ \varpi _ { n } \} _ { n \in \mathbb { N } }$ $\displaystyle { \mathrm { ~ \it ~ f ~ } \varrho ^ { \prime } = \mathrm { v a r } _ { N } \{ F _ { n } ( \varpi _ { n } ^ { \prime } ) \} _ { n \in \mathbb { N } } < \varrho }$ . The optimal $\lambda$ , denoted by $\lambda ^ { * }$ , is defined as
$$
\lambda ^ { * } = \arg \operatorname* { m i n } _ { \lambda } \mathbb { E } \left\{ \varrho ( \varpi _ { n } ^ { * } ( \lambda ) ) \right\} ,
$$
where, given $\lambda$ , $\varpi _ { n } ^ { * } \left( \lambda \right)$ is client $n$ ’s optimal $P L$ model.
# A. Personalized Bayesian Linear Regression With DP
It is generally different to analyze the performance fairness of PFL, since the performance, e.g., loss, of PFL is typically analytically unavailable and can only be empirically obtained a-posteriori after PFL training. With reference to Ditto [1], we consider that each client trains a personalized Bayesian linear regression model to shed useful light on the fairness of general ML models. Bayesian linear regression models treat regression coefficients and the disturbance variance as random variables [49]. We set the optimal $\mathrm { F L }$ global model $\omega ^ { \ast }$ as the non-information prior on $\mathbb { R } ^ { d }$ , i.e., uniformly distributed on Rd.
Suppose the optimal $\mathrm { F L }$ local model $\boldsymbol { u } _ { n } ^ { * }$ of client $n$ is distributed around the optimal $\mathrm { F L }$ global model $\omega ^ { \ast }$ :
$$
\begin{array} { r } { { \pmb u } _ { n } ^ { * } = { \pmb w } ^ { * } + { \pmb \tau } _ { n } , } \end{array}
$$
where $\boldsymbol { \tau } _ { n } \sim \mathcal { N } ( 0 , \zeta ^ { 2 } \mathbf { I } _ { d } ) ,$ $\forall n$ are i.i.d. random variables, and $\mathbf { I } _ { d }$ is the $d \times d$ identity matrix. The local data of client $n$ satisfies
$$
\begin{array} { r } { \mathbf { Y } _ { n } = \mathbf { X } _ { n } \pmb { u } _ { n } ^ { * } + \pmb { \nu } _ { n } . } \end{array}
$$
Here, $\mathbf { X } _ { n } \in \mathbb { R } ^ { b \times d }$ and $\mathbf { Y } _ { n } \in \mathbb { R } ^ { b }$ denote $b$ samples of client $n$ , and $\nu _ { n } \in \mathbb { R } ^ { b }$ under the assumption of $\pmb { \nu } _ { n } \sim \mathcal { N } ( 0 , \sigma ^ { 2 } \mathbf { I } _ { b } )$ .
Given the samples $( \mathbf { X } _ { n } , \mathbf { Y } _ { n } )$ , the loss function of the Bayesian linear regression problem at client $n$ is written as
$$
F _ { n } \left( { \pmb u } _ { n } \right) = \frac { 1 } { b } \parallel { \bf X } _ { n } { \pmb u } _ { n } - { \bf Y } _ { n } \parallel ^ { 2 } .
$$
By minimizing (24), the estimate of $\pmb { u } _ { n } ^ { * }$ is [1, Eq. (12)]
$$
\begin{array} { r } { \hat { { \boldsymbol { u } } } _ { n } = \left( \mathbf { X } _ { n } ^ { \top } \mathbf { X } _ { n } \right) ^ { - 1 } \mathbf { X } _ { n } ^ { \top } \mathbf { Y } _ { n } . } \end{array}
$$
By plugging (24) into (3b), the optimal global model is
$$
\begin{array} { l } { { \displaystyle \boldsymbol \omega ^ { * } = \arg \operatorname* { m i n } _ { \omega } \frac { 1 } { N b } \sum _ { n = 1 } ^ { N } \| \mathbf { X } _ { n } \boldsymbol \omega - \mathbf { Y } _ { n } \| ^ { 2 } } } \\ { { \displaystyle ~ = \sum _ { n = 1 } ^ { N } \left( \mathbf { X } ^ { \mathsf { T } } \mathbf { X } \right) ^ { - 1 } \mathbf { X } _ { n } ^ { \mathsf { T } } \mathbf { X } _ { n } \hat { \boldsymbol u } _ { n } } , } \end{array}
$$
where $\mathbf { X } \in \mathbb { R } ^ { N b \times d }$ collects $N b$ samples of all clients, i.e., $\mathbf { X } = ( \mathbf { X } _ { 1 } ^ { \top } \mathbf { X } _ { 2 } ^ { \top } \cdot \cdot \cdot \mathbf { X } _ { N } ^ { \top } ) ^ { \top }$ .
With DP, the server receives the perturbed version of the estimated local model, denoted by $\widetilde { \pmb u } _ { n }$ , as given by
$$
\widetilde { \pmb { u } } _ { n } = \hat { \pmb { u } } _ { n } + \mathbf { z } _ { n } = \left( \mathbf { X } _ { n } ^ { \mathsf { T } } \mathbf { X } _ { n } \right) ^ { - 1 } \mathbf { X } _ { n } ^ { \mathsf { T } } \mathbf { Y } _ { n } + \mathbf { z } _ { n } .
$$
By replacing $\hat { \pmb u } _ { n }$ with $\widetilde { \pmb u } _ { n }$ , the DP perturbed version of the optimal global model, denoted by ω∼ , is given by
$$
\tilde { \boldsymbol { \omega } } ^ { * } = \sum _ { n = 1 } ^ { N } \left( \mathbf { X } ^ { \intercal } \mathbf { X } \right) ^ { - 1 } \mathbf { X } _ { n } ^ { \intercal } \mathbf { X } _ { n } \tilde { \boldsymbol { u } } _ { n } ,
$$
Lemma 3. With $\mathbf { X } _ { n } ^ { \top } \mathbf { X } _ { n } = \rho \mathbf { I } _ { d } .$ , $\forall n \in \mathbb { N }$ under independently sampled or generated data samples $\it l I J ,$ , the optimal $P L$ model with DP, $\varpi _ { n } ^ { * } \left( \lambda \right)$ , and the optimal $F L$ local model, $\pmb { u } _ { n } ^ { * }$ in (22), can be written as
$$
\begin{array} { r l r } { { \tilde { \bar { \boldsymbol { \varpi } } } _ { n } ^ { * } ( \lambda ) = \frac { b } { ( 2 - \lambda ) \rho + b \lambda } ( ( \frac { ( 2 - \lambda ) \rho } { b } + \frac { \lambda } { N } ) \hat { \pmb { u } } _ { n } } } \\ & { } & { \quad + \frac { \lambda } { N } \sum _ { m \in \mathbb { N } , m \ne n } \hat { \pmb { u } } _ { m } + \frac { \lambda } { N } \sum _ { n = 1 } ^ { N } \mathbf { z } _ { n } ) ; } \end{array}
$$
$$
\pmb { u } _ { n } ^ { * } = \frac { \sigma _ { w } ^ { 2 } \rho } { \sigma ^ { 2 } } \hat { \pmb { u } } _ { n } + \frac { \sigma _ { w } ^ { 2 } \rho } { \sigma ^ { 2 } + N \zeta ^ { 2 } \rho } \sum _ { m \in \mathbb { N } , m \neq n } \hat { \pmb { u } } _ { m } + \pmb { \vartheta } _ { n } ,
$$
# Proof:
# See Appendix D.
We note that in Ditto, the optimal personalized models $\varpi _ { n } ^ { * } ( \lambda ) , \forall n \in \mathbb { N }$ are deterministic, provided the estimated local models $\hat { \pmb u } _ { n }$ $( \forall n \in \mathbb { N } )$ and $\lambda$ . By contrast, in DPDitto, the PL models $\widetilde { \varpi } _ { n } ^ { * } \left( \lambda \right)$ , $\forall n \in \mathbb { N }$ are not deterministic. This is because the DP noise, ${ \bf z } _ { n }$ , is added on the estimated local models $\hat { \pmb u } _ { n }$ for the $\mathrm { F L }$ global model updating and, consequently, it is coupled with $\lambda$ in $\widetilde { \varpi } _ { n } ^ { * } \left( \lambda \right)$ ; see (29). Considering the difference between the optimal FL local model $\boldsymbol { { \boldsymbol { u } } } _ { n } ^ { * }$ in (30) and its estimate $\hat { \pmb u } _ { n }$ is random and captured by $\vartheta _ { n }$ , we have to analyze the joint distribution of the two random variables ${ \bf z } _ { n }$ in (29) and $\vartheta _ { n }$ in (30), $\forall n \in \mathbb { N }$ , and obtain the fairness expression with respect to $\lambda$ .
Given the optimal FL local model without DP, $\pmb { u } _ { n } ^ { * }$ , and the optimal PL model with DP, $\widetilde { \varpi } _ { n } ^ { * } ( \lambda )$ , the fairness $\varrho ( \breve { \varpi } _ { n } ^ { * } ( \lambda ) )$ of the PL models among all clients is established, as follows.
Theorem 2. Given $\lambda$ and the variance $\sigma _ { z } ^ { 2 }$ of the $D P$ noise, the fairness of the personalized Bayesian linear regression model, $R ( \lambda )$ , can be measured by
$$
\begin{array} { r l r } & { } & { R ( \lambda ) = 2 d \left[ \sigma _ { w } ^ { 2 } + \left[ \alpha _ { 0 } \left( \lambda \right) \right] ^ { 2 } \frac { \sigma _ { z } ^ { 2 } } { N ^ { 2 } } \right] + 4 \left[ \sigma _ { w } ^ { 2 } + \left[ \alpha _ { 0 } \left( \lambda \right) \right] ^ { 2 } \frac { \sigma _ { z } ^ { 2 } } { N ^ { 2 } } \right] \times } \\ & { } & { \left( S _ { 1 } - S _ { 2 } \alpha _ { 0 } \left( \lambda \right) \right) ^ { 2 } G _ { 1 } + \left[ S _ { 1 } - S _ { 2 } \alpha _ { 0 } \left( \lambda \right) \right] ^ { 4 } \left( G _ { 2 } - G _ { 1 } ^ { 2 } \right) , } \end{array}
$$
# Proof:
# See Appendix E.
According to Theorem 2, DP degrades the fairness of PL, as $R ( \lambda )$ increases with $\sigma _ { z } ^ { 2 }$ . On the other hand, the dependence of fairness on $\lambda$ is much more complex, which is different from Ditto. As will be revealed later, given $\sigma _ { z } ^ { 2 }$ , fairness depends on not only $\lambda$ but also the model clipping threshold $C$ . The uniqueness of the optimal $\lambda$ , i.e., $\lambda ^ { * }$ , can be ascertained when $C$ is sufficiently small. Note that Theorem 2 holds under DP noises with other distributions, since the fairness $R ( \lambda )$ depends only on the mean and variance of the DP noises, according to Definition 1.
# B. Convergence-Privacy-Fairness Trade-off
We analyze the existence of the optimal $\lambda ^ { * }$ and $T ^ { * }$ to balance the trade-off between the convergence, privacy, and fairness of DP-Ditto. For conciseness, we rewrite $\alpha _ { 0 } \left( \lambda \right)$ and $R ( \lambda )$ as $\alpha _ { 0 }$ and $R$ , respectively.
Theorem 3. Given the $D P$ noise variance $\sigma _ { z } ^ { 2 }$ , the optimal $\lambda ^ { * }$ , which maximizes $R ( \lambda )$ , exists and is unique when the model clipping threshold $\begin{array} { r } { \dot { C } < \frac { \sqrt { d } } { 2 N S _ { 1 } } } \end{array}$ 2N√dS . λ∗ ∈ [0, 2] satisfies
$$
\begin{array} { c } { { 4 d \displaystyle \frac { \sigma _ { z } ^ { 2 } } { N ^ { 2 } } \alpha _ { 0 } ^ { * } { + 8 G _ { 1 } } \displaystyle \frac { \sigma _ { z } ^ { 2 } } { N ^ { 2 } } ( S _ { 1 } { - } S _ { 2 } \alpha _ { 0 } ^ { * } ) ^ { 2 } \alpha _ { 0 } ^ { * } { - 8 } S _ { 2 } G _ { 1 } \left( \sigma _ { w } ^ { 2 } { + } [ \alpha _ { 0 } ^ { * } ] ^ { 2 } \displaystyle \frac { \sigma _ { z } ^ { 2 } } { N ^ { 2 } } \right) . } } \\ { { \left( S _ { 1 } - S _ { 2 } \alpha _ { 0 } ^ { * } \right) - 4 S _ { 2 } \left( G _ { 2 } - G _ { 1 } ^ { 2 } \right) \left( S _ { 1 } - S _ { 2 } \alpha _ { 0 } ^ { * } \right) ^ { 3 } = 0 , \quad \left( 3 2 \right. } } \end{array}
$$
where α0∗ = (2 λb∗λ)ρ+bλ∗ .
Proof:
# See Appendix F
With the privacy consideration, we jointly optimize $\lambda$ and $T$ to improve the trade-off between the convergence, privacy, and fairness of DP-Ditto. From (16) and (32), $\lambda ^ { * }$ and $T ^ { * }$ satisfy
$$
\operatorname* { m i n } _ { \lambda , T } h ( T , \lambda ) , \quad s . t . \ ( 3 2 ) ,
$$
which can be solved through an iterative search. For $T$ , a one-dimensional search can be carried out. Given the aggregation round number $T$ , (32) can be solved analytically, e.g., using the Cardano method [50]. The optimal $\lambda ^ { * }$ depends on $\sigma _ { z } ^ { 2 }$ .
Corollary 1. The optimal $\lambda ^ { * }$ , which minimizes the fairness measure $R ( \lambda )$ , decreases as the $D P$ noise variance $\sigma _ { z } ^ { 2 }$ increases (i.e., the privacy budget ϵ decreases).
# Proof:
# See Appendix G.
For a given $T$ , up to three feasible solutions to $\lambda$ can be obtained by solving (32), as (32) is a three-order polynomial equation. As revealed in Theorem 3, one of the three solutions is within [0, 2]. By comparing $h ( T , \lambda )$ among all the obtained $( T , \lambda )$ pairs, the optimal $( T ^ { * } , \lambda ^ { * } )$ can be achieved and the existence of $( T ^ { * } , \lambda ^ { * } )$ is guaranteed. The complexity of this iterative search is determined by the one-dimensional search for $T$ and the Cardano method for solving (32)
under each given $T$ . The worst-case complexity of the onedimensional search with a step size of 1 is $\mathcal { O } ( T _ { \mathrm { m a x } } )$ , where $T _ { \mathrm { m a x } }$ is the maximum number of communication rounds permitted. Being an analytical method, the Cardano method provides closed-form solutions and incurs a complexity of $\mathcal { O } ( 1 )$ [51]. As a result, the overall complexity of the iterative search is $\mathcal { O } ( T _ { \mathrm { m a x } } )$ .
In a more general case, the ML model is not linear, and $\lambda$ cannot be analytically solved since there is no explicit analytical expression of $\lambda$ . Different $\lambda$ values can be tested. Per $\lambda$ , the corresponding optimal $T$ can be obtained via a onedimensional search. Given the optimal $T$ , the corresponding optimal $\lambda$ can be obtained by testing different $\lambda$ values (e.g., one-dimensional search for $\lambda$ ). We can restart the search for the optimal $T$ corresponding to the optimal $\lambda$ , so on and so forth, until convergence (i.e., the optimal $T ^ { * }$ and $\lambda ^ { * }$ stop changing), as done experimentally in Section VI.
# VI. Experiments and Results
In this section, we assess the trade-off between the convergence, accuracy, and fairness of DP-Ditto experimentally. The impact of privacy considerations on those aspects of DP-Ditto is discussed. We set $N = 2 0$ clients by default. The clipping threshold is $C = 2 0$ and the privacy budget is $\delta = 0 . 0 1$ [9]. We consider three network models, i.e., MLR, DNN, and CNN.
MLR: This classification method generalizes logistic regression to multiclass problems. It constructs a linear predictor function to predict the probability of an outcome based on an input observation. DNN: This model consists of an input layer, a fully connected hidden layer (with 100 neurons), and an output layer. The rectified linear unit (ReLU) activation function is applied to the hidden layer. CNN: This model contains two convolutional layers with 32 and 64 convolutional filters per layer, and a pooling layer between the two convolutional layers to prevent over-fitting. Following the convolutional layers are two fully connected layers. We use the ReLU in the convolutional and fully connected layers.
The learning rates of FL and $\mathrm { P L }$ are $\eta _ { \mathrm { G } } = 0 . 0 0 5$ and $\eta _ { \mathrm { { L } } } =$ 0.005, respectively.
We consider four widely used public datasets, i.e., MNIST, Fashion-MNIST (FMNIST), and CIFAR10. Cross-entropy loss is considered for the datasets. Apart from Ditto [1], the following benchmarks are considered:
pFedMe [22]: The global FL model is updated in the same way as the typical FL. Learning from the global model, each personalized model is updated based on a regularized loss function using the Moreau envelope. APPLE [25]: Each client uploads to the server a core model learned from its personalized model and downloads the other clients’ core models in each round. The personalized model is obtained by locally aggregating the core models with learnable weights.
FIGURE 3. Comparison of testing accuracy and fairness between the benchmarks with DP and DP-Ditto under the optimal $\lambda ^ { * } = 0 . 0 0 5$ , $\epsilon = 1 0$ , and $\delta = 0 . 0 1$ .
FIGURE 4. Comparison of testing accuracy and fairness with the benchmarks with DP and DP-Ditto under the optimal $\lambda ^ { * } = 0 . 0 1$ , $\epsilon = 1 0 0$ , and $\delta = 0 . 0 1$ .
• FedAMP [24]: The server has a personalized cloud model. Each client has a local personalized model. In each round, the server updates the personalized cloud models using an attention-inducing function of the uploaded local models and combination weights. Upon receiving the cloud model, each client locally updates its personalized model based on a regularized loss function.
FedALA [26]: In every round of FedALA, each client adaptively initializes its local model by aggregating the downloaded global model and the old local model with learned aggregation weights before local training.
# 1) Comparison With the State of the Art
We compare the accuracy and fairness between the proposed DP-Ditto and the benchmarks, i.e., FedAMP [24], pFedMe [22], APPLE [25], and FedALA [26], where DP noises are added to perturb the local models in the benchmarks under different $\epsilon$ values (i.e., $\epsilon = 1 0$ , 100) and datasets (i.e., MNIST, FMNIST, and CIFAR10). $\delta = 0 . 0 1$ . The DNN model is considered on the MNIST dataset. The CNN model is considered on the FMNIST and CIFAR10 datasets.
Figs. 3 and 4 plot the testing accuracy and fairness of privacy-preserving PFL with the growth of $T$ , where $\epsilon = 1 0$ and 100, respectively. DP-Ditto (with $\lambda ^ { * } ~ = ~ 0 . 0 0 5$ under $\epsilon = 1 0$ , or $\lambda ^ { * } = 0 . 0 1$ under $\epsilon = 1 0 0 \AA$ ) provides the best accuracy and fairness compared to the privacy-enhanced benchmarks (i.e., FedAMP, pFedMe, APPLE, and FedALA), since $\lambda ^ { * }$ is adapted to the DP perturbation in DP-Ditto. We note that the PL models are obtained by aggregating the downloaded models in APPLE [25] and FedALA [26], or based on a weighted downloaded global model aggregated from the previous personalized models in pFedMe [22] and FedAMP [24], both without considering privacy.
For fair comparisons with DP-Ditto, the PL models of the benchmarks are updated based on the aggregated noisy models perturbed using DP to enhance the privacy aspect of the models. Due to their limited flexibility in balancing personalization and generalization, the benchmarks are highly sensitive to DP noises, resulting in significant performance degradation. By contrast, under DP-Ditto, the impact of DP
E ε = 1 B 00 E 8 ε=10 入=1.0 ε=20 入=2.0 ε= 100 0.2 0.20 0.05
0.2L
Maximum Global AggregationTime () Maximum Global Aggregation Time (7) Global Aggregation time (t) Global Aggregation time (t) 30
(a) Train. loss vs. $T$ (DNN, MNIST) (b) Train. loss vs. $T$ (MLR, MNIST) (a) Accuracy vs. t (DNN, MNIST) (b) Fairness vs. t (DNN, MNIST)
2 1.75 0.8 0001\*90088 1.2
门 0.7 1.0 L E
0.6 0.8 0.25 0.3 0.2 0.2
Maximum Global AggregationTime () Maximum Global AggregationTime (T) 0.0 Global Aggregation time (t) 30 Global Aggregation time (t) 30
(c) Train. loss vs. $T$ (d) Train. loss vs. $T$ (CNN, FM(c) Accuracy vs. t (CNN, FMNIST) (d) Fairness vs. t (CNN, FMNIST)
(CNN,CIFAR10) NIST)
FIGURE 5. Training loss of the personalized model with respect to the 0.50 0.4
maximum global aggregation number $T$ under different $\epsilon$ values. $\delta = 0 . 0 1$ . 0.45
noise can be adjusted by properly configuring the weighting E T
coefficient $\lambda$ between personalization and generalization, 0.20 0.1
hence alleviating the adverse effect of the DP noise. 0.15 0.0
By comparing Figs. 3 and 4, we see that the testing 15 20 30 15 20 30 Global Aggregation time (t) Global Aggregation time (t)
accuracy and fairness of DP-Ditto can be maintained even
under high privacy requirements (e.g., $\epsilon = 1 0 \dot { , }$ ). This is achieved by configuring a smaller $\lambda ^ { * }$ when $\epsilon$ is smaller (i.e., $\sigma _ { u } ^ { 2 }$ is higher), which encourages the PL models to be closer to the local models and hence alleviates the adverse effect of DP noise. By contrast, the benchmarks degrade significantly when $\epsilon$ is small (e.g., $\epsilon = 1 0 \$ ) and/or $T$ is large because of their susceptibility to the DP noises. As $T$ grows, the testing accuracy decreases under FedALA and FedAMP, or increases when $T \le 5$ and then decreases under APPLE. Moreover, the testing accuracy of pFedMe decreases under the CNN model on the FMNIST and CIFAR10 datasets.
# 2) Impact of Privacy Budget
Fig. 5 evaluates the impact of $\epsilon$ and $T$ on the convergence of DP-Ditto, where $\lambda = 0 . 1$ for the DNN and MLR models on the MNIST dataset, the CNN model on the CIFAR10 dataset, and the CNN model on the FMNIST dataset. $\epsilon =$ $1 , 1 0 , 2 0 , 1 0 0$ , or $\epsilon = + \infty$ (i.e., Ditto). Figs. 5(a)–5(d) show that the training loss decreases and eventually approaches the case with no DP (i.e., $\epsilon = + \infty ,$ ), as $\epsilon$ increases. This is because a larger $\epsilon$ leads to a smaller variance of the DP noise and, consequently, the server can obtain betterquality local models from the clients. On the other hand, a smaller $\epsilon$ leads to a larger DP noise variance (i.e., $\sigma _ { u . } ^ { 2 }$ ) based on (8) and, hence, the accumulated effect of the DP noise would be significant. However, when both $\epsilon$ and $T$ are small, the training loss of PL decreases initially due to the benefits of generalization brought from FL. Once $T$ exceeds a certain critical value, the accumulated impact of the DP noises outweighs the benefits of generalization, leading to a degradation in PL performance. This observation validates the discussion in Section IV.
# 3) Impact of $T$ and $\lambda$
Figs. 6 demonstrates the testing accuracy and fairness of DP-Ditto with the growing number $t$ of aggregations under different $\lambda$ values and datasets (i.e., MNIST, FMNIST, and CIFAR10). We set $\epsilon = 1 0$ , $\delta \ : = \ : 0 . 0 1$ , $T = 3 0$ , $\lambda = 0 . 1$ , 0.5, and 1.0, as well as two special cases with $\lambda = 0$ (i.e., local training only with no global aggregation) and $\lambda = 2$ (i.e., FL with no personalization). Fig. 6(a) shows the model accuracy increases with $t$ when $\lambda < 0 . 1$ , but increases first and then decreases when $\lambda > 0 . 1$ . The reason is that the FL global model is most affected by the DP noise through the aggregations of noisy local model parameters. By contrast, the FL local training only depends on the local datasets and is unaffected by the DP noises. Moreover, $\lambda$ can be controlled to balance the accuracy between the FL global and local models. It adjusts the effect of the DP noises on the PL models.
Fig. 6(b) gauges the fairness of DP-Ditto measured by the standard deviation of the training losses concerning the PL models of all clients. The fairness of the PL models first degrades and then improves, and then degrades again as $t$ rises. The reason is that the accuracy of the clients’ models is poor and, hence, fairness is high initially. As $t$ rises, the DP noises increasingly affect the PL models. The fairness is better than that of the global FL model after a specific $t$ because the clients ignore the heterogeneity of their local data when all clients utilize the same global FL model for image classification. The PL model with a smaller $\lambda$ offers better fairness. When $\lambda = 0 . 1$ , the PL model offers the best fairness.
It is observed from Figs. 6(a), 6(c), and 6(e) that the testing accuracy first increases and then decreases with the increase of $t$ under $\lambda < 2$ . This is due to the fact that the effect of the DP noise accumulates as $t$ grows, causing performance degradation when $t$ is excessively large. By contrast, the updates of the local models (i.e., the PL models at $\lambda = 0$ ) are unaffected by the DP noises. As shown in Fig. 6(c)– 6(f), under the CNN models on the FMNIST and CIFAR10 datasets, when $t$ is small, the PL models operate the best in accuracy and fairness at $\lambda = 1$ and 2; in other words, the PL models are closer to the FL global models. This is because the adverse effect caused by noise accumulation is insignificant when $t$ is small, and the PL models can benefit from the generalization offered by the FL global model. On the other hand, when $t$ is large, the PL models perform the best at $\lambda = 0$ ; i.e., the PL models are closer to the FL local models, due to DP noise accumulation.
We further obtain the optimal $( T ^ { * } , \lambda ^ { * } )$ for the DNN model on the MNIST dataset, where $\epsilon \ = \ 1 0 0$ , $\delta \ = \ 0 . 0 1$ , and $C = 1 0$ , as described in Section V-C. Fig. 7(a) demonstrates the fairness of the PL models with $\lambda$ under different $T$ values. Fig. 7(a) shows that the optimal $\lambda ^ { * }$ decreases as $T$ increases. When $\lambda$ is small (i.e., $\lambda \to 0 ^ { + }$ ), the fairness of the PL models improves with $T$ , as DP-Ditto is dominated by PL with little assistance from FL. Consequently, the DP noise has little impact on the PL models. When $\lambda$ is large (i.e., $\lambda 2$ ), the fairness degrades as $T$ increases, as priority is given to generalization over personalization. The adverse effect of the DP noise becomes increasingly strong, compromising the fairness of the PL models. Fig. 7(b) demonstrates the training loss of the PL models against $T$ , under different $\lambda$ values. Given $T$ , the training loss decreases as $\lambda$ decreases. Moreover, the optimal $T ^ { * }$ increases as $\lambda$ decreases. By comparing Figs. 7(a) and 7(b), the optimal configuration $T ^ { * } = 8 0$ , $\lambda ^ { * } = 0 . 1$ ) is obtained to minimize the training loss and guarantee the fairness of the PL models. | Personalized federated learning (PFL), e.g., the renowned Ditto, strikes a
balance between personalization and generalization by conducting federated
learning (FL) to guide personalized learning (PL). While FL is unaffected by
personalized model training, in Ditto, PL depends on the outcome of the FL.
However, the clients' concern about their privacy and consequent perturbation
of their local models can affect the convergence and (performance) fairness of
PL. This paper presents PFL, called DP-Ditto, which is a non-trivial extension
of Ditto under the protection of differential privacy (DP), and analyzes the
trade-off among its privacy guarantee, model convergence, and performance
distribution fairness. We also analyze the convergence upper bound of the
personalized models under DP-Ditto and derive the optimal number of global
aggregations given a privacy budget. Further, we analyze the performance
fairness of the personalized models, and reveal the feasibility of optimizing
DP-Ditto jointly for convergence and fairness. Experiments validate our
analysis and demonstrate that DP-Ditto can surpass the DP-perturbed versions of
the state-of-the-art PFL models, such as FedAMP, pFedMe, APPLE, and FedALA, by
over 32.71% in fairness and 9.66% in accuracy. | [
"cs.LG",
"cs.DC"
] |
# 1 Introduction
Social media platforms like Twitter (now X) provide a space for people to express their opinions and stay informed about trending topics. However, like other social media platforms, Twitter is vulnerable to manipulation by malicious actors. These actors often engage in coordinated attacks that artificially amplify trends using fake accounts and bots. They can operate in a synchronized manner while concealing their identities, misleading users, journalists, and policymakers about what is genuinely trending. Such tactics also coerce users into engaging with fabricated trends, making it increasingly difficult to distinguish between organic trends and those driven by manipulation. Prior research has shown that coordinated campaigns are prevalent in several countries, including Turkey, Pakistan, and India [10, 20, 21].
Gopalakrishnan et al. [16] recently introduced a new graph classification dataset, LEN, consisting of engagement networks including some coordinated campaigns within Turkey’s Twitter sphere during the 2023 Turkish elections. To identify ground-truth campaign graphs, they focus on ephemeral astroturfing, a tactic where a coordinated network of bots rapidly generates a large volume of tweets to manipulate Twitter’s trending list, only to delete them shortly afterward. In each engagement graph, nodes represent users, while edges represent user interactions in the form of retweets, quotes, or replies.
The problem of coordinated campaign detection can be considered as a graph classification task, making it well-suited for message-passing neural networks (MPNNs) [18, 22, 36, 40]. However, Gopalakrishnan et al.’s analysis using established MPNNs highlights the challenges posed by LEN due to its large network sizes. MPNNs are often designed for domains with significantly smaller graphs, such as molecular structures. In contrast, LEN contains approximately ten times more edges, on average, than typical datasets of graphs, such as ogbn-ppa, one of the largest biological graph datasets.
Present work. In this paper, we exploit the fact that campaign-related engagement graphs tend to be denser. We aim to accurately identify coordinated campaigns using our method, called DEnsity-aware walks for COordinated campaign DEtection (DECODE). We incorporate network density into node embeddings by leveraging node-level density properties, such as degree, core number, and truss number, using random weighted walks (RWWs). For the RWW, we sample a new node using the current node’s density, ensuring that each node maintains a similar local density throughout the walk. These RWWs are converted to density-aware embeddings embeddings using Skipgram [26]. We then train a message-passing neural network (MPNN) using these embeddings as input features, enabling the model to leverage density awareness for improved classification. Figure 1 provides a descriptive diagram of our framework. The key contributions of our work can be summarized as follows:
– We leverage multiple density measures, namely degree, core numbers and truss numbers, to distinguish campaign and non-campaign networks based on local density.
– We introduce DECODE, which uses RWWs to encode each node such that its embedding closely resembles those of neighboring nodes with similar densities.
– We train MPNNs on the LEN dataset using the the density-aware embeddings to identify campaigns and their subtypes. To evaluate their effectiveness, we compare our models with the baselines from [16].
Fig. 1: An overview of DECODE. Density-based random weighted walk captures the local densities around a node. This representation is then concatenated with the input node feature available in the dataset. The concatenated embedding is encoded using an MPNN and subsequently aggregated to form the graph embedding, which is used for downstream classification.
The rest of the paper is structured as follows. In Section 2, we review related work, while Section 3 introduces the dataset and essential terminologies. In Section 4, we present our methodology, including the random weighted walk algorithm and its density-awareness encoding using degrees, core numbers, and truss numbers. We verify and discuss the performance improvements using the density-aware embeddings to demonstrate their importance in Section 5. Finally, we conclude by summarizing our findings and addressing potential limitations and future directions in Section 6. The code for DECODE is available at https://github.com/erdemUB/ECMLPKDD25.
# 2 Related Work
In this section we discuss related work in the domain of structural and positional encoding and its relevance in MPNNs and graph transformers (Section 2.1), and coordinated campaigns on social media platforms (Section 2.2).
# 2.1 Structural and Positional Encoding
Structural encoding is the process of ensuring that nodes with similar structural roles in a graph have similar embeddings. Positional encoding captures the proximity between two nodes in a graph. Both of these encodings can be obtained using random weighted walks. In Deepwalk, random walks are generated for each node [30]. These walks are converted to node embeddings using Skipgram [26]. Node2Vec biases the random walks to preserve both local (BFS) and global structure (DFS) [17]. Struct2Vec constructs similarity graphs using time-warping as the similarity function [33]. Random-walks are performed on the similarity graphs to generate node embeddings. Modularized non-negative matrix factorization (M-NMF) preserves community structure by using a community embedding matrix and community modularity score in addition to random walk embeddings [38]. Random walk features have also been used to improve structural awareness of MPNNs [43,44] and graph transformers [6,7,32]. Other commonly used structural and positional encoding include heat kernel [13, 23], subgraphs [3, 4], shortest distance [24], and node degree centralities [41]. In this work, we devise a new random-walk that captures the local density around nodes.
# 2.2 Coordinated Campaigns on Social Media
The process by which users on a social media platform coordinate in large groups to engage in malicious behavior is known as a coordinated campaign (also known as influence operations) [29]. These coordinated campaigns are often designed to mislead users by disseminating misinformation or by propagating falsified ideologies. Examples of coordinated campaigns include, using advertisements and influencers to dominate trends [27], deploying bots to boost user popularity [8, 9], and state-sponsored influence operations, such as Russia’s interference in the 2016 US elections [42] and alleged coordinated attacks by the Chinese Communist Party to sway public opinion [19]. Our work leverages coordinated campaigns driven by ephemeral astroturfing, where bots flood Twitter with random tweets to bypass filters and then delete them immediately [10, 16]. Since Twitter updates trends in windows, deleted tweets remain unaccounted for until the next update, allowing adversaries to exploit the illusion of organic engagement.
Over the years, methods have been developed to counter coordinated efforts on Twitter. These include techniques such as tweet and hashtag similarity [25,28], temporal methods focusing on tweet frequency [28,37], shared URLs and articles [15], and detecting other coordination signals [12, 31, 39]. Recent approaches have explored centrality-based node pruning on similarity networks [25], and graph neural networks [11,15] for detecting these attacks. Our work attempts to identify coordinated campaigns by modeling it as a graph classification problem. Additionally, we encode density-based properties using RWWs. Incorporating density-aware embeddings into MPNN training leads to improved performance in classification compared to using only the raw node features from the dataset.
# 3 Preliminaries
In this section, we describe our dataset and the key terminologies required for this work. We first provide details on the LEN dataset in Section 3.1. Then we
Table 1: Statistics of the engagement networks for LEN, containing 314 networks.
give a brief overview of the notation, the density metrics used (degree, $k$ -core, $k$ -truss) and message passing neural networks in Section 3.2.
# 3.1 Large Engagement Networks (LEN) Dataset
Large Engagement Networks (LEN) is graph dataset that contains coordinated campaigns related to Turkish Twitter. It focuses on the 2023 elections in Turkey when this issue was prevalent. The campaign graphs in the dataset are an outcome of ephemeral astroturfing. The dataset comprises of 314 engagement networks, where each network is associated to a trend. There are 179 campaign graphs and 135 non-campaign graphs. These graphs are further divided into sub-types such as politics, news, finance and more, as shown in Table 1. The nodes represent users and edges represents engagements between the users. A directed edge from node $X$ to $Y$ , signifies that $X$ engaged with (retweeted, replied to, or quoted) $Y$ . The graphs also consist of node and edge features. The features used for the node attributes include user description (bio), follower count, following count, user’s total tweet count, and user’s verification status. The edge attributes include the type of engagement (retweet, reply, or quote), engagement count (e.g., number of retweets), impression count, text, number of likes, whether the tweet is labeled as sensitive or not, and the timestamp of the tweet. Gopalakrishnan et al. provides three benchmarks for the LEN dataset: (1) binary classification to classify the networks into campaign and non-campaign networks; (2) multi-class classification to categorize campaigns into one of the 7 sub-types as shown in Table 1; and (3) binary classification of news networks into campaign and non-campaign. We use the LEN dataset as it is the only ground-truth graph classification dataset that identifies if a trend’s popularity is driven by coordinated campaigns.
# 3.2 Notation, Density Metrics, and MPNNs
A graph is a collection of vertices and edges. It is depicted as $G = ( V , E )$ , where $V$ is the number of vertices and $E$ is the number of edges. A graph can also be represented as $G = ( A , X , y )$ , where $A$ is the adjacency matrix, $X$ is the feature matrix for the nodes, and $y$ is the graph’s label.
Degree, $\boldsymbol { k }$ -core and $\boldsymbol { k }$ -truss: The degree of a node is the number of edges connected to it, providing a simple measure of its local connectivity [14]. A $k$ -core is a subgraph in which every node has at least $k$ connections within the subgraph [35]. The core number of a node represents the largest $k$ -core to which it belongs. Computing core numbers of all the nodes in a graph has a linear cost, $O ( | E | )$ . Similarly, a $k$ -truss is defined as a subgraph where each edge is part of at least $k - 2$ triangles within the subgraph [5]. The truss number of an edge indicates the highest $k$ -truss to which it belongs. Computing truss numbers is a bit costly, $O ( | E | ^ { 1 . 5 } )$ , but is still polynomial and practical for large networks. Since truss numbers are edge-based, we compute a node’s truss number by averaging the truss numbers associated to the connected edges. Degree, core number, and truss number all measure graph density, with degree indicating direct connections, and core and truss numbers reflecting a node’s role in dense substructures.
We use three local density measures for a node: (1) degree of the node, (2) core number of the node, and (3) average truss numbers of all edges incident to the node (we simply refer them as degree, core, and truss number of a node in the rest of the paper). Note that we ignore the edge directions in the engagement networks, hence we use the original definitions of $k$ -core and $k$ -truss for undirected graphs.
Message Passing Neural Networks: MPNNs consist of two steps, aggregate and update, as shown in Equation 1, where $\mathcal { N } ( v )$ is used to represent the neighborhood of node $v$ .
$$
h _ { v } ^ { ( l + 1 ) } = \mathrm { U P D A T E } \left( h _ { v } ^ { ( l ) } , \mathrm { A G G R E G A T E } \left( \{ h _ { u } ^ { ( l ) } \mid u \in \mathcal { N } ( v ) \} \right) \right)
$$
In the aggregate step, each node gathers information from its neighbors. This typically involves summing, averaging, or applying more complex functions (e.g., attention mechanisms) to the neighbors’ feature vectors. In the update step, the aggregated information is combined with the node’s own features to update its representation. To do so, a neural network (example, an MLP) or a simple transformation (example, a weighted sum) is applied. Therefore, the nodes refine their representations based on the information received. MPNNs generally differ in the aggregation strategy used. GCN uses dual-degree normalization to account for the varying number of neighbors each node may have [22]. GAT uses attention weight to assign varying weights to each neighbor [36]. GIN uses an MLP to perform aggregation using a trainable parameter (ϵ) to determine the amount of importance given to the ego node, as compared to its neighbors [40]. GraphSAGE is an inductive graph representational learning model that has the ability to generalize to unseen nodes, unlike transductive models [18]. This is done by learning a message-passing model on a sampled set of nodes in the given graph.
Table 2: Descriptive statistics to indicate the average degree, core number and truss numbers of the nodes across different sub-types spanning campaign and non-campaign graphs. Overall, campaign graphs exhibit higher local densities than the non-campaign ones.
# 4 Methodology
We propose DECODE, a random weighted walk (RWW) approach for learning density-aware node embeddings. Here, node densities are used to determine transition probabilities in the RWWs. This emphasis on density is because campaign graphs in LEN are denser than non-campaign graphs. Specifically, we use degree, core number, and truss number as density metrics due to their widespread use and computational efficiency [2,34]. Table 2 provides detailed statistics showcasing the density metrics across campaign and non-campaign graphs. Notably, the densest campaign graphs belonged to the reform sub-type, which constitutes a large portion of the dataset, as shown in Table 1.
Algorithm 1 provides a formal overview of DECODE. In our algorithm, $\phi$ , represents the density function, where $\phi ( v )$ returns the normalized density of a given node $v$ . The function $\phi$ is defined based on the chosen density metric for
Algorithm 1 Density-aware random weighted walk (DECODE)
RWWs. It can be set to return the degree, core number, or truss number of a node.
Additionally, we introduce $\tau$ , a scalar threshold parameter that differentiates between high and low-density nodes in RWWs. The threshold is set to one of the following values: 0.5, the median node density in the graph, or the midpoint of node densities, as detailed in Section 5.1. The steps for collecting RWWs in our algorithm are as follows:
1. At each step of the RWW, the next node is selected based on the density of the current node.
2. If the current node’s density exceeds the threshold $\tau$ , transitions to higherdensity neighbors are preferred, with sampling weights defined as ( $w _ { u } =$ $\phi ( u )$ ), where $w _ { u }$ represents the weight assigned to node $u$
3. Conversely, if the current node’s density is below $\tau$ , transitions to lowerdensity neighbors are favored by inverting the sampling weights ( $w _ { u } = 1 -$ $\phi ( u )$ ).
4. The transition probabilities for the neighbors are obtained by normalizing the sampling weights and new nodes are sampled using them at each step.
5. Once we obtain the RWWs, we use Skipgram to encode them, following prior methods [17, 30].
These density-aware embeddings and node feature are fed into the MPNNs for downstream classification. The MPNNs used in this paper include GCN,
GAT, GIN, and GraphSAGE. In the following section we discuss the experimental setup used in this paper and discuss our results for binary and multiclass classification by comparing our method to the results provided in [16].
# 5 Experimental Evaluation
We evaluate the performance of DECODE on the LEN dataset using two tasks: (i) campaign vs. non-campaign classification in engagement networks (binary classification) and (ii) campaign sub-type classification, where the sub-types are provided in Table 1 (multi-class classification). Section 5.1 details the experimental setup. Sections 5.2 and 5.3 present the experimental results for binary and multi-class classification, respectively.
# 5.1 Experimental Setup
We run our model on two input configurations: (i) density-aware embeddings and (ii) a concatenation of density-aware embeddings with the input node features available in the dataset. We consider each of the three density-based features in our random walks—degrees, core numbers, and truss numbers—and provide comparisons. To contextualize the empirical results of DECODE, we compare our method against four baselines: GCN, GAT, GIN, and GraphSAGE. These models are trained solely on the input node features available in the dataset. This comparison allows us to evaluate the importance of density-aware embeddings over existing node features. To construct the RWW embeddings, we set the walk length to 100. For encoding the nodes using Skipgram, we use a window length of 4, meaning each node is encoded using its four neighboring nodes in the random weighted walks. The walk embedding size is set to 128. We set the threshold parameter ( $\tau$ ) to the following values:
– 0.5: A fixed value of 0.5.
– Median: The median of the list of the density-based features in a graph.
– Mid-point (abbreviated as mid): This value is calculated as the average of the smallest and largest values of the density-based feature under consideration.
For MPNNs, we perform hyperparameter tuning over hidden layer sizes, $h \in$ $\{ 1 2 8 , 2 5 6 , 5 1 2 , 1 0 2 4 \}$ , and learning rates, $l \in \{ 0 . 0 0 1 , 0 . 0 0 0 1 , 0 . 0 0 0 0 1 \}$ as done in [16]. We also use mean pooling to produce graph embeddings.
# 5.2 Results for Campaign vs Non-campaign Classification
LEN consists of 179 campaign graphs and 135 non-campaign graphs. The results for accuracy and F1-score are presented in Tables 3 and 4, respectively. We observe the following key insights:
Table 3: Accuracy for binary classification. NF denotes the MPNN trained with node features and RWW denotes the one that used random-weighted walks. Best in each group is in bold. Underlined value denotes the best overall accuracy.
Table 4: F1-score for binary classification. NF denotes the MPNN trained with node features and RWW denotes the one that used random-weighted walks. Best in each group is in bold. Underlined value denotes the best overall F1- score.
– Pairing GraphSAGE with degree-based RWW achieves the best performance, yielding an accuracy of $0 . 8 5 2 \pm 0 . 0 1 0$ and an F1-score of $0 . 8 7 7 \pm 0 . 0 1 0$ , surpassing the best baseline in [16] by 0.117 and 0.112 for accuracy and F1-score, respectively. The value of $\tau$ is set to 0.5 in this case.
– RWW features consistently outperform LEN node features, achieving higher accuracy and F1-score in most cases.
– Embeddings learnt from degree-based RWW generally outperforms other density-aware variants, achieving the highest AUROC scores across all models. The only exception is when GCN and GraphSAGE are trained on embedROC Curves for Degree-based RWW ROC Curves for Core-number-based RWW ROCCurves forTruss-number-based RWW 1.0 1.0 1.0
G GCN (AUC = 0.87) GAT (AUC= 0.83) G GCN (AUC = 0.87) GAT (AUC = 0.79) G GAT (AUC = 0.82) GCN (AUC= 0.84) 0.2 0.2 0.2 GIN (AUC= 0.82) GIN (AUC= 0.81) GIN(AUC=0.80) SAGE (AUC = 0.84) SAGE (AUC = 0.84) SAGE (AUC= 0.75) 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 False Positive Rate False Positive Rate False Positive Rate
dings obtained from $k$ -core-based RWW, where the AUC scores are identical. The results are illustrated in Figure 2. – The best-performing threshold varies depending on the model used. Median serves as the best threshold for GCN and GAT, while 0.5 is optimal for GIN and GraphSAGE.
The above insights suggest that RWW based methods yield improvements in performance for both accuracy and F1-score. Additionally, degree-based RWW generally outperforms core or truss-based RWWs. However the choice of threshold is model-dependent.
# 5.3 Results of Campaign-type Classification
The goal here is to classify campaign graphs into one of the seven sub-types described in Table 1. Among these, the most common categories are Politics (62 graphs) and Reform (58 graphs). The results for accuracy and macro F1-scores are provided in Tables 5 and 6, respectively.
From these results, the following inferences can be made:
– Pairing GIN with degree-based RWW achieves the best performance, with an accuracy of $0 . 6 7 9 \pm 0 . 0 0 1$ , surpassing the baseline in [16] by 0.045. – The model accuracies benefit the most when input node features from the dataset are combined with density-aware embeddings, outperforming all the other setups in a majority of the scenarios. – The best-performing thresholds are mid for GCN, 0.5 for GAT, and median for GIN and GraphSAGE, yielding the highest accuracy for each model. – The highest macro-F1 score obtained by our work is $0 . 3 3 8 \pm 0 . 0 5 1$ (for GIN with truss-based RWW and $\tau$ set to 0.5) which is 0.013 less than the best performing baseline provided in [16]. We believe this happens due to label imbalance. Several campaign-type labels (example, finance, entertainment, cult) have very few samples, making them harder to classify.
Table 5: Accuracy results for multiclass classification. NF denotes the MPNN trained with node features and RWW denotes the one that used randomweighted walks. Best in each group is in bold. Underlined value denotes the best overall accuracy.
Table 6: Macro F1-score results for multiclass classification. NF denotes the MPNN trained with node features and RWW denotes the one that used randomweighted walks. Best in each group is in bold. Underlined values denote the best overall Macro-F1 score.
– We also provide confusion matrices for the models across various RWW methods in Figure 3, where we display the confusion matrix for the bestperforming configuration of each model-RWW pair. From this, we again observe that models struggle to accurately classify labels with fewer graphs.
The insights above suggest that RWW-based methods improve performance in terms of accuracy. Additionally, we find that degree is an effective parameter for random weighted walks, and the median is a suitable threshold. However,
GCN (core) GAT (core) GIN (core) GraphSAGE (core)
PO 0 02 1 0 1 5 15 ° 0 0 0 00 TNO 4 21 ° 0 0 1 7 15 ° 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 Ref. 2 1150 0 2 0 15 0 C Ref. 2 14 0 0 2 0 Com. C 0 0 0 0 0 0 0 0 C 0 Com. 0 0 0 0 Cult 0 0 0 0 0 0 0 0 0 0 0 C 0 Cult 0 0 0 0 0 0 0 0 0 Fin. 0 0 0.0 0 0 0 0 0 0 C 0 0 0 Fin. 0 0 0 0 0 0 C 0 0 0 C 0 C m R u F True Label True Label True Label True Label GCN (deg) GAT (deg) GIN (deg) GraphSAGE (deg)
Nt 000000 0 t: 0000 0 0 0 PoLoi1 0 t 000 0 0 5 14010 1 3 5 141 2 3 615 0 Ent. 01 02 i1i 0 Q Ent. : 0 1 5F 0 C 0 Ent. 0 0 14 0 0 C Eet. 00 0 15 C Com. 0 0 0 0 0 0 0 Com. 0 r 0 ? 0 Com. 0 0 0 0 0 0 0 Com. 0 0 Cult 0 0 0 0 0 0 0 Cult 0 0 C ? 0 Cult 0 0 0 0 0 0 0 Cult 0 0 C Fin. 0 0 0 0 0 0 0 Fin. 0 0 C 0 0 0 0 Fin. 0 0 0 0 0 0 C Fin. 0 0 C 1 0 0 C m True Label True Label True Label True Label GCN (truss) GAT (truss) GIN (truss) GraphSAGE (truss)
G e 000 0 0 00 Nent. 1 1 0 0 0 0 0 N 2 20 0 1 0 0 NPoI 1000 0 0 00 6 13 1 2 412 0 0 0 1 2 12 1 1 0 0 3 6140 Eet. 0 02 0 14 0 02 Enet. 0 02 i° 0 0 Eet. 0 02 °14 0 02 0 Eet. °i 0 0 140 0 Com. 0 0 0 0 0 0 0 Com. 0 0 C 0 0 0 0 Com. 0 0 0 0 0 0 0 Com. 0 0 0 Cult 0 0 0 0 0 0 0 Cult 0 0 0 0 C 0 Cult 0 0 0 0 0 0 0 Cult 0 0 L Fin. 0 0 0 0 0 0 0 Fin. 0 0 0 0 0 0 0 Fin. 0 0 0 0 0 0 0 Fin. 0 0 C A 8 True Label True Label True Label True Label
we observe a drop in F1-scores, likely due to the models’ difficulty in classifying graphs associated with labels that have fewer samples. | Coordinated campaigns frequently exploit social media platforms by
artificially amplifying topics, making inauthentic trends appear organic, and
misleading users into engagement. Distinguishing these coordinated efforts from
genuine public discourse remains a significant challenge due to the
sophisticated nature of such attacks. Our work focuses on detecting coordinated
campaigns by modeling the problem as a graph classification task. We leverage
the recently introduced Large Engagement Networks (LEN) dataset, which contains
over 300 networks capturing engagement patterns from both fake and authentic
trends on Twitter prior to the 2023 Turkish elections. The graphs in LEN were
constructed by collecting interactions related to campaigns that stemmed from
ephemeral astroturfing. Established graph neural networks (GNNs) struggle to
accurately classify campaign graphs, highlighting the challenges posed by LEN
due to the large size of its networks. To address this, we introduce a new
graph classification method that leverages the density of local network
structures. We propose a random weighted walk (RWW) approach in which node
transitions are biased by local density measures such as degree, core number,
or truss number. These RWWs are encoded using the Skip-gram model, producing
density-aware structural embeddings for the nodes. Training message-passing
neural networks (MPNNs) on these density-aware embeddings yields superior
results compared to the simpler node features available in the dataset, with
nearly a 12\% and 5\% improvement in accuracy for binary and multiclass
classification, respectively. Our findings demonstrate that incorporating
density-aware structural encoding with MPNNs provides a robust framework for
identifying coordinated inauthentic behavior on social media networks such as
Twitter. | [
"cs.SI",
"cs.LG"
] |
# 1 Introduction
Clustering is a fundamental task in both machine learning and data mining [24, 35, 25]. Edge-colored clustering (ECC), in particular, is a useful model when interactions between the items to be clustered are represented as categorical data [8, 4]. To provide intuition, let us consider the following simple, illustrative example from prior work [34, 4, 37, 51, 48, 19, 20]: given a set of food ingredients, recipes that use them, and a (noisy) labeling of these recipes indicating their cuisine (e.g., Italian or Indian), can we group the food ingredients by their cuisine? To address this question, we can begin by considering a hypergraph whose vertices correspond to ingredients, (hyper)edges represent recipes, and edge colors correspond to cuisines. We can then find a labeling of the ingredients such that, in most recipes, all ingredient labels match the recipe’s label. This is precisely what ECC does: given an edge-colored hypergraph, the goal is to assign
colors to its vertices so that the number of edges where vertex colors differ from the edge color is minimized.
Intuitively, this problem offers an approach for clustering vertices when edge labels are noisy.
However, ECC has an inherent limitation in that it insists on assigning exactly one color to every vertex, enforcing a nonoverlapping and exhaustive clustering. In the above illustrative example, food ingredients are often shared across geographically neighboring cuisines, indicating that overlapping clustering may be preferable. Moreover, some ingredients, such as salt, commonly appear in nearly all cuisines and may be considered outliers that should ideally be excluded from the clustering process. To address these limitations, three generalizations of ECC, namely, Local ECC, Global ECC, and Robust ECC, have been proposed [19]. Among them, Local ECC and Global ECC allows overlapping clustering: in Local ECC, a local budget $b _ { \mathrm { l o c a l } }$ that specifies the maximum number of colors each vertex can receive is given as an input parameter, thereby allowing clusters to overlap. In Global ECC, vertices may be assigned multiple colors, but with the total number of extra assignments constrained by a global budget $b _ { \mathbf { g } \mid \mathbf { o b a } \mid }$ given as input. On the other hand, Robust ECC enhances robustness against vertex outliers by allowing up to $b _ { \mathrm { r o b u s t } }$ vertices to be deleted from the hypergraph. This budget $b _ { \mathrm { r o b u s t } }$ is also specified as part of the input. (Alternatively, this can be viewed as designating those vertices as “wildcards” that can be treated as any color.)
While Local ECC, Global ECC, and Robust ECC are useful extensions of ECC that effectively address its limitations, these problems are unfortunately NP-hard, making exact solutions computationally intractable. This directly follows from the NP-hardness of ECC [8], a common special case of all three problems. This computational intractability naturally motivates the study of approximation algorithms for these problems. Recall that an algorithm is called a $\rho$ -approximation algorithm if it runs in polynomial time and guarantees a solution within a factor of $\rho$ relative to the optimum.
In this paper, we present a new algorithmic framework for overlapping and robust clustering of edgecolored hypergraphs that is linear programming-based (LP-based) yet also combinatorial. Previously, combinatorial algorithms and (non-combinatorial) LP-based algorithms have been proposed for these problems. For Local ECC, Crane et al. [19] gave a greedy combinatorial $r$ -approximation algorithm, where $r$ is the rank of the hypergraph. Their computational evaluation demonstrated that this algorithm runs remarkably faster than their own LP-rounding algorithm, at the expense of a trade-off in solution quality. The theoretical analysis [19] of the LP-rounding algorithm successfully obtains an approximation ratio that does not depend on $r$ : they showed that their algorithm is a $( b _ { \mathsf { l o c a l } } + 1 )$ -approximation algorithm. They state it as an open question whether there exists an $O ( 1 )$ -approximation algorithm for Local ECC. For Robust ECC as well, Crane et al. gave a greedy $r$ -approximation algorithm; however, their LP-rounding algorithm in this case does not guarantee solution feasibility. According to their computational evaluation, solutions produced by the LP-rounding algorithm were of very high quality but violated the budget constraint, which is reflected in the theoretical result: their algorithm is a bicriteria $( 2 + \epsilon , 2 + \textstyle \frac { 4 } { \epsilon } )$ -approximation algorithm for any positive $\epsilon$ , i.e., an algorithm that produces an $( 2 + \epsilon )$ -approximation solution but violates the budget constraint by a multiplicative factor of at most $2 + \textstyle { \frac { 4 } { \epsilon } }$ . Finally for Global ECC, Crane et al. gave similar results: a greedy $r$ -approximation algorithm and a bicriteria $\begin{array} { r } { \left. b _ { \mathrm { g l o b a l } } + 3 + \epsilon , 1 + \frac { b _ { \mathrm { g l o b a l } } + 2 } { \epsilon } \right. } \end{array}$ -approximation algorithm for any positive $\epsilon$ , where the latter, empirically, was slow but produced solutions of high quality. Since their bicriteria approximation ratio is not $( O ( 1 ) , O ( 1 ) )$ for Global ECC, Crane et al. left it another open question whether bicriteria $( O ( 1 ) , O ( 1 ) )$ -approximation is possible for Global ECC.
The primal-dual method is an algorithmic approach that constructs combinatorial algorithms based on LP, allowing one to combine the strengths of both worlds [29, 30]. Our algorithmic framework is designed using the primal-dual method. We analyze its performance both experimentally and theoretically. For Local ECC, our framework yields a combinatorial $( b _ { \mathsf { l o c a l } } + 1 )$ -approximation algorithm, which is the same approximation ratio as Crane et al.’s LP-rounding algorithm; however, our algorithm is combinatorial and runs in linear time. The experiments confirmed that, compared to the previous combinatorial algorithm, our algorithm brings improvement in both computation time and solution quality. We complement this algorithmic result by showing inapproximability results that match our approximation ratio; this answers one of Crane et al.’s open questions. For Robust ECC and Global ECC, our framework gives a true (nonbicriteria) approximation algorithm, avoiding the need for bicriteria approximation.1 Our true approximation algorithm for Robust ECC, with the ratio of $2 ( b _ { \mathrm { r o b u s t } } + 1 )$ , was enabled by our new LP relaxation: the integrality gap of the relaxation used by previous results is $+ \infty$ [19], whereas our LP has an integrality gap of $O ( b _ { \mathsf { r o b u s t } } )$ . In fact, we show that our gap is $\Theta ( b _ { \mathsf { r o b u s t } } )$ , suggesting that our ratio may be asymptotically the best one can achieve based on this relaxation. For Global ECC, our true approximation algorithm has the ratio of $2 ( b _ { \mathsf { g l o b a l } } + 1 )$ , and our bicriteria approximation algorithm has the ratio of $( 2 + \epsilon , 1 + \textstyle \frac { 2 } { \epsilon } )$ . This affirmatively answers another open question of Crane et al.: bicriteria $( O ( 1 ) , O ( 1 ) )$ -approximation for Global ECC is indeed possible. We also show that our relaxation has the integrality gap of $\Theta ( b _ { \mathsf { g l o b a l } } )$ .
Below, we summarize which contributions of our work are presented in which sections of the paper.
- In Section 3.1, we present our algorithm for Local ECC; its performance is analyzed both experimentally (Section 4.2) and theoretically (Section 3.1 and Appendix B.1). We also present the inapproximability result (Theorems 3.3 and 3.4) that answers Crane et al.’s open question [19], whose technical proof is deferred to Appendix B.3.
- In Section 3.2, we present our true approximation algorithm for Robust ECC based on a new stronger LP formulation. Our algorithm’s performance is analyzed both experimentally (Section 4.3) and theoretically (Section 3.2 and Appendix C.3), including an integrality gap lower bound (Section 3.2; note that an upper bound is implied by the proof of Theorem 3.5).
- In Section 3.2 and Appendix D.2, we present our true approximation algorithm for Global ECC, whose performance is analyzed both experimentally (Section 4.3) and theoretically (Appendix D.3). This algorithm extends to the bicriteria setting (Section 3.2 and Appendix D.5), answering another open question of Crane et al. [19].
We note that LP-rounding algorithms based on our relaxations can match the ratios of our combinatorial true approximation algorithms. However, we omit them from this paper, as they offer no improvement in performance guarantees while requiring significantly more computation time to solve LPs.
Related work. ECC has been used for a variety of tasks including categorical community detection, temporal community detection [4], and diverse and experienced group discovery [5]; recently, it has also been applied to fair and balanced clustering [20]. For reasons of space, we review previous work related to it in Appendix A.
# 2 Problem definitions
In this section, we formally define the problems considered in this paper. First, we describe the part of the input that is common to all three problems. We are given a hypergraph $H = ( V , E )$ and a set $C$ of colors as input. Since $H$ is a hypergraph, we have $E \subseteq 2 ^ { V }$ . Each edge $e \in E$ is associated with a color $c _ { e } \in C$ .
Given a node coloring $\sigma : V C$ , we say an edge $e \in E$ is a mistake if there exists a node $v \in e$ whose assigned color $\sigma ( v )$ differs from $c _ { e }$ , i.e., $c _ { e } \neq \sigma ( v )$ . Otherwise, we say that $e$ is satisfied. In Local ECC and Global ECC, a node coloring $\sigma : V \to 2 ^ { C }$ assigns (possibly) a multiple number of colors to each node. In these problems, we say $e \in E$ is a mistake if there exists a node $v \in e$ whose assigned color does not include $c _ { e }$ , i.e., $c _ { e } \notin \sigma ( v )$ .
Definition 2.1. In Local ECC, in addition to $H$ , $C$ , and $\{ c _ { e } \} _ { e \in E }$ , a local budget $b _ { \mathsf { l o c a l } } \in \mathbb { Z } _ { \geq 1 }$ is given as input. The goal is to find a node coloring $\sigma : V \to 2 ^ { C }$ such that $| \sigma ( v ) | \le b _ { \mathsf { l o c a l } }$ for all $v$ to minimize the number of mistakes.
Definition 2.2. In Global ECC, in addition to $H$ , $C$ , and $\{ c _ { e } \} _ { e \in E }$ , a global budget $b _ { \mathsf { g l o b a l } } \in \mathbb { Z } _ { \geq 0 }$ is given as input. The goal is to find a node coloring $\sigma : V \to 2 ^ { C }$ such that $| \sigma ( v ) | \geq 1$ for all $v$ and $\begin{array} { r } { \sum _ { v \in V } | \sigma ( v ) | \leq | V | + b _ { \mathsf { g l o b a l } } } \end{array}$ , to minimize the number of mistakes.
Definition 2.3. In Robust ECC, in addition to $H$ , $C$ , and $\{ c _ { e } \} _ { e \in E }$ , a node-removal budget $b _ { \mathsf { r o b u s t } } \in \mathbb { Z } _ { \geq 0 }$ is given as input. The goal is to remove at most $b _ { \mathrm { r o b u s t } }$ nodes from the hypergraph and find a node coloring $\sigma : ( V \setminus V _ { R } ) \to C$ to minimize the number of mistakes, where $V _ { R }$ denotes the set of removed nodes.
Recall that removing a node from $H$ makes the node disappear from all the incident edges.
We conclude this section by introducing notation to be used throughout this paper. For $F \subseteq E$ , let $\chi ( F ) : = \{ c _ { e } \mid e \in F \}$ be the set of colors of the edges in $F$ . For $v \in V$ , let $\delta ( v )$ be the set of edges that are incident with $v$ ; $d _ { v } : = | \delta ( v ) |$ is the degree of $v$ . Let $\delta _ { c } ( v )$ be the set of edges in $\delta ( v )$ whose color is $c$ , i.e., $\delta _ { c } ( v ) : = \{ e \in \delta ( v ) \mid c _ { e } = c \}$ .
# 3 Proposed algorithms
# 3.1 Local ECC
In this section, we informally present our approximation algorithm for Local ECC. Although we will discuss all the necessary technical details here, we will still present a formal analysis in Appendix B for the completeness’ sake.
The algorithm will be presented for a slightly different version of the problem: instead of $b _ { \mathsf { l o c a l } }$ that uniformly applies to all nodes, we will let each node $v$ specify its own budget $b _ { v }$ . We also introduce edge weights $w _ { e } \in \mathbb { Q } _ { \geq 0 }$ so that we minimize the total weight, not number, of mistakes. Note that it suffices to solve this version of the problem, since we can simply set all $b _ { v }$ as $b _ { \mathrm { l o c a l } }$ and all $w _ { e }$ as 1.
Following are an LP relaxation (left) and its dual (right). Intuitively, $x _ { v , c } = 1$ indicates that node $v$ is colored with $c$ and $x _ { v , c } = 0$ otherwise; $y _ { e } = 1$ if $e$ is a mistake and $y _ { e } = 0$ otherwise.
$$
\begin{array} { r l r l r l r l } & { \sum _ { e \in E } w _ { e } y _ { e } } & & { \quad \mathrm { m a x ~ } \sum _ { e \in E , v \in e } \beta _ { e , v } - \sum _ { v \in V } b _ { v } \alpha _ { v } } & & { } \\ & { \sum _ { c \in C } x _ { v , c } \leq b _ { v } , } & & { \quad \forall v \in V , } & & { \quad \mathrm { s . t . ~ } \sum _ { e \in \delta _ { c } ( v ) } \beta _ { e , v } \leq \alpha _ { v } , } & & { \quad \forall v \in V , c \in C _ { v } \operatorname { I } _ { \mathbb { C } } ^ { 2 } \operatorname { I } _ { \mathbb { C } } ^ { 2 } \operatorname { I } _ { \mathbb { C } } ^ { 2 } \quad } \\ & { x _ { v , c _ { e } } + y _ { e } \geq 1 , } & & { \quad \forall e \in E , v \in e , } & & { \quad \sum _ { v \in e } \beta _ { e , v } \leq w _ { e } , } & & { \quad \forall e \in E , } \\ & { x _ { v , c } \geq 0 , } & & { \quad \forall v \in V , c \in C , } & & { \quad \alpha _ { v } \geq 0 , } & & { \quad \forall v \in V , } \\ & { y _ { e } \geq 0 , } & & { \quad \forall e \in E . } & & { \quad \beta _ { e , v } \geq 0 , } & & { \quad \forall e \in E , v \in e . } \end{array}
$$
As a primal-dual algorithm, our algorithm maintains a dual solution $( \alpha , \beta )$ , which changes throughout the execution of the algorithm but remains feasible at all times. The algorithm constructs the “primal” solution partially guided by the complementary slackness: namely, it allows an edge $e$ to be a mistake only if the corresponding dual constraint $\sum { _ { v \in e } \beta _ { e , v } } \le w _ { e }$ is tight, i.e., $\sum _ { v \in e } \beta _ { e , v } = w _ { e }$ . This is useful since the cost of the algorithm’s output can then be written as $\begin{array} { r } { \sum _ { e \in E _ { m } } w _ { e } = \sum _ { e \in E _ { m } } \sum _ { v \in e } \beta _ { e , v } \le \sum _ { e \in E } \sum _ { v \in e } \beta _ { e , v } } \end{array}$ , where $E _ { m }$ is the set of mistakes in the output. Let $\begin{array} { r } { B _ { v } : = \sum _ { e \in \delta ( v ) } \beta _ { e , v } } \end{array}$ , and the algorithm’s output cost is no greater than $\textstyle \sum _ { v \in V } B _ { v }$ at termination.
In ord∈er to maintain dual feasibility, the algorithm begins with a trivial dual feasible solution $( \alpha , \beta ) =$ $( \mathbf { 0 } , \mathbf { 0 } )$ and only increases dual variables, never decreasing them. The first set of constraints will never be violated because whenever we increase $\sum _ { \mathbf { \boldsymbol { \epsilon } } } { } _ { e \in \delta _ { \boldsymbol { \epsilon } } ( v ) } \beta _ { e , v }$ , we will increase $\alpha _ { v }$ by the same amount. The second set of constraints will never be violated simply because we will stop increasing all $\beta _ { e , v }$ for $v \in e$ once edge $e$ becomes tight.
We are now ready to present the algorithm. We will describe the algorithm as if it is a “continuous” process that continuously increases a set of variables as time progresses. In this process over time perspective (see, e.g., [31]), a primal-dual algorithm starts with an initial (usually all-zero) dual solution at time 0, and the algorithm specifies the increase rate at which each dual variable increases. The dual variables continue to increase at the specified rates until an event of interest—typically, a dual constraint becomes tight—occurs.
At that point, the algorithm pauses the progression of time to handle the event and recompute the increase rates. Once updated, time proceeds again.
Consider the following algorithm. It maintains a set $L$ of all those edges that are not tight. We call these edges loose. One point that requires additional explanation in this pseudocode is that it increases $a$ sum of variables $\sum _ { e \in \delta _ { c } ( v ) \cap L } \beta _ { e , v }$ at unit rate, rather than a single variable. This should be interpreted as increasing the variables in the summation in an arbitrary way, provided that their total increase rate is 1 and that no variable is ever decreased. The algorithm’s analysis holds for any such choice of the increase rates of individual variables as long as their total is 1.
# Algorithm 1 Proposed algorithm for Local ECC
$\alpha \mathbf { 0 } ; \beta \mathbf { 0 }$
$L \gets \{ e \in E \mid w _ { e } > 0 \}$
for $v \in V$ do while $| \chi ( \delta ( v ) \cap L ) | > b _ { v }$ do increase $\alpha _ { v }$ and $\textstyle \sum _ { e \in \delta _ { c } ( v ) \cap L } \beta _ { e , v }$ for each $c \in \chi ( \delta ( v ) \cap L )$ at unit rate, until there exists $e$ such that $\sum { _ { u \in e } \beta _ { e , u } } = w _ { e }$ ifP∃∈e $\sum { _ { u \in e } \beta _ { e , u } } = w _ { e }$ then remove all such edges from $L$ $\sigma ( v ) \gets \chi ( \delta ( v ) \cap L )$
This algorithm can be implemented as a usual discrete algorithm using the standard technique for emulating “continuous” algorithms by discretizing them. Once the increase rates are determined, the discretized algorithm computes, for each edge, after how much time the edge would become tight if we continuously and indefinitely increased the dual variables, and selects the minimum among them. That is the amount of time the emulated algorithm runs before getting paused. The discretized algorithm then handles the event, recomputes the increase rates, and repeat. See Appendix B.5 for the full discretized version of the algorithm.
It is easy to see that Algorithm 1 returns a feasible solution: we assign $\chi ( \delta ( v ) \cap L )$ to $v$ only after ensuring $| \chi ( \delta ( v ) \cap L ) | \leq b _ { v }$ . The analysis can focus on bounding the final value of $\textstyle \sum _ { v \in V } B _ { v }$ : recall that it was an upper bound on the algorithm’s output cost. We will compare $\textstyle \sum _ { v \in V } B _ { v }$ against the dual objective value, which is a lower bound on the true optimum from the LP duality.
Both $\textstyle \sum _ { v \in V } B _ { v }$ and the dual objective value change throughout the algorithm’s execution. At the beginning, both are zeroes because $( \alpha , \beta ) = ( \mathbf { 0 } , \mathbf { 0 } )$ . How do they change over the execution? In each iteration of the while loop, the algorithm increases $\alpha _ { v }$ at unit rate and $\boldsymbol { B } _ { v }$ at rate $| \chi ( \delta ( v ) \cap L ) |$ , where $v$ is the vertex being considered at the moment. (Note that $\begin{array} { r } { B _ { v } = \sum _ { c \in \chi ( \delta ( v ) \cap L ) } \sum _ { e \in \delta _ { c } ( v ) \cap L } \beta _ { e , v } + \sum _ { e \in \delta ( v ) \setminus L } \beta _ { e , v } . , } \end{array}$ ) That is, at any given moment of the algorithm’s execution, the rate by which $\textstyle \sum _ { u \in V } B _ { u }$ gets increased is $| \chi ( \delta ( v ) \cap L ) | > b _ { v }$ , and the increase rate of the dual objective is $| \chi ( \delta ( v ) \cap L ) | - b _ { v }$ . Note that the ratio between these two rates is χ|(χδ(δv()v)L∩)L)|bv ≤ bv + 1 since |χ(δ(v) ∩ L)| > bv. Since the upper bound on the algorithm’s output and the lower bound on the true optimum were initially both zeroes and the ratio between their increase rate is no greater than $b _ { v } + 1$ at all times, the overall approximation ratio is $b _ { \operatorname* { m a x } } + 1$ where $b _ { \mathfrak { m a x } } : = \operatorname* { m a x } _ { v \in V } b _ { v }$ . Note that $\boldsymbol { b _ { \mathrm { { m a x } } } } = \boldsymbol { b _ { \mathrm { { l o c a l } } } }$ under the original definition of Local ECC.
Theorem 3.1. Algorithm $\mathit { 1 }$ is $a$ $\boldsymbol { b } _ { | 0 \mathsf { c a l } } + 1 )$ -approximation algorithm for Local ECC.
Algorithm 1 can be implemented to run in linear time (see Lemma B.3 in Appendix B.1).
Our algorithmic framework harnesses the full “power” of the LP relaxation, in that its approximation ratio matches the integrality gap of the relaxation. We defer the proof of Theorem 3.2 to Appendix B.2.
Theorem 3.2. There is a sequence of instances of Local ECC such that the ratio between a fractional solution and an optimal integral solution converges to $b _ { \mathsf { l o c a l } } + 1$ .
In fact, our inapproximability results further show that our approximation ratio is essentially the best possible. We note that these results answer one of the open questions raised by Crane et al. [19], namely, whether an $O ( 1 )$ -approximation algorithm is possible for Local ECC.
Theorem 3.3. For any constant $\epsilon > 0$ , it is UGC-hard to approximate Local ECC within a factor of $b _ { \mathsf { l o c a l } } + 1 - \epsilon$ .
If one prefers a milder complexity-theoretic assumption, we show the following theorem as well.
Theorem 3.4. For any $b _ { \mathsf { l o c a l } } \geq 2$ and any constant $\epsilon > 0$ , there does not exist a $\left( b _ { \mathsf { l o c a l } } - \epsilon \right)$ -approximation algorithm for Local ECC unless $\mathrm { P } = \mathrm { N P }$ .
The proofs of Theorems 3.3 and 3.4 are deferred to Appendix B.3.
Final remarks. Since our algorithm considers the nodes one by one and operates locally, Algorithm 1 immediately works as an online algorithm, in which vertices are revealed to the algorithm in an online manner.2 In Appendix B.4, we also show that the algorithm can be analyzed in the bicriteria setting, yielding a (1 + ϵ, 1 + b 1 ⌈ $\begin{array} { r } { ( 1 + \epsilon , 1 + \frac { 1 } { b _ { \sf l o c a l } } \lceil \frac { b _ { \sf l o c a l } } { \epsilon } \rceil - \frac { 1 } { b _ { \sf l o c a l } } ) } \end{array}$ -approximation for $\epsilon \in ( 0 , b _ { \mathrm { l o c a l } } ]$ .
# 3.2 Robust ECC and Global ECC
In this section, we summarize our algorithmic results for Robust ECC and Global ECC. The proposed approximation algorithms for these two problems are quite similar; as such, in the interest of space, we will sketch our algorithm only for Robust ECC in this section. The only real difference between the two algorithms is in the constraints of the dual LPs.
Following is the dual LP used by the algorithm for Robust ECC. (As in Section 3.1, edges have weight $w _ { e }$ , but we can simply set all $w _ { e }$ as 1.)
$$
\begin{array} { r l r } & { \sum _ { e \in E , v \in e } \beta _ { e , v } - \sum _ { v \in V } \alpha _ { v } - \lambda b _ { \mathrm { r o b u s t } } } \\ & { \sum _ { e \in \delta _ { c } ( v ) } \beta _ { e , v } \le \alpha _ { v } , \quad } & { \forall v \in V , c \in C , } \\ & { \sum _ { v \in e } \beta _ { e , v } \le w _ { e } , \quad } & { \forall e \in E , } \\ & { \sum _ { e \in \delta ( v ) } \beta _ { e , v } - \alpha _ { v } \le \lambda , } & { \forall v \in V , } \\ & { \alpha _ { v } \ge 0 , \quad } & { \forall v \in V , } \\ & { \beta _ { e , v } \ge 0 , \quad } & { \forall e \in E , v \in e , } \\ & { \lambda \ge 0 . } \end{array}
$$
Let us now sketch the algorithm for Robust ECC we propose. The algorithm maintains a dual feasible solution $( \alpha , \beta , \lambda )$ , initially set as $( \mathbf { 0 } , \mathbf { 0 } , 0 )$ . The set $L$ will be kept as the set of loose edges; $R \subseteq V$ is the set of nodes with at least two incident loose edges of distinct colors. Intuitively, $R$ is the set of nodes we will remove from the hypergraph. The algorithm therefore continues its execution until $| R | \leq b _ { \mathsf { r o b u s t } }$ holds. When increasing the dual variables, the algorithm increases variables associated with all vertices in $R$ at the same time, unlike Algorithm 1 which handles one node at a time. The following two properties will hold:
(i) The algorithm increases $\lambda$ and $\begin{array} { r } { \sum _ { e \in \delta ( v ) \cap L } \beta _ { e , v } - \alpha _ { v } } \end{array}$ for each $v \in R$ at the same rate.
(ii) For each $v \in R$ , the algorithm increases $\alpha _ { v }$ and $\sum _ { e \in \delta _ { c } ( v ) \cap L } \beta _ { e , v }$ for each $c \in \chi ( \delta ( v ) \cap L )$ at the same rate. In general, the increase rate of $\alpha _ { v 1 }$ may be different from that of $\alpha _ { v _ { 2 } }$ for $v _ { 1 } \neq v _ { 2 }$ .
These properties can be ensured as follows: the increase rate of $\lambda$ is set as $^ { 1 }$ . For each $v \in R$ , we increase $\alpha _ { v }$ and $\textstyle \sum _ { e \in \delta _ { c } ( v ) \cap L } \beta _ { e , v }$ for each $c \in \chi ( \delta ( v ) \cap L )$ at rate $\frac { 1 } { | \chi ( \delta ( v ) \cap L ) | - 1 }$ .
Once $| R |$ becomes less than or equal to $b _ { \mathrm { r o b u s t } }$ , the algorithm removes $R$ from the hypergraph and assigns every node $v \in V \setminus R$ the (only) color in $\chi ( \delta ( v ) \cap L )$ . If $\chi ( \delta ( v ) \cap L ) = \emptyset$ , an arbitrary color can be assigned without affecting the theoretical guarantee on solution quality; in practical implementation, we could employ heuristics for marginal improvement. In the interest of space, the full pseudocodes are deferred to Appendices C.2 and C.6.
We prove the following theorems in Appendices C.3 and D.3.
Theorem 3.5. There exists a $2 ( b _ { \mathrm { r o b u s t } } + 1 )$ -approximation algorithm for Robust ECC.
Table 1: Statistics of the benchmark datasets.
Theorem 3.6. There exists a $2 ( b _ { \mathtt { g l o b a l } } + 1 )$ -approximation algorithm for Global ECC.
The LP relaxation of Crane et al. [19] for Robust ECC has infinite integrality gap, whereas the integrality gap of our LP is $O ( b _ { \mathsf { r o b u s t } } )$ , following from the proof of Theorem 3.5. This makes it possible to obtain a true (non-bicriteria) approximation algorithm based on our LP. In fact, the following theorems show that our LP for Robust ECC (and Global ECC) has an integrality gap of $\Theta ( b _ { \mathsf { r o b u s t } } )$ (and $\Theta \big ( b _ { \mathsf { g l o b a l } } \big ) \big )$ , respectively. Their proofs are deferred to Appendices C.4 and D.4.
Theorem 3.7. The integrality gap of our LP for Robust ECC is at least $b _ { \mathrm { r o b u s t } } + 1$ .
Theorem 3.8. The integrality gap of the LP for Global ECC is at least $b _ { \mathrm { g l o b a l } } + 1$ .
Final remarks. Our algorithms can be analyzed in the bicriteria setting as well, yielding a bicriteria $\begin{array} { r } { ( 2 + \epsilon , 1 + \frac { 1 } { b } \lceil \frac { 2 b } { \epsilon } \rceil - \frac { 1 } { b } ) } \end{array}$ -approximation algorithm for all $\epsilon \in ( 0 , 2 b ]$ , where $b = b _ { \mathsf { r o b u s t } }$ for Robust ECC and $b = b _ { \mathrm { g } \mid \mathrm { o b a l } }$ for Global ECC. This improves the best bicriteria approximation ratios previously known; furthermore, it affirmatively answers one of the open questions of Crane et al. [19], namely, whether there exists a bicriteria $( O ( 1 ) , O ( 1 ) )$ -approximation algorithm for Global ECC. See Appendices C.5 and D.5.
# 4 Experiments
In this section, we analyze the performance of our algorithmic framework through experiments. We describe the experimental setup in Section 4.1. We evaluate and discuss the performance of our algorithm for Local ECC in Section 4.2. In Section 4.3, we address Robust and Global ECC.
# 4.1 Setup
Our experiments used the same benchmark as Crane et al. [19], which contains six datasets. See Appendix $\mathrm { E }$ for further description of the individual datasets. We remark that these datasets have been used as a benchmark to experimentally evaluate ECC also in other prior work [4, 48]. Table 1 summarizes some statistics of the datasets: the number of nodes $| V |$ , number of edges $| E |$ , number of colors $| C |$ , rank $r : = { }$ $\operatorname* { m a x } _ { e \in E } | e |$ , average degree $\begin{array} { r } { d : = \sum _ { v \in V } d _ { v } / | V | } \end{array}$ , maximum color-degree $\Delta _ { \chi } : = \operatorname* { m a x } _ { v \in V } | \chi ( \delta ( v ) ) |$ , average color-degree $\begin{array} { r } { d _ { \chi } : = \sum _ { v \in V } | \chi ( \delta ( v ) ) | / | V | } \end{array}$ , and the ratio $\rho$ of vertices whose color degree is at least 2, i.e., $\rho : = | \{ v \in V \mid | \chi ( \delta ( v ) ) | \geq 2 \} | / | V |$ .
All experiments were performed on a machine with Intel Core i9-9900K CPU and 64GB of RAM. In our experiments, we used the original code of Crane et al. [47, 19] as the implementation of the previous algorithms. Since their code was written in Julia, we implemented our algorithms also in Julia to ensure a fair comparison. When running the original codes for the LP-rounding algorithms, we used Gurobi-12.0 as the LP solver. Gurobi was the solver of choice in previous work [47, 19, 48, 4], and it is widely recognized for its excellent speed [41, 42].
Our experiments focus on two aspects of the algorithms’ performance: solution quality and running time. To compare solution quality, we will use relative error estimate, a normalized, estimated error of the algorithm’s output cost (or quality) compared to the optimum. Since the problems are NP-hard, it is hard to compute the exact error compared to the optimum; as such, Crane et al. [19] used the optimal solution to their LP relaxation in lieu of the true optimum, giving an overestimate of the error. We followed this approach, but we used our LP relaxation instead since we can prove that our relaxation always yields a better estimate of the true optimum. To normalize the estimated error, we divide it by the estimated optimum: that is, the relative error estimate is defined as $( A - L ) / L$ , where $A$ denotes the algorithm’s output cost and $L$ is the LP optimum.3
Table 2: Average running times of each dataset (in seconds): Local ECC. Values in parentheses are averages excluding trivial instances.
Crane et al.’s experiment [19] used $b _ { \mathsf { l o c a l } } \in \{ 1 , 2 , 3 , 4 , 5 , 8 , 1 6 , 3 2 \}$ for Local ECC, $b _ { \mathsf { r o b u s t } } / | V | \in \{ 0 , . 0 1 , . 0 5$ , . $\cdot 1 , . 1 5 , . 2 , . 2 5 \}$ for Robust ECC, and $b _ { \mathbf { g } | \mathbf { o b a } | } / | V | \in \{ 0 , . 5 , 1 , 1 . 5 , 2 , 2 . 5 , 3 , 3 . 5 , 4 \}$ for Global ECC. While these choices were carefully made so that we can avoid trivial instances, we decided to extend their choice for Global ECC. To explain what trivial instances are, suppose that $b _ { \mathrm { l o c a l } }$ is greater than the maximum color-degree $\Delta _ { \chi }$ in an instance of Local ECC. The problem then becomes trivial, since the local budget allows assigning each vertex all the colors of its incident edges. We call an instance of Local ECC trivial if $b _ { \mathsf { l o c a l } } \geq \Delta _ { \chi }$ ; similarly, Robust ECC instances are trivial if $b _ { \mathsf { r o b u s t } } \geq \rho | V |$ , and Global ECC instances are trivial if $b _ { \mathbf { g } \mathbf { l o b a l } } \geq | V | ( { \bar { d } } _ { \chi } - 1 )$ . For Local ECC and Robust ECC, Crane et al.’s choice of budgets ensure that most instances are nontrivial: each data set has 0, $^ { 1 }$ , or at most 2 trivial instances, possibly with the exception of at most one dataset. However, for Global ECC, only 44 instances out of 78 in the original benchmark are nontrivial, so we decided to additionally test $b _ { \mathbf { g } | \mathbf { o b a l } } / | V | \in \{ . 1 , . 2 , . 3 , . 4 \}$ . As a result, we tested thirteen different budgets in total for each dataset for Global ECC.
# 4.2 Local ECC
We measured the solution quality and running time of the proposed algorithm in comparison with the greedy combinatorial algorithm and the LP-rounding algorithm of Crane et al. [19].
Figure 1(a) depicts the running times, and Table 2 lists their average for each dataset. Figure 1(a) shows that our proposed algorithm was the fastest in most instances. It is not surprising that our algorithm, with the overall average running time of 0.121sec, was much faster than the LP-rounding algorithm whose overall average running time was 146.470sec, since our algorithm is combinatorial. This gap was no smaller even when we consider only nontrivial instances: the overall average running times were 0.142sec (proposed) and 180.367sec (LP-rounding). Remarkable was that the proposed algorithm was faster than the greedy algorithm, too. In fact, on average, it was more than twice as fast as the greedy algorithm in most datasets except for Brain. Such gap in the running times became more outstanding in larger datasets: for Trivago, our proposed algorithm was 11 times faster than the greedy algorithm and 2,100 times faster than the LP-rounding algorithm.
Figure 1(b) shows the relative error estimates of the algorithms’ outputs. We note that, except for Brain and MAG-10, the relative error estimate of our algorithm (and of the greedy algorithm) tends to increase as $b _ { \mathrm { l o c a l } }$ increases, and then at some point starts decreasing. This appears to be the result of the fact that the problem becomes more complex as $b _ { \mathrm { l o c a l } }$ initially increases, but when $b _ { \mathrm { l o c a l } }$ becomes too large, the problem becomes easy again. It can be seen from Figure 1(b) that our proposed algorithm outperformed the greedy algorithm in all cases. The overall average relative error estimate of our proposed algorithm was 0.141, which is less than half of the greedy algorithm’s average of 0.297. The LP-rounding algorithm output near-optimal solutions in every case.
Figure 1: (a) Running times (in seconds) and (b) relative error estimates of the Local ECC algorithms. Empty square markers denote trivial instances.
Overall, these experimental results demonstrate that our algorithmic framework is scalable, and produces solutions of good quality. As was noted by Veldt [48] and observed in this section, LP-rounding approach does not scale well due to its time consumption, even though it produces near-optimal solutions when it is given sufficient amount of time. Compared to the greedy combinatorial algorithm, our proposed algorithm output better solutions in smaller amount of time in most cases. This suggests that the proposed algorithm can provide improvement upon the greedy algorithm.
# 4.3 Robust ECC and Global ECC
Since the proposed algorithms for Robust ECC and Global ECC are similar, we present the experimental esults of both problems together in this section, starting with Robust ECC.
We measured the performance of our proposed algorithm in addition to the greedy combinatorial algorithm and the LP-rouding algorithm of Crane et al. [19]. However, as their LP-rounding algorithm is a bicriteria approximation algorithm that possibly violates the budget $b _ { \mathrm { r o b u s t } }$ , we cannot directly compare their solution quality with the proposed algorithm. In fact, the LP-rounding algorithm turned out to output “superoptimal” solutions violating $b _ { \mathrm { r o b u s t } }$ in most cases of the experiment. The bicriteria approximation ratio was chosen as $( 6 , 3 )$ , which is the same choice as in Crane et al.’s experiment [19].4
Comparing the average running times of each dataset reveals that the proposed algorithm ran much faster than the LP-rounding algorithm for most datasets, except for DAWN. The proposed algorithm was slower than the greedy algorithm for all datasets; however, it tended to produce solutions of much better quality than the greedy algorithm. The relative error estimate of the proposed algorithm was strictly better than that of the greedy algorithm in all nontrivial instances; the overall average relative error estimate of the proposed algorithm was 0.042, six times better than the greedy algorithm’s average of 0.272. We also note that the relative error estimate of our algorithm stayed relatively even regardless of the budget, while that of the greedy algorithm fluctuated as $b _ { \mathrm { r o b u s t } }$ changed in some datasets, such as MAG-10 and Trivago. Due to space constraints, a detailed table and a figure presenting the experimental results have been deferred to Appendix F.
For Global ECC, the bicriteria approximation ratio of the LP-rounding algorithm was chosen as $( 2 b _ { \tt g l o b a l } + 5 , 2$ ), which again is the same choice as in Crane et al.’s experiment. For Global ECC, the bicriteria approximation algorithm did not violate the budget for any instances of the benchmark. This may be due to the fact that their LP relaxation for Global ECC has a bounded integrality gap, unlike their LP for Robust ECC.5
The experimental results for Global ECC exhibited similar trends to those for Robust ECC. The relative error estimate of the proposed algorithm was strictly better than that of the greedy algorithm in all nontrivial instances. The average relative error estimate on nontrivial instances was 0.039 for the proposed algorithm, while that of the greedy algorithm was 0.912—more than 23 times higher. We also note that the relative error estimate of the greedy algorithm rapidly increased as $b _ { \mathsf { g l o b a l } }$ increased. While the proposed algorithm was on average slower than the greedy algorithm for all datasets, it was much faster than the LP-rounding algorithm in all datasets except for DAWN. A detailed table and a figure presenting the experimental results have been again deferred to Appendix F due to the space constraints.
The above results together indicate that our proposed algorithms for Robust ECC and Global ECC are likely to be preferable when a high-quality solution is desired possibly at the expense of a small increase in computation time. | Clustering is a fundamental task in both machine learning and data mining.
Among various methods, edge-colored clustering (ECC) has emerged as a useful
approach for handling categorical data. Given a hypergraph with (hyper)edges
labeled by colors, ECC aims to assign vertex colors to minimize the number of
edges where the vertex color differs from the edge's color. However,
traditional ECC has inherent limitations, as it enforces a nonoverlapping and
exhaustive clustering. To tackle these limitations, three versions of ECC have
been studied: Local ECC and Global ECC, which allow overlapping clusters, and
Robust ECC, which accounts for vertex outliers. For these problems, both linear
programming (LP) rounding algorithms and greedy combinatorial algorithms have
been proposed. While these LP-rounding algorithms provide high-quality
solutions, they demand substantial computation time; the greedy algorithms, on
the other hand, run very fast but often compromise solution quality. In this
paper, we present an algorithmic framework that combines the strengths of LP
with the computational efficiency of combinatorial algorithms. Both
experimental and theoretical analyses show that our algorithms efficiently
produce high-quality solutions for all three problems: Local, Global, and
Robust ECC. We complement our algorithmic contributions with
complexity-theoretic inapproximability results and integrality gap bounds,
which suggest that significant theoretical improvements are unlikely. Our
results also answer two open questions previously raised in the literature. | [
"cs.LG",
"cs.DB",
"cs.DS"
] |
# 1 Introduction
Language models (LMs) perform well on coding benchmarks like HumanEval [4] or LiveCodeBench [13] but struggle with real-world software engineering (SWE) tasks [15]. Unlike standardized coding problems, real issues—such as GitHub issues [15]—are often under-specified and require reasoning across multiple files and documentation. Even large models like Claude reach only around $60 \%$ accuracy on SWE-bench [15], despite using carefully engineered prompting pipelines [33]. Smaller models (under 100B parameters) perform significantly worse, typically scoring below $10 \%$ in zeroshot settings and plateauing around $30 \%$ after supervised fine-tuning (SFT) [34, 22] on GitHub issue datasets. Improving the performance of these models remains a key challenge for practical deployment, where repeatedly querying large models is often too costly or inefficient.
Recent and concurrent works to improve the performance of small LMs on SWE tasks have mainly focused on expanding SFT datasets—either through expert annotation or distillation from larger models [36, 34, 22]. These approaches show that performance improves as the quality and quantity of training data increase. However, collecting such data is both costly and time-consuming.
An alternative is test-time scaling, which improves performance by generating multiple outputs at inference and selecting the best one using a scoring function, such as a reward model [5, 17]. While widely applied in math and logical reasoning [9, 28], test-time scaling remains underexplored in SWE. Yet it shows strong potential: prior works [22, 3] demonstrate that small models can generate correct solutions when sampled many times. Specifically, their $\mathtt { p a s s @ } N$ , the probability that at least one of $N$ samples is correct, is close to the pass@1 performance of larger models. This indicates that small models can produce correct solutions; the challenge lies in efficiently identifying them.
Test-time scaling assumes that among many sampled outputs, at least one will be correct. However, when correct solutions are rare, these methods often require a large number of samples to succeed. This is particularly costly in SWE tasks, where generating each sample is slow due to long code contexts, and scoring is expensive when unit tests execution is needed [33]. Recent work [31] uses reinforcement learning (RL) [31] to enhance the reasoning capabilities of LMs for improved output quality but still requires hundreds of code edits (i.e., patch samples) per issue. Also, Pan et al. [22] depends on slow interactions with the runtime environment in agentic workflows. This motivates the need for sample-efficient test-time scaling methods that can identify correct solutions with fewer samples.
In this paper, we propose Evolutionary Test-Time Scaling (EvoScale), a sample-efficient method for improving test-time performance on SWE tasks. Existing test-time scaling methods often require an excessive number of samples because model outputs are highly dispersed—correct solutions exist but are rare, as shown in Figure 1. EvoScale mitigates this by progressively steering generation toward higher-scoring regions, reducing the number of samples needed to find correct outputs. Inspired by evolutionary algorithms [25, 32, 8, 23], EvoScale iteratively refines candidate patches through selection and mutation. Instead of consuming the sample budget in a single pass, EvoScale amortizes it over multiple iterations: the model generates a batch of outputs, a scoring function selects the top ones, and the next batch is generated by conditioning on these—effectively mutating prior outputs. Early iterations focuses on exploration; later ones focus on exploitation.
Figure 1: Reward score distribution of outputs from a SFT model, with high-scoring outputs concentrated in the long tail.
Although EvoScale improves sample efficiency, the selection step still incurs overhead: like standard evolutionary algorithms [32], it generates more outputs than needed and filters only the high-scoring ones, increasing sampling and computation costs. To eliminate this, we use RL to internalize the reward model’s guidance into the model itself, enabling it to self-evolve—refining its own outputs without external reward models at inference. We formulate this as a potential-based reward maximization problem [21], where the model learns to improve output scores across iterations based on score differences. This avoids discarding low-scoring outputs and reduces sample usage per iteration. Our theoretical analysis shows that this RL objective ensures monotonic score improvement across iterations. We evaluate the proposed EvoScale method on SWE-Bench-Verified [15], and summarize our key contributions as follows:
• A new perspective of formulating test-time scaling as an evolutionary process, improving sample efficiency for software engineering tasks.
• A novel RL training approach that enables self-evolution, eliminating the need for external reward models or verifiers at inference time.
• Satori-SWE-32B with EvoScale achieves performance comparable to models exceeding 100B parameters, while requiring only a small number of samples.
# 2 Related Work
Dataset Curation for SWE. Prior works [19, 22] and concurrent efforts [36, 14] use proprietary LLMs (e.g., Claude, GPT-4) as autonomous agents to collect SFT data by recording step-by-step interactions in sandboxed runtime environments. While this automates the data collection process for agent-style training [35], it involves substantial engineering overhead (e.g., Docker setup, sandboxing) and high inference costs. In contrast, Xie et al. [34] uses a pipeline-based framework [33], collecting real pull-request–issue pairs and prompting GPT-4 to generate CoT traces and ground-truth patches without runtime interaction. Though easier to collect, this data requires careful noise filtering. Our approach instead improves small models’ performance by scaling the computation at test time.
Test-time scaling for SWE. Xia et al. [33] showed that sampling multiple patches and selecting the best one based on unit test results in sandboxed environments improves performance. Unit tests have since been widely adopted in SWE tasks [31, 6, 14, 3]. Other works [22, 14, 20] train verifiers or reward models to score and select patches. To reduce the cost of interacting with runtime environments in agentic frameworks [35], some methods [20, 2] integrate tree search, pruning unpromising interaction paths early. While prior works improve patch ranking or interaction efficiency, our focus is on reducing the number of samples needed for effective test-time scaling.
RL for SWE. Pan et al. [22] used a basic RL approach for SWE tasks, applying rejection sampling to fine-tune models on their own successful trajectories. Wei et al. [31] later used policy gradient RL [24], with rewards based on string similarity to ground truth patches, showing gains over SFT. In contrast, our method trains the model to iteratively refine its past outputs, improving scores over time. We also use a learned reward model that classifies patches as correct or incorrect, which outperforms string similarity as shown in Appendix A.
# 3 Preliminaries
Figure 2: Pipeline for SWE Tasks. Given a GitHub issue, the retriever identifies the code files most relevant to the issue. The code editor then generates a code patch to resolve it.
Software engineering (SWE) tasks. We study the problem of using LMs to resolve real-world GitHub issues, where each issue consists of a textual description and a corresponding code repository. Since issues are not self-contained, solving them requires identifying and modifying relevant parts of the codebase. There are two main paradigms for solving SWE tasks with LMs: agentic [35] and pipeline-based [33, 31]. Agentic methods allow the model to interact with the runtime environment, such as browsing files, running shell commands, and editing code through tool use. While flexible, these approaches are computationally intensive and rely on long-context reasoning, making them less practical for small models. In contrast, pipeline-based methods decompose the task into subtasks, typically retrieval and editing, and solve each without runtime interaction, which is more computationally efficient and suited for small models. Retrieval refers to identifying the files or functions relevant to the issue, while editing involves generating the code changes needed to resolve it.
Formally, given an issue description $x$ , the goal is to produce a code edit (i.e., patch) $y$ that fixes the bug or implements the requested change. A retrieval model selects a subset code context $C ( x ) \subseteq { \mathcal { C } }$ from the full codebase $\mathcal { C }$ , and an editing model $\pi$ generates the patch $y = \pi ( x , C ( x ) )$ that modifies the code context $C ( x )$ . While retrieval has reached around $70 \%$ accuracy in prior work [34, 33], editing remains the main bottleneck. This work focuses on improving editing performance in the pipeline-based setting, using off-the-shelf localization methods in experiments. In this setup, the dominant cost comes from sampling and scoring outputs from the editing model at test time.
Test-time scaling [9, 28] improves model performance during inference without training. It typically involves sampling multiple outputs and selecting the best one using a scoring function (e.g., a reward model) [5, 17]. Specifically, the model generates outputs $y _ { 1 } , \ldots , y _ { N }$ , scores them with $R$ , and returns arg $\operatorname* { m a x } _ { y _ { i } } R ( y _ { i } )$ . This strategy is commonly used in domains like reasoning and mathematics.
# 4 Method: Evolutionary Test-Time Scaling
Figure 3: An Overview of Evolutionary Test-Time Scaling. Given a GitHub issue $x$ and its code context $C ( x )$ , the editor model $\pi$ first generates a batch of candidate patches $\mathcal { V } ^ { t }$ . The reward landscape is illustrated with contour lines, where brighter contours indicate a higher score of a scoring function $R$ (e.g., reward model or unit tests). A set of patches $\mathcal { E } ^ { t }$ is selected (e.g., via a scoring function $R$ ) and combined with $x$ and $C ( x )$ to form a conditional prompt (see Section 4.1), which guides the model to generate the next batch ${ \mathcal { V } } ^ { t + 1 } = \{ y _ { 1 } ^ { t + 1 } , \dots , { \bar { y } } _ { M } ^ { t + 1 } \}$ , increasingly concentrated around the optimum. The process continues under a fixed sampling budget until convergence, after which the final patch is submitted.
Goal: Sample-efficient test-time scaling. Test-time scaling improves performance by selecting the best output from multiple samples, but often requires a large number of generations to find correct solutions, especially in SWE tasks [31]. Our goal is to enable test-time scaling more sample-efficient, achieving stronger performance with fewer samples.
Why is test-time scaling sample-inefficient in SWE task? Correct solutions exist but are rarely sampled, as for hard issues, the model’s output distribution is not concentrated around high-scoring regions. Given a sample budget $N$ , typical test-time scaling methods in SWE [33, 31, 14, 22] draw $N$ outputs (patches) $\{ y _ { i } \} _ { i = 1 } ^ { N }$ from a frozen editor model $\pi$ , score them with a score function $R$ (e.g., reward model or unit tests), and selects the best one arg $\textstyle \operatorname* { m a x } _ { y _ { i } } R ( x , y _ { i } )$ . While high-scoring outputs near the mode could be sampled easily, the challenge of test-time scaling is to identify high-scoring outputs from the tail of $\pi ( \cdot \mid x , C ( x ) )$ . However, doing so typically requires a large sample size $N$ , making the process sample-inefficient.
Our approach: This motivates our method, Evolutionary Test-Time Scaling (EvoScale), which iteratively refines generation by using earlier outputs to guide subsequent sampling. We recast patch generation for a GitHub issue as an evolutionary process. The objective is to explore the patch space with a small number of samples, identify high-scoring patches, and iteratively refine the generated patches. As shown in Figure 3, initial samples are scattered and far from the correct solutions (denoted by stars), but over iterations, the distribution shifts closer to the correct solution. Through evolution, EvoScale more efficiently uncovers high-scoring outputs in long tails. We formulate the problem in Section 4.1 and detail the training procedure in Sections 4.2 and 4.3.
# 4.1 Formulation: Patch Generation as Evolution
We amortize the sampling budget over $T$ iterations by generating $M < N$ samples per iteration, rather than sampling all $N$ at once. The goal is to progressively improve sample quality across iterations. A key challenge lies in effectively using early samples to guide later ones. Typical evolutionary strategies select top-scoring candidates and mutate them—often by adding random noise—to steer future samples toward high-scoring regions. However, in SWE tasks, where patches are structured code edits, random perturbations often break syntax or semantics (e.g., undefined variables, etc).
Algorithm. Instead of using random noise for mutation, we use a language model (LM) as a mutation operator, leveraging its ability to produce syntactically and semantically valid patches. At each iteration $t$ , the LM generates a batch of patches $\mathcal { V } ^ { t + 1 } = \{ y _ { 1 } ^ { t + 1 } , \dots , y _ { M } ^ { t + 1 } \}$ conditioned on a set of prior patches $\mathcal { E } ^ { t }$ : $y ^ { t + 1 } \sim \pi ( \cdot \mid x , C ( x ) , \mathcal { E } ^ { t } )$ . We refer to $\mathcal { E } ^ { t }$ as conditioning examples consisting of patches generated at iteration $t$ . Following the selection step in evolutionary algorithms, $\mathcal { E } ^ { t }$ could be selected as the top- $K$ patches ranked by a scoring function $R$ (i.e., fitness function in evolutionary algorithms). Note that we find that our model after training can self-evolve without this selector (see Section 4.3 and Section 5.2), so this step is optional. The full procedure is detailed in Algorithm 1.
Question: Can a language model naturally perform mutation? Ideally, the mutation operator should generate patches that improve scores. However, as shown in Section 5.2, models trained with classical SFT—conditioned only on the issue and code context—struggle to refine existing patches. In the next section, we present our approach to overcome this limitation.
# 4.2 Small-scale Mutation Supervised Fine-Tuning
Classical supervised fine-tuning (SFT) fails at mutation because it never learns to condition on previous patches. To train the model for mutation, it must observe conditioning examples—patches from previous iterations—so it can learn to refine them. In EvoScale, conditioning examples are drawn from the model’s earlier outputs. We introduce a two-stage supervised fine-tuning (SFT) process: classical SFT followed by mutation SFT. The classical SFT model is first trained and then used to generate conditioning examples for training the mutation SFT model.
Stage 1 — Classical SFT. We fine-tune a base model on inputs consisting of the issue description $x$ and code context $C ( x )$ , with targets that include a chain-of-thought (CoT) trace and the ground-truth patch, jointly denoted as $y _ { \mathrm { S F T } } ^ { * }$ . Following prior work on dataset curation [36, 34], we use a teacher model $\mu$ (e.g., a larger LLM; see Section 5.1) to generate CoT traces. The training objective is:
$$
\operatorname* { m a x } _ { \pi \sim \mathrm { r r } } \ \mathbb { E } _ { x \sim \mathcal { D } , y _ { \mathrm { S F T } } ^ { * } \sim \mu ( \cdot | x , C ( x ) ) } \left[ \log \pi _ { \mathrm { S F T } } ( y _ { \mathrm { S F T } } ^ { * } \mid x , C ( x ) ) \right] .
$$
We refer to the resulting model $\pi _ { \mathrm { S F T } }$ as the classical SFT model.
Stage 2 — Mutation SFT. We fine-tune a second model, initialized from the same base model, using inputs $x$ , $C ( x )$ , and a set of conditioning examples $\mathcal { E }$ consisting of patches sampled from the classical SFT model $\pi _ { \mathrm { S F T } }$ . The target $y _ { \mathrm { M - S F T } } ^ { \ast }$ includes a CoT trace generated by the teacher model $\mu$ conditioned on $\mathcal { E }$ , along with the ground-truth patch. The training objective is:
$$
\operatorname* { m a x } _ { \pi \times \exp } \mathbb { E } _ { x \sim \mathcal { D } , \mathcal { E } \sim \pi _ { \mathrm { S F T } } ( \cdot \vert x , C ( x ) ) , y _ { \mathrm { M } , \mathrm { S F T } } ^ { * } \sim \mu ( \cdot \vert x , C ( x ) , \mathcal { E } ) } \left[ \log \pi _ { \mathrm { M } \cdot \mathrm { S F T } } ( y _ { \mathrm { M } \cdot \mathrm { S F T } } ^ { * } \mid x , C ( x ) , \mathcal { E } ) \right] .
$$
We refer to the resulting model $\pi _ { \mathrm { M - S F T } }$ as the mutation SFT model.
Training on small-scale datasets. EvoScale targets issues where one-shot generation often fails, but high-scoring patches can still be found through sufficient sampling. This means the model generates a mix of high- and low-scoring patches, so conditioning examples should reflect this diversity. If all examples were already high-scoring, test-time scaling would offer limited benefit. Training a classical SFT model on the full dataset, however, leads to memorization, reducing output diversity and making it difficult to construct diverse conditioning examples for mutation SFT. To preserve diversity, we collect $y _ { \mathrm { S F T } } ^ { * }$ and $y _ { \mathrm { M - S F T } } ^ { \ast }$ on disjoint subsets of the data. See Appendix D for details.
Limitation of SFT in self-evolving. The mutation SFT model $\pi _ { \mathrm { M - S F T } }$ is trained on conditioning examples from the classical SFT model $\pi _ { \mathrm { S F T } }$ , which include both low- and high-scoring patches. This raises a natural question: can $\pi _ { \mathrm { M - S F T } }$ learn to improve low-scoring patches on its own—i.e., selfevolve—without relying on reward models to select high-scoring examples? If so, we could eliminate the selection step (Line 3 in Algorithm 1), reducing scoring costs and sample usage. However, we find that SFT alone cannot enable self-evolution. Section 4.3 introduces a reinforcement learning approach that trains the model to self-evolve without scoring or filtering.
# Algorithm 1 Evolutionary Test-Time Scaling (EvoScale)
Require: Issue description $x$ , code context $C ( x )$ , editor model $\pi$ , number of iterations $T$ , samples per iteration $M$ , optional selection size $K$
1: Generate initial outputs $y ^ { 0 } : = \{ y _ { 1 } ^ { 0 } , \cdot \cdot \cdot , y _ { M } ^ { 0 } \} \sim \pi ( \cdot \mid x , C ( x ) )$
2: for $t = 1$ to $T$ do
3: (Optional) Select conditioning examples $\mathcal { E } ^ { t - 1 } : = \{ \bar { y } _ { 1 } ^ { t - 1 } , \cdot \cdot \cdot , \bar { y } _ { K } ^ { t - 1 } \} = \mathrm { S e l e c t } ( \mathcal { V } ^ { t - 1 } )$
4: Generate new outputs $\mathcal { V } ^ { t } : = \{ y _ { 1 } ^ { t } , \cdot \cdot \cdot , y _ { M } ^ { t } \} \sim \pi ( \cdot \mid x , C ( x ) , \mathcal { E } ^ { t - 1 } ) \}$
5: end for
# 4.3 Learning to Self-evolve via Large-scale Reinforcement Learning (RL)
To self-evolve, the model must generate patches that maximize a scoring function $R$ , given conditioning examples $\mathcal { E }$ from previous patches. This setup naturally aligns with the reinforcement learning (RL) [29], where a policy $\pi$ is optimized to maximize expected rewards (i.e., scores) over time. Since our goal is to maximize the reward at the final iteration $T$ , a naïve RL objective is:
$$
\begin{array}{c} \operatorname* { m a x } _ { \pi } \mathbb { E } _ { y ^ { t } \sim \pi ( \cdot | x , C ( x ) , \mathcal { E } ^ { t - 1 } ) } \Big [ \sum _ { t = 0 } ^ { T } r _ { t } \Big ] , \quad \mathrm { w h e r e } \quad r _ { t } = \Big \{ R ( x , y ^ { t } ) , \quad t = T \atop 0 , & { \mathrm { o t h e r w i s e } } \end{array}
$$
This objective focuses solely on maximizing the final reward. However, it presents two key challenges: (1) rewards are sparse, with feedback only at iteration $T$ , making learning inefficient [16, 26]; and (2) generating full $T$ -step trajectories is computationally expensive [28].
Potential shaping alleviates sparse rewards. We address the sparse reward challenge using potentialbased reward shaping [21], where the potential function is defined as $\Phi ( y ) = R ( x , \bar { y } )$ . The potential reward at step $t$ is:
$$
r _ { t } = \Phi ( y ^ { t } ) - \Phi ( y ^ { t - 1 } ) = R ( x , y ^ { t } ) - R ( x , y ^ { t - 1 } ) .
$$
Unlike the naïve formulation (Equation 3), this provides non-zero potential rewards at every step, mitigating the sparse reward challenge. The cumulative potential reward forms a telescoping sum: $\begin{array} { r } { \sum _ { t = 1 } ^ { T } r _ { t } = R ( x , y ^ { T } ) - R ( x , y ^ { 0 } ) } \end{array}$ t. aSli.n[c2e1 $y ^ { 0 }$ is fixed, maximizing this sum is equivalent to maximizing $\mathrm { N g }$
Monotonic improvement via local optimization. While optimizing Equation 3 achieves the optimal final reward, it is computationally expensive due to the need for full $T$ -step trajectories. As a more efficient alternative, we train the model to maximize the potential reward at each individual iteration $t$ (Equation 4), avoiding the cost of generating full $T$ -step trajectories. This local optimization reduces computation and runtime while ensuring monotonic reward improvement (see Section 5.2), which is sufficient for improving patch scores over iterations. We formally show this property in Section 4.4.
Implementation. Using the full dataset, we fine-tune the mutation SFT model $\pi _ { \mathrm { M - S F T } }$ to maximize the expected potential rewards (Equation 4) in score between a newly generated patch $y$ and a previous patch $y ^ { \prime }$ drawn from the conditioning examples $\mathcal { E }$ :
$$
\operatorname* { m a x } _ { \pi _ { \mathrm { R L } } } \mathbb { E } _ { y \sim \pi _ { \mathrm { R L } } ( \cdot | x , C ( x ) , \mathcal { E } ) , y ^ { \prime } \sim \mathcal { E } } \big [ R ( x , y ) - R ( x , y ^ { \prime } ) - \lambda F ( y ) \big ] .
$$
This objective encourages the model to generate patches that consistently improve upon previous ones. To ensure the outputs follow the required syntax, we incorporate a formatting penalty term $F$ into the reward function (see Appendix $\mathbf { D }$ for details). The conditioning patch $y ^ { \prime }$ is sampled from conditioning examples constructed using patches generated by earlier models, such as $\pi _ { \mathrm { S F T } }$ or intermediate checkpoints of $\pi _ { \mathrm { R L } }$ .
# 4.4 Theoretical Analysis
We analyze the RL objective in Equation 5, which leverages potential-based reward shaping [21], and show that the induced policy yields non-decreasing scores at each iteration.
Assumption 1 $\Phi$ -monotonicity). Let $\mathbb { Y }$ be the set of all patches and $\Phi \colon \mathbb { Y } \mathbb { R }$ a potential function. For every $y \in \mathbb { Y }$ , there exists a finite sequence $y = y _ { 0 } , y _ { 1 } , \cdot \cdot \cdot , y _ { k }$ such that $\Phi ( y _ { t + 1 } ) \ \geq$ $\Phi ( y _ { t } )$ for all $0 \leq t < k$ .
This ensures that from any initial patch one can reach higher-scoring patches without decreasing $\Phi$ . Definition 1 (Myopic Policy). Define the one-step action-value $Q _ { 0 } ( y , y ^ { \prime } ) = \Phi ( y ^ { \prime } ) - \Phi ( y ) , y , y ^ { \prime } \in$ Y. The myopic policy $\pi _ { 0 }$ selects, at each state $y$ , any successor that maximizes $Q _ { 0 }$ : $\pi _ { 0 } ( y ) \in$ arg $\mathrm { m a x } _ { y ^ { \prime } \in \mathbb { Y } } \big [ \Phi ( y ^ { \prime } ) - \Phi ( y ) \big ]$ .
Proposition 1 (Monotonic Improvement). Under Assumption $\jmath$ , any trajectory $\{ y ^ { t } \} _ { t \geq 0 }$ generated by the myopic policy $\pi _ { 0 }$ satisfies $\Phi ( y ^ { t } ) ~ \ge ~ \Phi ( y ^ { t - 1 } )$ and $r _ { t } ~ = ~ \Phi ( y ^ { t } ) - \Phi ( y ^ { t - 1 } ) ~ \bar { \geq } ~ 0 ~ \forall t \geq 1$ .
Proof. By definition of $\pi _ { 0 }$ , at each step $y ^ { t } \in \arg \operatorname* { m a x } _ { y ^ { \prime } } \left[ \Phi ( y ^ { \prime } ) - \Phi ( y ^ { t - 1 } ) \right]$ . Hence $\Phi ( y ^ { t } ) - \Phi ( y ^ { t - 1 } ) \geq$ 0, which immediately gives $\Phi ( y ^ { t } ) \geq \Phi ( y ^ { t - 1 } )$ and $r _ { t } \ge 0$ . In particular, training with the potential reward in Equation (5) guarantees that
$$
R ( x , y ^ { t } ) ~ = ~ \Phi ( y ^ { t } ) ~ \ge ~ \Phi ( y ^ { t - 1 } ) = R ( x , y ^ { t - 1 } ) ~ \forall t .
$$
Thus the learned policy produces non-decreasing scores over iterations.
# 5 Experiments
# 5.1 Settings
Implementation Details. We adopt a pipeline-based scaffold consisting of a retriever and a code editing model (see Appendix C). Both components are trained using small-scale SFT and large-scale RL. We use the Qwen2.5-Coder-32B-Instruct model [12] as our base model due to its strong code reasoning capabilities. Our training data is sourced from SWE-Fixer [34] and SWE-Gym [22]. After filtering and deduplication, we obtain a total of 29,404 high-quality instances. For RL training of the code editing model, we rely on a reward model trained on data collected from open source data3 with 1,889 unique instances. Additional experimental details are provided in Appendix D.
Evaluation and Metrics. We consider two metrics in our evaluation: (1) Greedy: zero-shot pass $\ @ 1$ accuracy, which measures the number of correctly solved instances using greedy generation with syntax retrial (i.e., random sampling up to five times until syntactically correct); (2) Best $\ @ N$ : accuracy of the optimal sample selected by the verifier among $N$ randomly generated samples. Greedy evaluates the model’s budget-efficient performance, while Best $@ N$ represents the model’s potential for test-time scaling.
Test-time Scaling Methods. We evaluate the following test-time scaling methods: (1) Reward Model Selection: selects the optimal patch sample with the highest reward model score; (2) Unit Tests Selection: selects the optimal patch sample based on whether it passes unit tests, including both regression and reproduction tests. If multiple samples pass, one is selected at random; (3) EvoScale: at each evolution iteration, the model generates $M$ patch samples and selects $K \leq M$ samples as the conditional prompt for the next generation. The selection of the $K$ samples is guided by the reward model. In our experiments, we set $M = 1 0$ , $K = 5$ , and perform up to four iterations of evolution.
# 5.2 Analysis
In this section, we present a comprehensive analysis of the proposed EvoScale approach. To simplify our analysis, we use ground-truth localization (retrieval) and focus on the code editing part. All reported results are averaged over three random trials. More results are provided in Appendix A.
(a) RM as selector: Classical SFT v.s. Mutation SFT
(b) Self-evolve: Mutation SFT v.s. RL
Figure 4: Evolutionary Capability of Different Stages of SFT and RL Models. (a) Reward Model selects the top-5 patch candidates from 10 samples from the previous iteration, and the model iteratively evolves by generating new 10 samples conditioned on the candidates. Performance of the top-1 sample selected by RM is reported. Without the additional mutation SFT training, the model fails to exhibit evolutionary behavior, even when scaling up the training set. (b) Without RM selection, the model only iteratively evolves by conditioning on 5 random samples from the last iteration. RL training improves the model’s initial performance and incentivizes the self-evolution capability, while the SFT model fails to self-evolve without guidance from RM.
Can LLMs Iteratively Evolve without Mutation SFT Training? First, we investigate whether the mutation SFT is necessary for LLMs to learn how to iteratively improve their generations. Specifically, we fine-tune base LLMs using either classical SFT (without conditional generation) or mutation SFT. As shown in Figure 4(a), models trained with classical SFT fail to naturally improve their outputs when conditioned on previous samples. In contrast, mutation SFT enables the model to iteratively improve under the guidance of a reward model. The performance of the mutation SFT model at later iterations can surpass the classical SFT model by scaling up the samples (e.g., $\mathtt { B e s t @ 4 0 } )$ ). Moreover, this iterative refinement capability can be learned effectively even with a small number of training data.
RL Enables Self-evolve Capability. While mutation SFT model demonstrates evolutionary behavior when guided by a reward model, we further examine whether it can self-evolve without such guidance. Specifically, instead of selecting the top- $K$ candidates to ensure generation quality, we allow the model to generate $M = K = 5$ random samples for the next iteration of conditional generation. However, as shown in Figure 4(b), the SFT model fails to learn self-evolution without reward model selection. Interestingly, RL training significantly improves the SFT model in two key aspects. First, RL substantially boosts the model’s greedy performance, surpassing even the $\mathrm { B e s t } @ N$ performance of 30 randomly generated samples from the SFT model. Second, we observe that the RL-trained model exhibits strong self-evolution capability: even when conditioned on its random outputs, the model can self-refine and improve performance across iterations without reward model guidance. We provide further analysis of the model’s behavior through demo examples in Appendix B.1.
Figure 5: Average Reward Score of Patch Samples at Each Evolution Iteration. Reward scores are normalized via a sigmoid function before average. The SFT model struggles to improve reward scores without the guidance of a reward model to select top- $K$ conditional patch samples, while the RL model consistently self-improves its reward score across iterations without external guidance, validating our theoretical results of monotonic improvement in Section 4.4
Figure 6: Comparison with Other Test-Time Scaling Methods. Reward model selection requires deploying an additional model at test time and can become unstable as the number of samples increases. Unit test selection is computationally expensive and performs poorly with a small sample size. In contrast, self-evolution demonstrates high sample efficiency and strong test-time scaling performance.
Do our SFT and RL Models Monotonically Improve Reward Scores over Iterations? We further analyze the evolutionary behavior of the SFT and RL models by measuring the average reward score of the patch samples generated at each iteration. As shown in Figure 5, although the SFT model learns to iteratively improve reward scores, it relies on the reward model to select high-quality conditioning examples to achieve significant improvements. In contrast, the RL model trained with potential-based reward, naturally learns to self-evolve without any external guidance. Its reward scores improve monotonically across iterations, aligns with our theoretical analysis in Section 4.4.
Evolutionary Test-time Scaling v.s. Other Test-time Scaling Methods. Next, we further compare evolutionary test-time scaling with other test-time scaling methods. Starting from the RL model, we first randomly sample $N = 5 , 1 0 , 1 5 , 2 0 , 2 5 , 5 0$ patch samples and let the reward model and unit tests select the best sample among the subsets. Also starting from the RL model, we let the model perform self-evolution with $K = 5$ samples per iteration, up to four iterations (20 samples in total). The test-time scaling results presented in Figure 6 demonstrate both efficiency and effectiveness of evolutionary test-time scaling. We include more details in Appendix A.
Table 1: Results on SWE-bench Verified. Satori-SWE-32B outperforms all small-scale models under greedy decoding, while achieving comparable performance with current SOTA SWE-RL with much fewer training data and test-time scaling samples.
# 5.3 Results in the Wild: SWE-bench Performance
We present the main results of our RL-trained model, Satori-SWE-32B, on the SWE-bench Verified benchmark [15] and compare its performance against both open-source and proprietary systems. We report results for both greedy decoding and Best $@ N$ metrics, using our own retrieval framework (see details of retrieval in Appendix C). For test-time scaling, we apply iterative self-evolution, allowing the RL model to generate $M = 2 5$ samples per iteration. We observe that the initial iterations produce more diverse candidate patches, while later iterations generate higher-quality, more refined patches. To balance diversity and refinement, we aggregate all generated samples across iterations into a combined pool of $N = 5 0$ candidates. As discussed in Section 5.2, different verifiers provide complementary strengths. We therefore combine both the reward model and unit tests to select the best patch from the candidate pool.
As shown in Table 1, Satori-SWE-32B achieves a greedy accuracy of 35.8, outperforming all existing small-scale models under greedy decoding. Additionally, it achieves a Best $\textcircled { a } 5 0$ score of 41.6, matching the performance of the current state-of-the-art Llama3-SWE-RL-70B [31], which requires Best $\textcircled { a } 5 0 0$ decoding—incurring over $1 0 \times$ higher sampling cost. It is also worth noting that agentbased methods incur even higher test-time computational cost, as each generation corresponds to a full rollout trajectory with multiple interactions. In contrast, Satori-SWE-32B achieves state-of-the-art performance with significantly lower inference cost and is trained on fewer than 30K open-source samples, compared to millions of proprietary data used to train Llama3-SWE-RL-70B.
# 6 Concluding Remarks
We propose Evolutionary Test-time Scaling (EvoScale), a sample-efficient inference-time method that enables small language models to approach the performance of $1 0 0 \mathrm { B } +$ parameter models using just 50 code patch samples—without requiring interaction trajectories with the runtime environment. EvoScale opens up a new direction for sample-efficient test-time scaling in real-world software engineering tasks: (1) Evolution improves sample efficiency. Our results show that evolutionary strategies, which iteratively refine generations, can drastically reduce the number of required samples. This contrasts with prior work that primarily focuses on improving verifiers (e.g., reward models, test cases); (2) RL enables self-evolution. We show that reinforcement learning (RL) can train models to refine their outputs without relying on external verifiers at inference. While our current method optimizes local reward differences, future work may explore optimizing cumulative potential rewards over entire trajectories. Compared to Snell et al. [28], who maintains all prior outputs in the prompt during revision, our method retains only the most recent output—making it more suitable for SWE tasks with long context windows; (3) Limitations and future work. This work focuses on a pipeline-based (agentless) setup. Extending EvoScale to agentic settings where models interact with code and runtime environments, remains an interesting future work.
# References
[1] Anthropic. Introducing claude 3.7 sonnet, 2025., 2025. URL https://www.anthropic.com/ claude/sonnet. 9 [2] Antonis Antoniades, Albert Örwall, Kexun Zhang, Yuxi Xie, Anirudh Goyal, and William Yang Wang. SWE-search: Enhancing software agents with monte carlo tree search and iterative refinement. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id $\ c =$ G7sIFXugTX. 3 [3] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling, 2024. URL https://arxiv.org/abs/2407.21787. 2, 3 [4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluating large language models trained on code. 2021. 1 [5] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021. 2, 4 [6] Ryan Ehrlich, Bradley Brown, Jordan Juravsky, Ronald Clark, Christopher Ré, and Azalia Mirhoseini. Codemonkeys: Scaling test-time compute for software engineering. arXiv preprint arXiv:2501.14723, 2025. 3 [7] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025. 9 [8] Nikolaus Hansen. The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772,
2016. 2 [9] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. 2, 3 [10] Jian Hu, Xibin Wu, Zilin Zhu, Xianyu, Weixun Wang, Dehao Zhang, and Yu Cao. Openrlhf: An easy-to-use, scalable and high-performance rlhf framework. arXiv preprint arXiv:2405.11143,
2024. 31 [11] Jian Hu, Jason Klein Liu, and Wei Shen. Reinforce $^ { + + }$ : An efficient rlhf algorithm with robustness to both prompt and reward models, 2025. URL https://arxiv.org/abs/2501.
03262. 32, 33, 34 [12] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, and Junyang Lin. Qwen2.5-coder technical report, 2024. URL https: //arxiv.org/abs/2409.12186. 7, 31
[13] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contamination free evaluation of large language models for code. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id $\ c =$ chfJJYC3iL.
[14] Naman Jain, Jaskirat Singh, Manish Shetty, Liang Zheng, Koushik Sen, and Ion Stoica. R2egym: Procedural environments and hybrid verifiers for scaling open-weights swe agents. arXiv preprint arXiv:2504.07164, 2025. 3, 4
[15] Carlos E Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik R Narasimhan. SWE-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations, 2024. URL https: //openreview.net/forum?id $\ c =$ VTF8yNQM66. 1, 2, 9
[16] Chi-Chang Lee, Zhang-Wei Hong, and Pulkit Agrawal. Going beyond heuristics by imposing policy improvement as a constraint. Advances in Neural Information Processing Systems, 37: 138032–138087, 2024. 6
[17] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2023. 2, 4
[18] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024. 9
[19] Yingwei Ma, Rongyu Cao, Yongchang Cao, Yue Zhang, Jue Chen, Yibo Liu, Yuchen Liu, Binhua Li, Fei Huang, and Yongbin Li. Lingma swe-gpt: An open development-process-centric language model for automated software improvement. arXiv preprint arXiv:2411.00622, 2024. 3, 9
[20] Yingwei Ma, Yongbin Li, Yihong Dong, Xue Jiang, Rongyu Cao, Jue Chen, Fei Huang, and Binhua Li. Thinking longer, not larger: Enhancing software engineering agents via scaling test-time compute, 2025. URL https://arxiv.org/abs/2503.23803. 3
[21] Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In Icml, volume 99, pages 278–287, 1999. 2, 6
[22] Jiayi Pan, Xingyao Wang, Graham Neubig, Navdeep Jaitly, Heng Ji, Alane Suhr, and Yizhe Zhang. Training software engineering agents and verifiers with SWE-gym. In ICLR 2025 Third Workshop on Deep Learning for Code, 2025. URL https://openreview.net/forum?id= lpFFpTbi9s. 1, 2, 3, 4, 7, 9, 14
[23] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017. 2
[24] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models, 2024. URL https://arxiv.org/abs/ 2402.03300. 3
[25] Maohao Shen, Soumya Ghosh, Prasanna Sattigeri, Subhro Das, Yuheng Bu, and Gregory Wornell. Reliable gradient-free and likelihood-free prompt tuning. In Findings of the Association for Computational Linguistics: EACL 2023. Association for Computational Linguistics, 2023. URL https://aclanthology.org/2023.findings-eacl.183/. 2
[26] Maohao Shen, Guangtao Zeng, Zhenting Qi, Zhang-Wei Hong, Zhenfang Chen, Wei Lu, Gregory Wornell, Subhro Das, David Cox, and Chuang Gan. Satori: Reinforcement learning with Chain-of-Action-Thought enhances llm reasoning via autoregressive search. arXiv preprint arXiv:2502.02508, 2025. 6
[27] Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient RLHF framework. In Proceedings of the Twentieth European Conference on Computer Systems. ACM, 2025. 31
[28] Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM testtime compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations, 2025. URL https: //openreview.net/forum?id $\ c =$ 4FWAwZtd2n. 2, 3, 6, 9
[29] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. 2018. 6
[30] Xingyao Wang, Boxuan Li, Yufan Song, Frank F. Xu, Xiangru Tang, Mingchen Zhuge, Jiayi Pan, Yueqi Song, Bowen Li, Jaskirat Singh, Hoang H. Tran, Fuqiang Li, Ren Ma, Mingzhang Zheng, Bill Qian, Yanjun Shao, Niklas Muennighoff, Yizhe Zhang, Binyuan Hui, Junyang Lin, Robert Brennan, Hao Peng, Heng Ji, and Graham Neubig. Openhands: An open platform for AI software developers as generalist agents. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=OJd3ayDDoF. 9
[31] Yuxiang Wei, Olivier Duchenne, Jade Copet, Quentin Carbonneaux, Lingming Zhang, Daniel Fried, Gabriel Synnaeve, Rishabh Singh, and Sida I Wang. Swe-rl: Advancing llm reasoning via reinforcement learning on open software evolution. arXiv preprint arXiv:2502.18449, 2025. 2, 3, 4, 9, 13, 33
[32] Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, and Jürgen Schmidhuber. Natural evolution strategies. The Journal of Machine Learning Research, 15(1):949–980, 2014. 2
[33] Chunqiu Steven Xia, Yinlin Deng, Soren Dunn, and Lingming Zhang. Agentless: Demystifying llm-based software engineering agents, 2024. URL https://arxiv.org/abs/2407.01489. 1, 2, 3, 4, 9, 14, 30, 31, 34
[34] Chengxing Xie, Bowen Li, Chang Gao, He Du, Wai Lam, Difan Zou, and Kai Chen. Swefixer: Training open-source llms for effective and efficient github issue resolution, 2025. URL https://arxiv.org/abs/2501.05040. 1, 2, 3, 5, 7, 9
[35] John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik R Narasimhan, and Ofir Press. SWE-agent: Agent-computer interfaces enable automated software engineering. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id $\mathbf { \tau } = \mathbf { \eta }$ mXpq6ut8J3. 3, 9
[36] John Yang, Kilian Leret, Carlos E. Jimenez, Alexander Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, and Diyi Yang. Swe-smith: Scaling data for software engineering agents, 2025. URL https://arxiv.org/abs/2504.21798. 2, 3, 5
[37] Yuntong Zhang, Haifeng Ruan, Zhiyu Fan, and Abhik Roychoudhury. Autocoderover: Autonomous program improvement. In Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 1592–1604, 2024. 9
# Appendix
A Additional Experiments 13
B Demo Examples 15
B.1 Type 1: Prior patches are all wrong 15
B.2 Type 2: Prior patches are partially wrong 18
B.3 Type 3: Prior patches are all correct 23
C Scaffold of Satori-SWE 30
C.1 Retriever 30
C.2 Code Editing Model 30
C.3 Verifier 31
D Implementation Details 31
D.1 Dataset Collection 31
D.2 Training Pipeline and Hardware 31
D.3 Retrieval Model 32
D.4 Retrieval Reward Model 32
D.5 Code Editing Model 33
D.6 Code Editing Reward Model 34
D.7 Reproduction Test Generator 34
E Prompt Template 35
# A Additional Experiments
In this section, we provide additional analytical experiments of EvoScale and model training.
RL Reward Modeling: Reward Model is More Effective than String-matching. A reliable reward signal is the key to drive RL training. To better understand the impact of different components in reward modeling, we conduct an ablation study comparing three variants: using only the reward model score, using only string-matching rewards proposed in [31], and using both. As shown in Table 2, models trained with a single reward component show degraded greedy decoding performance compared to the model trained with the hybrid reward. In particular, the reward model plays a crucial role in boosting the performance, while the string-matching reward helps the model learn better syntactic structure. However, the results also suggest that naïve string-matching [31] alone may not serve as a reliable reward signal for SWE tasks.
Table 2: Ablation Study on Reward Modeling. The total number of instances is 500. Compared to the SFT model, RL using the RM reward significantly improves performance but introduces more syntax errors. In contrast, RL with a string-matching reward reduces syntax errors but fails to improve reasoning capability. A hybrid reward signal effectively balances both aspects, achieving superior performance.
EvoScale Prefers Higher Mutation Sampling Temperature. Mutation sampling plays a critical role in Evolutionary Test-Time Scaling. To investigate its impact, we vary the model’s sampling temperature across 0.7, 1.0, 1.2 and perform self-evolution over four iterations. As shown in Figure 7, higher temperatures demostrate better performance. Intuitively, a larger temperature increases the diversity of generated patch samples, providing richer information for the mutation operator to produce improved patches in subsequent iterations. In contrast, lower temperatures tend to result in repetitive patch samples and may lead the model to converge quickly to suboptimal solutions.
SFT $^ +$ Test-time Scaling v.s. RL $^ +$ Self-evolve. In Figure 6, we demonstrated the superior performance of Evolutionary Test-Time Scaling using our RL-trained model. To further investigate this result, we compare against other test-time scaling methods applied to a classical SFT model trained on the full dataset (30K instances), since the typical procedure in most existing SWE work [22, 33] trains a SFT model and applying verifiers (e.g., reward models or unit tests) for test-time scaling.
Figure 7: Impact of Mutation Sampling Temperature. Higher sampling temperatures in EvoScale encourage greater diversity among mutation samples, leading to more effective iterative improvements.
Figure 8: Classical $\mathbf { S F T + }$ Test-time Scaling v.s. Mutation RL $^ +$ Self-evolve. RL Model with self-evolve capability is more effective than classical SFT model using other test-time scaling methods.
However, as shown in Figure 8, this approach proves to be less effective: (1) With 50 samples, the SFT model’s Best $@ 5 0$ performance is still outperformed by the greedy decoding of the RL model, despite both being trained on the same dataset. (2) The SFT model is relatively sensitive to the choice of verifier. When using unit tests (including both reproduction and regression tests) as the verifier, increasing the number of samples results in only marginal performance gains. These observations support our hypothesis: while correct solutions do exist in the SFT model’s output distribution, they are rarely sampled due to its dispersed sampling distribution. In contrast, the RL model learns to self-refine the sampling distribution towards high-scoring region.
Table 3: Average Runtime per Instance for Different Test-Time Scaling Methods. Runtime is measured using a sample budget of 10. EvoScale achieves the highest efficiency, while unit test-based selection incurs over $6 \times$ higher runtime cost.
Runtime Comparison of Different Test-time Scaling Methods. To evaluate the efficiency of different test-time scaling methods, we measure the average runtime per instance using a sample budget of 10. For our proposed EvoScale approach, the runtime consists solely of iteratively prompting the RL model to generate 10 samples. Reward model selection incurs additional computational cost due to running the reward model to score each sample, and unit test selection requires executing each patch in a sandbox environment. Although unit test selection is effective when scaling to larger sample sizes (see Figure 6), it comes at a cost around $6 \times$ slower than EvoScale.
Would RL without Evolution Training still Work? We consider a simplified training setup for the code editing model, where the base model is trained using classical SFT followed by RL without incorporating mutation data or potential-based rewards. As shown in Figure 2, although this simplified RL approach can still improve the SFT model’s greedy performance, it fails to equip the model with iterative self-improvement ability. This finding demonstrates the importance of evolution training, particularly the use of potential-based rewards, in incentivizing the model to learn how to self-refine over multiple iterations.
Figure 9: RL with vs. without Self-Evolution Training. Removing evolution training during the RL stage results in a model that lacks iterative self-improvement capabilities.
# B Demo Examples
The core idea of EvoScale is to use a mix of correct and incorrect patches as context to generate a correct patch. To do this effectively, the model must demonstrate the following capabilities:
1. Generate correct patches even when all prior patches are incorrect—essential for making progress when initial generations fail.
2. Generate correct patches when most prior patches are incorrect—common in practice, requiring the model to identify and build on the few correct ones.
3. Preserve correctness when all prior patches are correct—ensuring the model doesn’t over-edit and degrade already correct solutions.
We present examples below to illustrate each of these capabilities.
# B.1 Type 1: Prior patches are all wrong
# Problem Statement 1
sum: min_count is not available for reduction with more than one dimensions
\*\*Is your feature request related to a problem? Please describe.\*\*
\`sum\` with \`min_count\` errors when passing more than one dim: \`python
import xarray as xr
da $\mathbf { \Psi } = \mathbf { \Psi } _ { \mathbf { X } \mathbf { T } }$ .DataArray([[1., 2, 3], [4, 5, 6]])
da.sum(["dim_0", "dim_1"], min_coun $\scriptstyle : = 1$ )
\*\*Describe the solution you'd like\*\*
The logic to calculate the number of valid elements is here:
https://github.com/pydata/xarray/blob/1be777fe725a85b8cc0f65a2bc41f4bc2ba18043/xarray/core/nanops.py#L35
I \*think\* this can be fixed by replacing
\`mask.shape[axis]\` with \`np.take(a.shape, axis).prod()
\*\*Additional context\*\*
Potentially relevant for #4351
# Conditional patch 1 (Wrong)
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..3e6c5db 100644 - a/xarray/core/nanops.py
+++ b/xarray/core/nanops.py
@@ -32,7 +32,16 @@ def _maybe_null_out(result, axis, mask, min_count $^ { = 1 }$ ): if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) < 0
$^ +$ if np.isscalar(axis):
$^ +$ # in the scalar case we can just use the shape of that axis
$^ +$ size $\mathbf { \sigma } = \mathbf { \sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ # in the case of multiple axis we need to take the product
$^ +$ # of all the axis sizes since we want the total number of
$^ +$ # elements we are reducing, not just the size of one axis
$^ +$ size $\mathbf { \tau } = \mathbf { \tau }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \tau } = \mathbf { \tau }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \tau } = \mathbf { \tau }$ result.astype(dtype)
# Conditional patch 2 (Wrong)
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..f533eed 100644 - a/xarray/core/nanops.py
$^ { + + + }$ b/xarray/core/nanops.py
@@ -32,7 $+ 3 2$ ,15 @@ def _maybe_null_out(result, axis, mask, min_count $_ { , = 1 }$ ): ) if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) $\mathit { \Theta } < \mathit { \Theta } 0$
$^ +$ # handle the case where we are reducing on multiple axis
$^ +$ # we need to consider the product of their sizes, not just any
$^ +$ # one of them.
$^ +$ if np.isscalar(axis):
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ size $\mathbf { \sigma } = \mathbf { \sigma }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \tau } = \mathbf { \tau }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \tau } = \mathbf { \tau }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \sigma } = \mathbf { \sigma }$ result.astype(dtype)
# Conditional patch 3 (Wrong)
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..f533eed 100644 - a/xarray/core/nanops.py
$^ { + + + }$ b/xarray/core/nanops.py
@@ -32,7 $^ { + 3 2 , 1 5 }$ @@ def _maybe_null_out(result, axis, mask, min_count $^ { = 1 }$ ): ) if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) $\mathit { \Theta } < \mathit { \Theta } 0$
$^ +$ # handle the case where we are reducing on multiple axis
$^ +$ # we need to consider the product of their sizes, not just any
$^ +$ # one of them.
$^ +$ if np.isscalar(axis):
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \tau } = \mathbf { \tau }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \Sigma } = \mathbf { \Sigma }$ result.astype(dtype)
# Conditional patch 4 (Wrong)
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..f533eed 100644 - a/xarray/core/nanops.py
$^ { + + + }$ b/xarray/core/nanops.py
@@ -32,7 +32,15 @@ def _maybe_null_out(result, axis, mask, min_count $_ { , = 1 }$ ): ) if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \sigma } = \mathbf { \sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) < 0
$^ +$ # handle the case where we are reducing on multiple axis
$^ +$ # we need to consider the product of their sizes, not just any
$^ +$ # one of them.
$^ +$ if np.isscalar(axis):
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \tau } = \mathbf { \tau }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \Sigma } = \mathbf { \Sigma }$ result.astype(dtype)
# Conditional patch 5 (Wrong)
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..f533eed 100644 a/xarray/core/nanops.py
$^ { + + + }$ b/xarray/core/nanops.py
@@ -32,7 $+ 3 2$ ,15 @@ def _maybe_null_out(result, axis, mask, min_count $^ { = 1 }$ ): ) if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) $< 0$
$^ +$ # handle the case where we are reducing on multiple axis
$^ +$ # we need to consider the product of their sizes, not just any
$^ +$ # one of them.
$^ +$ if np.isscalar(axis):
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ size $\mathbf { \tau } = \mathbf { \tau }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \Sigma } = \mathbf { \Sigma }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \tau } = \mathbf { \tau }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \tau } = \mathbf { \tau }$ result.astype(dtype)
# Thinking Process
1. Peer Patch Review:
- All five patches are essentially identical in their approach to solving the issue. They all modify $\hookrightarrow$ the same section of code in _maybe_null_out() to handle the case where axis is a list/tuple by $\hookrightarrow$ calculating the product of the sizes of the reduced dimensions instead of just using the size of $\hookrightarrow$ a single axis.
- The key insight is correct - when reducing over multiple dimensions, the total number of elements $\hookrightarrow$ being reduced is the product of the sizes of those dimensions, not just the size of one $\hookrightarrow$ dimension.
- All patches correctly identify that when axis is a scalar, we should use the single dimension size $\hookrightarrow$ as before, but when axis is a list/tuple, we should use the product of the sizes of the $\hookrightarrow$ specified dimensions.
- The solution is consistent with the issue description which suggests using np.take(a.shape, $\hookrightarrow$ axis).prod() to get the total number of elements being reduced.
2. Issue Analysis:
The problem occurs when using sum() with min_count on multiple dimensions because the current code $\hookrightarrow$ only checks the size of one dimension (the first in the axis list) rather than the total number $\hookrightarrow$ of elements being reduced.
- The intended behavior is that min_count should consider the total number of valid elements across $\hookrightarrow$ all reduced dimensions, not just the number of elements along a single dimension.
- The fix requires modifying the logic that calculates the number of valid elements when reducing $\hookrightarrow$ over multiple dimensions to use the product of the sizes of all reduced dimensions.
3. Solution Approach:
The key change needed is in the _maybe_null_out() function where it calculates null_mask for $\hookrightarrow$ multi-dimensional reductions.
- When axis is a list/tuple, we should calculate the total number of elements being reduced by
$\hookrightarrow$ taking the product of the sizes of the specified dimensions (using np.take(mask.shape,
$\hookrightarrow$ axis).prod()). - For single dimension reductions, we can keep the existing behavior of using just mask.shape[axis].
- The rest of the function's logic (checking min_count against the number of valid elements) can
$\hookrightarrow$ remain the same.
4. Code Localization: - The issue is in xarray/core/nanops.py in the _maybe_null_out() function. - The specific location is where it calculates null_mask for multi-dimensional reductions (when axis
$\hookrightarrow$ is not None and result has ndim). - The current code uses mask.shape[axis] which is incorrect for multi-dimensional reductions. We
$\hookrightarrow$ need to modify this to use the product of the sizes of all reduced dimensions when axis is a
$\hookrightarrow$ list/tuple.
# Evolved Patch
diff --git a/xarray/core/nanops.py b/xarray/core/nanops.py
index 74e25c7..c642ed6 100644 -- a/xarray/core/nanops.py
$^ { + + + }$ b/xarray/core/nanops.py
@@ -27,12 $^ { + 2 7 }$ ,22 @@ def _maybe_null_out(result, axis, mask, min_count=1): xarray version of pandas.core.nanops._maybe_null_out 11I if hasattr(axis, "__len__"): # if tuple or list raise ValueError( "min_count is not available for reduction with more than one dimensions." ) # we still need to handle the case where axis is a list/tuple
$^ +$ # by calculating the product of the sizes of the reduced
$^ +$ # dimensions rather than just using one dimension's size
$^ +$ pass if axis is not None and getattr(result, "ndim", False): null_mask $\mathbf { \sigma } = \mathbf { \sigma }$ (mask.shape[axis] - mask.sum(axis) - min_count) < 0
$^ +$ if np.isscalar(axis):
$^ +$ # in the scalar case we can just use the shape of that axis
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ mask.shape[axis]
$^ +$ else:
$^ +$ # in the case of multiple axis we need to take the product
$^ +$ # of all the axis sizes since we want the total number of
$^ +$ # elements we are reducing, not just the size of one axis
$^ +$ size $\mathbf { \Sigma } = \mathbf { \Sigma }$ np.take(mask.shape, axis).prod()
$^ +$
$^ +$ null_mask $\mathbf { \tau } = \mathbf { \tau }$ (size - mask.sum(axis) - min_count) < 0 if null_mask.any(): dtype, fill_value $\mathbf { \Sigma } = \mathbf { \Sigma }$ dtypes.maybe_promote(result.dtype) result $\mathbf { \tau } = \mathbf { \tau }$ result.astype(dtype)
# B.2 Type 2: Prior patches are partially wrong
# Problem Statement 1
Incorrect parenthesizing of Subs
Here is an example.
\`\`\`python
>>> from sympy import Subs
>>> from sympy.abc import x,y
>>> 3\*Subs $( - x + y$ , (x,),(1,))
LaTeX printing of this gives:
\`\`python
'3 \\left. - x + y \\right|_{\\substack{ $\scriptstyle \mathbf { x } = 1$ }}'
It would be better to be parenthesized to:
\`\`python '3 \\left. \\left(- x + y\\right) \\right|_{\\substack{ $\scriptstyle \mathbf { x } = 1$ }}'
# Conditional patch 1 (Wrong)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..8b39416 100644 - a/sympy/printing/latex.py
+++ b/sympy/printing/latex.py
@@ -698,7 $+ 6 9 8$ ,12 @@ class LatexPrinter(Printer): def _print_Subs(self, subs): expr, old, new $\mathbf { \sigma } = \mathbf { \sigma }$ subs.args latex_expr $\mathbf { \Psi } = \mathbf { \Psi }$ self._print(expr)
$^ +$ # get the expression in proper parenthesized form
$^ +$ # note: we need to use a lower precedence than MUL
$^ +$ # to get parens for things like 3 $* \mathrm { S u b s } \left( 3 + 2 , . . . \right) \ldots > 3 * \left( 3 + 2 \right) | \ldots . . .$
$^ +$ # the MUL prec level is the highest that always adds the
$^ +$ # needed parens
$^ +$ latex_expr $\mathbf { \tau } = \mathbf { \tau }$ self.parenthesize(expr, PRECEDENCE["Mul"] - 1) latex_old $\mathbf { \Sigma } = \mathbf { \Sigma }$ (self._print(e) for e in old) latex_new $\mathbf { \tau } = \mathbf { \tau }$ (self._print(e) for e in new) latex_subs $\mathbf { \Sigma } = \mathbf { \Sigma } _ { \mathbf { r } ^ { \prime } \setminus \setminus }$ '.join(
# Conditional patch 2 (Correct)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..2d262c4 100644 - a/sympy/printing/latex.py
$^ { + + + }$ b/sympy/printing/latex.py
@@ -698,7 +698,10 @@ class LatexPrinter(Printer): def _print_Subs(self, subs): expr, old, new $\mathbf { \tau } = \mathbf { \tau }$ subs.args latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._print(expr) # issue 12165: need to parenthesize the expression if it is
$^ +$ # an Add (or lower precedence) to get proper grouping
$^ +$ # when a coefficient multiplies a substituted expression latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self.parenthesize(expr, PRECEDENCE["Mul"]) latex_old $\mathbf { \tau } = \mathbf { \tau }$ (self._print(e) for e in old) latex_new $\mathbf { \Sigma } = \mathbf { \Sigma }$ (self._print(e) for e in new) latex_subs $\mathbf { \tau } = \mathbf { \tau }$ r'\\ '.join(
# Conditional patch 3 (Correct)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..b5b4285 100644 - a/sympy/printing/latex.py
$^ { + + + }$ b/sympy/printing/latex.py
@@ -698,7 $+ 6 9 8$ ,11 @@ class LatexPrinter(Printer): def _print_Subs(self, subs): expr, old, new $\mathbf { \sigma } = \mathbf { \sigma }$ subs.args latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._print(expr)
$^ +$ # always parenthesize the expr so Subs is properly grouped
$^ +$ # when multiplied, e.g. 3\*Subs(-x+y, (x,),(1,)) vs $3 * ( - x + y ) \mid _ { - { \bf x } = 1 }$
$^ +$ # we use Mul precedence since we want to group it at the same
$^ +$ # level as args to Mul (but not as high as Pow which wouldn't group)
$^ +$ latex_expr $\mathbf { \tau } = \mathbf { \tau }$ self.parenthesize(expr, PRECEDENCE["Mul"]) latex_old $\mathbf { \Sigma } = \mathbf { \Sigma }$ (self._print(e) for e in old) latex_new $\mathbf { \tau } = \mathbf { \tau }$ (self._print(e) for e in new) latex_subs $\mathrm { ~ \bf ~ \zeta ~ } = \mathrm { ~ \bf ~ r " \setminus \setminus ~ }$ '.join(
# Conditional patch 4 (Wrong)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..868ab44 100644 - a/sympy/printing/latex.py
$^ { + + + }$ b/sympy/printing/latex.py
@@ -698,7 +698,13 @@ class LatexPrinter(Printer):
def _print_Subs(self, subs): expr, old, new $\mathbf { \sigma } = \mathbf { \sigma }$ subs.args latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._print(expr)
$^ +$ # we want to always parenthesize the expression, otherwise
$^ +$ # something like $3 * \mathrm { { S u b s } ( - x + y }$ , (x,),(1,)) will have
$^ +$ # incorrect LaTeX output
$^ +$ # to see the issue, compare 3\*Subs $( - x + y$ , (x,),(1,)) with
$^ +$ # $3 * ( - x + y )$ .subs(x, 1) in IPython with init_printing()
$^ +$ # so we sub in a low precedence level to be safe
$^ +$ latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self.parenthesize(expr, 0) latex_old $\mathbf { \tau } = \mathbf { \tau }$ (self._print(e) for e in old) latex_new $\mathbf { \Sigma } = \mathbf { \Sigma }$ (self._print(e) for e in new) latex_subs $\mathbf { \tau } = \mathbf { \tau }$ r'\\ '.join(
# Conditional patch 5 (Wrong)
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..dab5b75 100644 -- a/sympy/printing/latex.py
$^ { + + + }$ b/sympy/printing/latex.py
@@ -698,7 $+ 6 9 8$ ,13 @@ class LatexPrinter(Printer): def _print_Subs(self, subs): expr, old, new $\mathbf { \sigma } = \mathbf { \sigma }$ subs.args latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._print(expr)
$^ +$
$^ +$ # issue 15210: always parenthesize the expression because
$^ +$ # otherwise the wrong precedence may give incorrect results
$^ +$ # For example 3\*Subs $( x + y$ , x, 2) would be $3 * \mathrm { x } + \mathrm { y } \vert \_ \mathrm { x } \mathrm { - } ( \mathrm { x } = 2 )$ rather
$^ +$ # than the correct $3 * ( x + y ) \mid \mathcal { \underline { { \mathbf { \sigma } } } } ( \mathbf { x } = 2 \}$
$^ +$ latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self.parenthesize(expr, 0)
$^ +$ latex_old $\mathbf { \sigma } = \mathbf { \sigma }$ (self._print(e) for e in old) latex_new $\mathbf { \sigma } = \mathbf { \sigma }$ (self._print(e) for e in new) latex_subs $\mathrm { ~ \bf ~ \zeta ~ } = \mathrm { ~ \bf ~ r " \setminus \setminus ~ }$ '.join(
# Thinking Process
1. Peer Patch Review:
# - Patch 1:
- Correctly identifies the need to use a lower precedence than MUL to force parentheses - The solution is reasonable since MUL precedence is high enough to always add needed parens - However, using PRECEDENCE["Mul"] - 1 might be too low since we want to group at the same level $\hookrightarrow$ as MUL args
# - Patch 2:
- Uses PRECEDENCE["Mul"] which is a reasonable choice since we want to group at the same level as
$\hookrightarrow$ MUL args
- Correctly identifies the need to parenthesize Add expressions when multiplied
- Simple and correct solution that addresses the core issue
# - Patch 3:
- Similar to patch 2 but with clearer explanation of why Mul precedence is appropriate - Also correctly identifies the need to always parenthesize the expression - Good solution that matches the intended behavior
# - Patch 4:
- Uses precedence 0 which is too low - this would always add parentheses even when not needed - While it would fix the issue, it would also introduce unnecessary parentheses in many cases - Not the optimal solution since it's too aggressive with parenthesizing
# - Patch 5:
- Similar to patch 4 but with a clearer explanation of the issue - Also uses precedence 0 which has the same problem as patch 4 - While the intent is correct, the solution is too broad in its application
2. Issue Analysis:
- The issue shows that when a Subs expression is multiplied by a coefficient (like 3\*Subs $( - x + y$ ,
$\begin{array} { r l } { \hookrightarrow } & { { } \left( \mathbf { x } , \right) , \left( 1 , \right) ) } \end{array}$ , the expression inside Subs isn't properly parenthesized in LaTeX output
- The current code just prints the expression directly without considering its precedence relative
$\hookrightarrow$ to multiplication The correct behavior should always parenthesize the Subs expression when it appears in a context
$\hookrightarrow$ where precedence matters (like multiplication)
# 3. Solution Approach:
- The key is to modify the _print_Subs method to always parenthesize the expression using the MUL
$\hookrightarrow$ → precedence level
- This ensures that when Subs appears in a multiplication context, it will be properly grouped with
$\hookrightarrow$ parentheses Using MUL precedence is appropriate because we want it to group at the same level as other MUL
$\hookrightarrow$ arguments but not higher than necessary
4. Code Localization:
- The issue is in sympy/printing/latex.py in the _print_Subs method - The current implementation directly prints the expression without any parenthesizing We need to modify it to use the parenthesize method with MUL precedence
# Evolved Patch
diff --git a/sympy/printing/latex.py b/sympy/printing/latex.py
index 675a3c4..6dd89a9 100644 - a/sympy/printing/latex.py
$^ { + + + }$ b/sympy/printing/latex.py
@@ -698,7 $+ 6 9 8$ ,14 @@ class LatexPrinter(Printer): def _print_Subs(self, subs): expr, old, new $\mathbf { \tau } = \mathbf { \tau }$ subs.args latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._print(expr)
$^ +$ # issue 15210: always parenthesize the expression because
$^ +$ # otherwise the wrong precedence may give incorrect results
$^ +$ # For example 3\*Subs $( x + y$ , $\texttt { x }$ , 2) would be $3 * x + y \mid - ( x = 2 )$ rather
$^ +$ # than the correct $3 * ( x + y ) \mid \mathcal { \underline { { \mathbf { \sigma } } } } ( \mathbf { x } = 2 \} )$
$^ +$ # Using MUL precedence ensures we group at the same level as
$^ +$ # other MUL args but not higher than necessary (we don $\therefore t$ want
$^ +$ # to group for Pow since Pow has higher precedence)
$^ +$ latex_expr $\mathbf { \Sigma } = \mathbf { \Sigma }$ self.parenthesize(expr, PRECEDENCE["Mul"]) latex_old $\mathbf { \Sigma } = \mathbf { \Sigma }$ (self._print(e) for e in old) latex_new $\mathbf { \tau } = \mathbf { \tau }$ (self._print(e) for e in new) latex_subs $\mathbf { \Sigma } = \mathbf { \Sigma } _ { \mathbf { r } ^ { \prime } \setminus \setminus }$ '.join(
# Problem Statement 2
Contains.as_set returns Contains \`\`py
$> > >$ Contains(x, Reals).as_set()
Contains(x, Reals)
This is wrong because Contains is not a set (it's a boolean). It results in failures in other places $\hookrightarrow$ because it doesn't have as_relational (since it isn't a set). For instance, from $\hookrightarrow$ https://github.com/sympy/sympy/pull/14965#discussion_r205281989
\`\`pytb >>> Piecewise((6, Contains(x, Reals)), (7, True)) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "./sympy/functions/elementary/piecewise.py", line 136, in __new__ $\textbf { r } =$ cls.eval(\*newargs) File "./sympy/functions/elementary/piecewise.py", line 185, in eval $\texttt { c } = \texttt { c }$ .as_set().as_relational(x) AttributeError: 'Contains' object has no attribute 'as_relational'
# Conditional patch 1 (Correct)
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..3fd780b 100644 - a/sympy/sets/contains.py
$^ { + + + }$ b/sympy/sets/contains.py
@@ -45,4 +45,6 @@ class Contains(BooleanFunction): def as_set(self): raise NotImplementedError() # The set being contained in is the only set we can return
+ # from a Contains object in any meaningful way return self.args[1]
# Conditional patch 2 (Correct)
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..4c089aa 100644 a/sympy/sets/contains.py
$^ { + + + }$ b/sympy/sets/contains.py
@@ -45,4 +45,5 @@ class Contains(BooleanFunction): def as_set(self): raise NotImplementedError() # The set that is being tested for contains return self.args[1]
# Conditional patch 3 (Correct)
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..985c88e 100644 - a/sympy/sets/contains.py
$^ { + + + }$ b/sympy/sets/contains.py
@@ -45,4 +45,6 @@ class Contains(BooleanFunction): def as_set(self): raise NotImplementedError()
$^ +$ # The set being contained in is the set representation
$^ +$ # of Contains objects
$^ +$ return self.args[1]
# Conditional patch 4 (Correct)
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..e732791 100644 a/sympy/sets/contains.py
$^ { + + + }$ b/sympy/sets/contains.py
@@ -45,4 $+ 4 5$ ,6 @@ class Contains(BooleanFunction): def as_set(self): raise NotImplementedError()
+ # Contains(x, S) means ${ \mathfrak { n } } _ { \mathtt { X } }$ is in $\boldsymbol { \mathsf { S } } " \boldsymbol { \mathsf { \Omega } }$ , so as_set returns S
+ return self.args[1]
# Conditional patch 5 (Wrong)
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..4980364 100644 a/sympy/sets/contains.py
+++ b/sympy/sets/contains.py
@@ -45,4 $+ 4 5$ ,22 @@ class Contains(BooleanFunction): def as_set(self):
$^ +$ # try to evaluate Contains(x, S)
$^ +$ try:
$^ +$ res $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._eval_evalf()
$^ +$ if res $\scriptstyle = = \ s$ .true:
$^ +$ return S.UniversalSet
$^ +$ elif res $\scriptstyle = =$ S.false:
$^ +$ return S.EmptySet
$^ +$ except (TypeError, AttributeError):
$^ +$ pass
$^ +$ # fallback raise NotImplementedError()
# Thinking Process
1. Peer Patch Review:
- Patch 1: This patch correctly identifies that the set being contained in (self.args[1]) is the $\hookrightarrow$ most meaningful set representation of a Contains object. The implementation is simple and $\hookrightarrow$ correct.
- Patch 2: Similar to Patch 1 but with slightly different wording in the comment. The functionality $\hookrightarrow$ is correct but the comment could be more precise about why this is the right approach.
- Patch 3: This patch adds a note about potential invalid sets, which isn't strictly necessary since $\hookrightarrow$ the issue is about the basic behavior. The core implementation is correct but the comment is $\hookrightarrow$ overly cautious. - Patch 4: This patch provides a clear explanation of why returning the set (self.args[1]) is the $\hookrightarrow$ correct behavior - because Contains(x, S) means $" \mathbf { x }$ is in $\mathbf { S } " \$ , so the set representation is just S. $\hookrightarrow$ This is a correct and well-explained solution.
Patch 5: This patch takes a different approach by trying to evaluate the Contains expression and $\hookrightarrow$ return either the universal or empty set based on the result. While creative, this doesn't match $\hookrightarrow$ the simple and correct solution of just returning the set being contained in. The approach is $\hookrightarrow$ overly complex for this simple issue.
2. Issue Analysis:
- The issue occurs when Contains.as_set() returns the Contains object itself instead of a proper $\hookrightarrow$ set. - This causes problems because other code expects a set with as_relational() method. - The correct behavior should be to return the set that $\textbf { x }$ is being tested for containment in $\hookrightarrow$ (self.args[1]), since that's the only meaningful set representation of a Contains object.
3. Solution Approach:
- The simplest and most correct solution is to modify as_set() to return self.args[1], which is $\hookrightarrow$ the set being tested for containment.
- This matches the mathematical interpretation that Contains(x, S) means ${ \mathfrak { n } } _ { \mathtt { X } }$ is in $\mathsf { s } " \hphantom { 0 0 0 0 0 }$ , so the set $\hookrightarrow$ representation is just S.
- No other evaluation or special cases are needed since the set being contained in is the only $\hookrightarrow$ meaningful set representation of a Contains object.
4. Code Localization:
- The issue is in sympy/sets/contains.py in the Contains class.
- The problematic method is as_set() which currently raises NotImplementedError.
- The fix is to modify this method to return self.args[1].
# Evolved Patch
diff --git a/sympy/sets/contains.py b/sympy/sets/contains.py
index 0b1e933..e732791 100644 - a/sympy/sets/contains.py
$^ { + + + }$ b/sympy/sets/contains.py
@@ -45,4 $+ 4 5$ ,6 @@ class Contains(BooleanFunction): def as_set(self): raise NotImplementedError()
+ # Contains(x, S) means $" \mathbf { x }$ is in $\mathtt { s u }$ , so as_set returns S
+ return self.args[1]
# B.3 Type 3: Prior patches are all correct
# Problem Statement 1
_pytest.capture.EncodedFile mode should not include \`b\` (binary)
<!-
Thanks for submitting an issue!
Here's a quick checklist for what to provide: -->
- [x] a detailed description of the bug or suggestion
Exception when youtube-dl logs to pytest captured output. Youtube-dl looks for $\because b ^ { - }$ in \`out.mode\` to
$\hookrightarrow$ decide whether to writes \`bytes\` or \`str\`. \`_pytest.capture.EncodedFile\` incorrectly advertises
$\hookrightarrow$ \`rb+\`, the mode of the underlying stream. Its \`write()\` method raises an exception when passed
$\hookrightarrow$ \`bytes\`.
(pytest-issue-ve3) 01:11:48:nlevitt@Internets-Air-2:/tmp\$ py.test test.py test session starts
platform darwin -- Python 3.7.3, pytest-4.5.0, py-1.8.0, pluggy-0.11.0
rootdir: /private/tmp
collected 1 item
test.py F
$\hookrightarrow$ [100%] FAILURES test_foo
def test_foo(): youtube_dl.YoutubeDL().extract_info('http://example.com/')
test.py:4:
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py:796: in extract_info ie_result $\mathbf { \Sigma } = \mathbf { \Sigma }$ ie.extract(url)
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/extractor/common.py:529: in extract ie_result $\mathbf { \Sigma } = \mathbf { \Sigma }$ self._real_extract(url)
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/extractor/generic.py:2245: in _real_extract self.to_screen('%s: Requesting header' % video_id)
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/extractor/common.py:913: in to_screen self._downloader.to_screen('[%s] %s' % (self.IE_NAME, msg))
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py:502: in to_screen return self.to_stdout(message, skip_eol, check_quiet=True)
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py:516: in to_stdout self._write_string(output, self._screen_file)
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/YoutubeDL.py:505: in _write_string write_string(s, out=out, encoding=self.params.get('encoding'))
pytest-issue-ve3/lib/python3.7/site-packages/youtube_dl/utils.py:1496: in write_string out.write(byt)
self $\mathbf { \Sigma } = \mathbf { \Sigma }$ <_pytest.capture.EncodedFile object at 0x10df124a8>, obj $\mathbf { \Sigma } = \mathbf { \Sigma }$ b'[generic] example: Requesting $\hookrightarrow$ header\n'
def write(self, obj): if isinstance(obj, six.text_type): obj $\mathbf { \Sigma } = \mathbf { \Sigma }$ obj.encode(self.encoding, "replace") elif _PY3: raise TypeError( "write() argument must be str, not {}".format(type(obj).__name__) ) E TypeError: write() argument must be str, not bytes
est-issue-ve3/lib/python3.7/site-packages/_pytest/capture.py:437: TypeError
[x] output of \`pip list\` from the virtual environment you are using
Package Version
atomicwrites 1.3.0
attrs 19.1.0
more-itertools 7.0.0
pip 19.1.1
pluggy 0.11.0
py 1.8.0
pytest 4.5.0
setuptools 41.0.1
six 1.12.0
wcwidth 0.1.7
wheel 0.33.4
youtube-dl 2019.5.11
- [x] pytest and operating system versions
This is pytest version 4.5.0, imported from /private/tmp/pytest-issue-ve3/lib/python3.7/site-packages/pytest.py
macOS 10.14.4 (18E226) [x] minimal example if possible
pip install pytest youtube-dl
py.test test.py
test.py:
import youtube_dl
def test_foo(): youtube_dl.YoutubeDL().extract_info('http://example.com/')
# Conditional patch 1 (Correct)
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..aa57c0d 100644 - a/src/_pytest/capture.py
+++ b/src/_pytest/capture.py
@@ -448,6 $+ 4 4 8$ ,17 @@ class EncodedFile(object): return repr(self.buffer) def __getattr__(self, name):
$^ +$ # when code inspects the mode attribute (like youtube-dl does)
$^ +$ # we need to hide the 'b' flag from it - we act as a text stream
$^ +$ # even though we write to a bytes buffer
$^ +$ if name $\scriptstyle = =$ "mode":
$^ +$ mode $\mathbf { \Sigma } = \mathbf { \Sigma }$ getattr(self.buffer, name, None)
$^ +$ if mode and "b" in mode:
$^ +$ # return a copy of the mode without 'b'
$^ +$ # this makes e.g. youtube-dl think this is a text stream
$^ +$ # and write str instead of bytes
$^ +$ return mode.replace("b", "") # fall back to regular attribute access return getattr(object.__getattribute__(self, "buffer"), name)
# Conditional patch 2 (Correct)
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..01e03c8 100644 - a/src/_pytest/capture.py
$^ { + + + }$ b/src/_pytest/capture.py
@@ -448,6 +448,20 @@ class EncodedFile(object): return repr(self.buffer) def __getattr__(self, name):
+ # youtube-dl specifically looks at out.mode to decide whether to
$^ +$ # write bytes or str. Since we're a text interface over a bytes
$^ +$ # interface, we should make it look like we're text.
$^ +$ # https://github.com/ytdl-org/youtube-dl/blob/2019.05.11/youtube_dl/utils.py#L1496
$^ +$ # See: https://github.com/pytest-dev/pytest/issues/5691
$^ +$ if name $\scriptstyle = =$ "mode":
$^ +$ mode $\mathbf { \tau } = \mathbf { \tau }$ getattr(self.buffer, name, None)
$^ +$ if mode and "b" in mode:
$^ +$ # remove the 'b' from the mode, but otherwise leave it
$^ +$ # unchanged. This makes youtube-dl think it can write text
$^ +$ # to us when in reality we'll encode it to bytes for the
$^ +$ # underlying buffer.
$^ +$ mode $\mathbf { \mu } = \mathbf { \mu } ^ { \mathfrak { n } \mathfrak { n } }$ .join(ch for ch in mode if ch $! =$ "b") return mode return getattr(object.__getattribute__(self, "buffer"), name)
# Conditional patch 3 (Correct)
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..14dfb3c 100644 - a/src/_pytest/capture.py
$^ { + + + }$ b/src/_pytest/capture.py
@@ -447,6 +447,21 @@ class EncodedFile(object): """Ensure that file.name is a string.""" return repr(self.buffer) @property
$^ +$ def mode(self):
$^ +$ """
$^ +$ Get the mode of the underlying buffer but remove 'b' from it.
$^ +$ This is needed because some code (like youtube-dl) checks
$^ +$ for 'b' in mode to decide whether to write bytes or str.
$^ +$ Since this is an encoded (text) file, we need to hide that
$^ +$ it's wrapping a binary stream.
$^ +$
+ $^ +$ $^ +$ $^ +$ $^ +$ +
mode $\mathbf { \Sigma } = \mathbf { \Sigma }$ getattr(self.buffer, "mode", None) if mode and "b" in mode: # remove 'b' but keep other flags like $" + "$ or ' mode $\mathbf { \Sigma } = \mathbf { \Sigma }$ "".join(c for c in mode if c $\ ! =$ "b") return mode def __getattr__(self, name): return getattr(object.__getattribute__(self, "buffer"), name)
# Conditional patch 4 (Correct)
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..df7fb36 100644 - a/src/_pytest/capture.py
+++ b/src/_pytest/capture.py
@@ -447,6 $+ 4 4 7$ ,22 @@ class EncodedFile(object): """Ensure that file.name is a string.""" return repr(self.buffer)
$^ +$ @property
$^ +$ def mode(self):
$^ +$ """Lie about our mode - remove 'b' if present.
$^ +$
$^ +$ Some libraries (like youtube-dl) inspect the mode to decide whether to
$^ +$ write bytes or str to the stream. Since we handle that in .write() and
$^ +$ .writelines() already, we should pretend to be a text stream even if our
$^ +$ underlying buffer is a byte stream.
$^ +$ 11 I1
$^ +$
$^ +$ # just in case the buffer doesn't even have a mode..
$^ +$ mode $\mathbf { \Sigma } = \mathbf { \Sigma }$ getattr(self.buffer, "mode", "")
$^ +$ # .. . and remove the 'b'
$^ +$ mode = mode.replace("b", "")
$^ +$ return mode
$^ +$ def __getattr__(self, name): return getattr(object.__getattribute__(self, "buffer"), name)
# Conditional patch 5 (Correct)
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..f94e9dc 100644
-- a/src/_pytest/capture.py
+++ b/src/_pytest/capture.py
@@ -447,6 $+ 4 4 7$ ,14 @@ class EncodedFile(object): """Ensure that file.name is a string.""" return repr(self.buffer)
$^ +$ @property
$^ +$ def mode(self):
$^ +$ # youtube-dl checks for 'b' in mode to decide whether to write bytes or str.
$^ +$ # since we are a text interface (we encode strs to bytes), we should not
$^ +$ # appear as a binary stream, regardless of the underlying buffer's mode.
$^ +$ # https://github.com/pytest-dev/pytest/issues/5295
$^ +$ return self.buffer.mode.replace("b", "")
+ def __getattr__(self, name): return getattr(object.__getattribute__(self, "buffer"), name)
# Thinking Process
1. Peer Patch Review:
# - Patch 1:
- Correctly identifies the need to modify the mode attribute when accessed - Uses __getattr__ to intercept the mode attribute and remove 'b' - Simple and effective solution that matches the issue requirements - Correctly maintains all other attribute behavior
# - Patch 2:
- Similar to patch 1 but uses a different string manipulation method - Also correctly intercepts the mode attribute and removes 'b' - Works just as well as patch 1 but with slightly different implementation
# - Patch 3:
- Takes a different approach by adding a mode property directly - This is a cleaner solution since it explicitly defines the behavior - Correctly handles the case where the buffer doesn't have a mode - Maintains all other attribute behavior through __getattr__
# - Patch 4:
- Similar to patch 3 but with more detailed docstring - Also adds the mode property to remove 'b' from the underlying mode - Correctly handles the case where the buffer doesn't have a mode - Maintains all other attribute behavior through __getattr__
# - Patch 5:
- Simplest solution that just adds a mode property - Directly returns the underlying mode with 'b' removed - Doesn't handle the case where the buffer might not have a mode attribute - Still solves the core issue but is less robust than patches 3-4
2. Issue Analysis:
The problem occurs when youtube-dl checks the mode of the captured output stream - It sees 'b' in the mode and tries to write bytes when the EncodedFile expects str - The solution needs to make the EncodedFile appear as a text stream (no 'b' in mode) - While still maintaining all other file-like behavior from the underlying buffer
3. Solution Approach:
The key is to intercept access to the 'mode' attribute of the underlying buffer - When 'mode' is accessed, we should return the mode string with 'b' removed All other attribute access should continue to work normally from the buffer - This matches exactly what youtube-dl expects while maintaining proper functional
4. Code Localization:
- The issue is in the EncodedFile class in src/_pytest/capture.py
- Need to modify the class to add a mode property that removes 'b' from the underlying buffer's mode
- All other behavior should remain unchanged
# Evolved Patch
diff --git a/src/_pytest/capture.py b/src/_pytest/capture.py
index 6b2d076..d3b3473 100644
-- a/src/_pytest/capture.py
$^ { + + + }$ b/src/_pytest/capture.py
@@ -447,6 $+ 4 4 7$ ,23 @@ class EncodedFile(object):
"""Ensure that file.name is a string."""
return repr(self.buffer)
$^ +$ @property
$^ +$ def mode(self):
$^ +$ """When code inspects our mode (like youtube-dl does), lie and $^ +$ remove the 'b' if present. This makes youtube-dl think we're a text $^ +$ stream and write str instead of bytes, which we can then encode to $^ +$ bytes for the underlying buffer.
$^ +$
$^ +$ Note: we don't need to handle the case where the buffer doesn't have $^ +$ a mode attribute because in that case, normal attribute access will $^ +$ raise AttributeError like it should.
$^ +$ 11
$^ +$ mode $\mathbf { \tau } = \mathbf { \tau }$ self.buffer.mode
$^ +$ if "b" in mode:
$^ +$ # remove the 'b' but otherwise preserve the mode
$^ +$ mode $\mathbf { \Sigma } = \mathbf { \Sigma }$ mode.replace("b", "")
$^ +$ return mode
$^ +$
def __getattr__(self, name):
return getattr(object.__getattribute__(self, "buffer"), name)
# Problem Statement 2
DateFormat.Y() is not zero-padded. Description
The Y specifier for django.utils.dateformat.DateFormat is supposed to always return a four-digit $\hookrightarrow$ year padded with zeros. This doesn't seem to be the case for year $\yen 1000$ .
# Conditional patch 1 (Correct)
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..823e893 100644 a/django/utils/dateformat.py
$^ { + + + }$ b/django/utils/dateformat.py
@@ -314,7 $+ 3 1 4$ ,9 @@ class DateFormat(TimeFormat): def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year
$^ +$ # We want to zero-pad the year (this is only useful for year $<$ 1000)
$^ +$ # so that we get 4 digits.
$^ +$ return '%04d' % self.data.year def z(self): """Day of the year, i.e. 1 to 366."""
# Conditional patch 2 (Correct)
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..15090a4 100644 - a/django/utils/dateformat.py
$^ { + + + }$ b/django/utils/dateformat.py
@@ -314,7 $+ 3 1 4$ ,9 @@ class DateFormat(TimeFormat): def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year
$^ +$ # We want to zero-pad the year, so can't just return the number -
$^ +$ # hence the "%04d" dance.
$^ +$ return '%04d' % self.data.year def z(self): """Day of the year, i.e. 1 to 366."""
# Conditional patch 3 (Correct)
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..abab54f 100644 a/django/utils/dateformat.py
+++ b/django/utils/dateformat.py
@@ -314,7 $+ 3 1 4$ ,8 @@ class DateFormat(TimeFormat): def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year # We want a four-digit year; 0-padding if necessary return '%04d' % self.data.year def z(self): """Day of the year, i.e. 1 to 366."""
# Conditional patch 4 (Correct)
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..ecf7449 100644 a/django/utils/dateformat.py
$^ { + + + }$ b/django/utils/dateformat.py
@@ -314,7 $+ 3 1 4$ ,11 @@ class DateFormat(TimeFormat):
def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year # We hardcode 4 here instead of using a settings value or # self.data.year.__len__() because we want to always return # a 4-digit year for consistency with PHP's Y behavior. # See also: https://bugs.php.net/bug.php?id=67077 return '%04d' % self.data.year
def z(self): """Day of the year, i.e. 1 to 366."""
# Conditional patch 5 (Correct)
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..acd0ae0 100644 - a/django/utils/dateformat.py
$^ { + + + }$ b/django/utils/dateformat.py
@@ -314,7 $+ 3 1 4$ ,9 @@ class DateFormat(TimeFormat):
def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year # We want to always return a 4-digit year, zero-padded if necessary # This ensures years $\mathit { \Theta } < \mathit { \Theta } 1 0 0 0$ are properly formatted return '%04d' % self.data.year
def z(self): """Day of the year, i.e. 1 to 366."""
# Thinking Process
1. Peer Patch Review:
# - Patch 1:
- Correctly identifies the need to zero-pad the year using '%04d' - The comment explains the issue well for years $\yen 1000$ - Simple and correct solution that matches the issue requirements
# - Patch 2:
- Similar to Patch 1 but with slightly different comment wording - Also correctly uses '%04d' to zero-pad the year - Correct solution that addresses the issue
# - Patch 3:
- Uses '%04d' like the other patches but with simpler comment - Correct solution that matches the issue requirements
# - Patch 4:
- Adds more detailed comment explaining the PHP compatibility
- While the solution is correct, the additional comment about PHP behavior isn't necessary for
$\hookrightarrow$ this Django-specific issue
- Still a correct solution to the issue
# - Patch 5:
- Simple and correct solution using '%04d' with clear comment - Correctly addresses the issue requirements
2. Issue Analysis:
- The issue states that DateFormat.Y() should always return a 4-digit year padded with zeros - Currently it returns the raw year number which doesn't zero-pad for years $\angle \mathrm { ~ ~ { ~ \frac ~ { ~ 1 ~ 0 ~ 0 ~ 0 ~ } ~ } ~ }$ - The fix requires modifying the Y() method to format the year as a 4-digit string with zero-padding - This matches the behavior described in the method's docstring which says $" 4$ digits"
3. Solution Approach:
The solution is to modify the Y() method to use string formatting with '%04d' - This will ensure the year is always returned as a 4-character string with leading zeros if needed - The change is simple and focused on just the Y() method where the issue occurs
4. Code Localization:
The issue is in django/utils/dateformat.py in the DateFormat class The problematic method is Y() which currently just returns self.data.year Need to modify it to return '%04d' % self.data.year instead
# Evolved Patch
diff --git a/django/utils/dateformat.py b/django/utils/dateformat.py
index 38cb1c1..d0e66c3 100644 - a/django/utils/dateformat.py
$^ { + + + }$ b/django/utils/dateformat.py
@@ -314,7 +314,9 @@ class DateFormat(TimeFormat): def Y(self): "Year, 4 digits; e.g. '1999'" return self.data.year # We want to zero-pad the year to ensure we always get
$^ +$ $\# 4$ digits, even for years $\mathit { \Theta } < \mathit { \Theta } 1 0 0 0$ .
$^ +$ return '%04d' % self.data.year
# C Scaffold of Satori-SWE
We adopt a pipeline-based scaffold consisting of two main components: a retriever and a code editing model. The retriever takes either the GitHub repository structure or the contents of individual code files as input and identifies the files most relevant to the given issue. Once retrieval is complete, the full content of the selected code files is passed to the code editing model.
The code editing model receives both the issue description and the retrieved code content as input and generates a patch to resolve the issue. Additionally, there is an optional verifier component, which can be used to select the best patch from a large pool of candidate samples. We describe each component in detail below.
# C.1 Retriever
Our retriever is entirely LLM-based and consists of two components: a retrieval model and a retrieval reward model.
Figure 10: Retrieval Pipeline. Given the repository’s file structure, the retrieval model first selects the top-5 candidate files. These candidates are then re-scored by the retrieval reward model based on file content, and the top-ranked (Top-1) file is returned as the final result.
Retrieval Model The first stage of our retriever uses a retrieval model to identify the top 5 most relevant files based on the repository structure and the GitHub issue description. We adopt the same format as Agentless [33] to represent the repository’s file structure. Given this representation and the issue description, the retrieval model performs a reasoning process and outputs five file paths from the repository. The model is trained using a combination of small-scale supervised fine-tuning (SFT) and large-scale reinforcement learning (RL), see Appendix D.3 for details.
Retrieval Reward Model The retrieval reward model is designed to refine retrieval results in a more fine-grained manner. After the initial top-5 files are retrieved by the retrieval model, the reward model evaluates each one by considering both the file’s code content and the issue description. It then outputs a relevance score for each file, and the file with the highest score is selected as the final target for code editing. The retrieval reward model is a classifier-style LLM trained with a binary classification objective, see Appendix D.4 for training details.
# C.2 Code Editing Model
The code editing model receives a prompt formed by concatenating the issue statement with the code content of the retrieved target file. It performs iterative sampling to enable self-evolution during generation.
In the first iteration, given the issue statement and code context, the model generates five diverse responses, each corresponding to a different patch candidate. These five patch candidates are then appended to the input as a conditional prompt for the next iteration of generation. This iterative process allows the model to progressively refine its outputs.
As discussed in Section 4, the code editing model is trained using a combination of small-scale SFT and large-scale RL. Additional training details are provided in Appendix D.5.
# C.3 Verifier
As discussed in Section 5.2, our code editing model demonstrates the ability to self-evolve by iteratively refining its own generations. While this process improves the quality of patch samples, incorporating external verifiers to select the optimal patch can further boost performance. For software engineering tasks, we consider two primary types of verifiers: an LLM-based reward model and unit tests.
Code Editing Reward Model The code editing reward model is designed to select the best patch from a pool of candidates. It takes as input the retrieved file’s code content, the issue description, and a patch candidate in git diff format. The model then outputs a score indicating the quality of the patch. This reward model is implemented as a classifier-based LLM trained with a binary classification objective (see Appendix D.6 for details).
Unit Tests Unit tests consist of two components: (1) Reproduction tests, which validate whether the original GitHub issue can be reproduced and resolved by the patch; (2) Regression tests, which check whether the patch preserves the existing functionality of the codebase. To construct the regression tests, we extract existing test files from the repository using the Agentless [33] pipeline. For the reproduction tests, we use a trained test generation model that takes the issue description as input and generates tests aimed at reproducing the issue. This reproduction test generator is trained using supervised fine-tuning (see Appendix D.7). For each instance, we sample 100 reproduction tests and retain 5 valid patches to serve as reproduction tests.
Hybrid Verifiers We combine multiple verifiers to select the most promising patch candidates. The selection process is as follows: (1) Regression tests are applied first. Any patch that fails is discarded; (2) Reproduction tests are then executed on the remaining patches. Candidates are ranked based on how many of the five tests they pass. (3) The top- $k$ $k = 2$ ) unique patches are retained per instance. (4) If no patch passes both regression and reproduction tests, we fall back to using all generated candidates without filtering. (5) Finally, among the remaining patches, the code editing reward model is used to select the candidate with the highest reward score for submission.
# D Implementation Details
# D.1 Dataset Collection
Our primary training data is sourced from SWE-Fixer and SWE-Gym. To ensure data quality, we apply a comprehensive filtering and deduplication process. Specifically, we discard instances that meet any of the following criteria: (1) extremely short or excessively long issue statements; (2) multimodal or non-text content (e.g., images, videos, LaTeX); (3) presence of unrelated external links; (4) inclusion of commit hashes; (5) patches requiring modifications to more than three files. After applying these filters, we obtain 29,404 high-quality training instances, which we use to train both the retrieval and code editing models.
# D.2 Training Pipeline and Hardware
Both the retrieval and code editing models are trained in two stages: (1) small-scale supervised fine-tuning (SFT) for warm-up, and (2) large-scale reinforcement learning (RL) for self-improvement. The SFT trajectories are generated via chain-of-thought prompting using Deepseek-V3-0324. We adopt the Qwen2.5-Coder-32B-Instruct model [12] as the base model for all components due to its strong code reasoning capabilities. We utilize OpenRLHF [10] as the training framework for SFT, and use VERL [27] as the training framework for RL. All training runs are conducted on NVIDIA H100 GPUs with $8 0 \mathrm { G B }$ of memory. For evaluation and model inference, we serve models using the sglang framework4, employing tensor parallelism with a parallel size of 8 on NVIDIA H100 GPUs.
# D.3 Retrieval Model
Dataset Construction To ensure cost-efficient synthetic data generation, we randomly select 1,470 instances $5 \%$ of the full dataset) as the small-scale SFT dataset for training the retrieval model.
To train the model to perform step-by-step reasoning and generate relevant file paths conditioned on both the issue statement and the repository structure, we require training data that includes both intermediate reasoning and final retrieved files. However, the raw data only provides the issue descriptions and the ground-truth retrieved files (extracted from the final patch), without any intermediate reasoning.
To address this, we use Deepseek-V3-0324 with a custom retrieval model prompt (see Appendix E) and apply rejection sampling to collect high-quality chain-of-thought (CoT) reasoning traces. Specifically, we check whether the model’s top-5 retrieved files include the ground-truth retrieval files. If so, we retain the response as part of our synthetic SFT data.
For each selected instance, we generate one response via greedy decoding and three additional responses using random sampling (temperature $= 0 . 7$ ), as long as they include the correct retrieval files. This results in four responses per instance.
For RL training, we use the remaining 27,598 instances $9 5 \%$ of the dataset), filtering out prompts whose total sequence length exceeds 16,384 tokens. The RL dataset consists of prompts (issue statement $+$ repo structure) as input and the corresponding ground-truth retrieval files as the final answer, without requiring intermediate reasoning.
Supervised Fine-Tuning We perform supervised fine-tuning (SFT) on the Qwen2.5-Coder-32BInstruct model using the synthetic SFT dataset described above. The prompt template used for training is identical to the one used to construct the synthetic data (see the retrieval model prompt in Appendix E). We train the model using a cosine learning rate scheduler with an initial learning rate of 1e-5. The model is fine-tuned for one epoch with a batch size of 128 and a maximum sequence length of 32,768 tokens.
Reinforcement Learning To further push the limit of the model’s retrieval performance, we apply a large-scale reinforcement learning (RL) stage after the SFT stage. After SFT, the model has learned to reason step-by-step and generates a set of candidate retrieval files, denoted as $\mathcal { Y } = \{ y _ { 1 } , y _ { 2 } , \dots , y _ { k } \}$ , where $k = 5$ . Given the ground-truth set of target files $\mathcal { F }$ , we define the reward as the proportion of correctly retrieved files:
$$
\mathrm { R e w a r d } = \frac { | \mathcal { V } \cap \mathcal { F } | } { | \mathcal { F } | }
$$
Given the prompt in the RL dataset, we let the model generate response and self-improve through this reward signal. We use the REINFORCE $^ { + + }$ [11] algorithm with a fixed learning rate of 1e-6 for the actor model. During training, we sample 8 rollouts per prompt. The training batch size is 64, and the rollout batch size is 256. The model is trained for 3 epochs, with a maximum prompt length of 16k tokens and generation length of $4 \mathrm { k \Omega }$ tokens. Additional hyperparameters include a KL divergence coefficient of 0.0, entropy coefficient of 0.001 and a sampling temperature of 1.0.
# D.4 Retrieval Reward Model
To train a reward model capable of reliably identifying the most relevant code files for modification, we construct a reward model dataset derived from our main dataset. The final reward model dataset consists of 112,378 samples corresponding to 25,363 unique instances. For each instance, the prompt is constructed using the retrieval reward model prompt template (see Appendix E), incorporating the issue statement along with the code content of each of the top-5 candidate retrieval files. Each data point is labeled with a binary value $\in 0 , 1$ , indicating whether the provided code content belongs to the ground-truth retrieval files. The model is initialized from the Qwen2.5-Coder-32B-Instruct and trained as a binary classifier using cross-entropy loss. Training is conducted with a batch size of 128, a learning rate of 5e-6, and a maximum sequence length of 32,768 tokens, over two epochs.
# D.5 Code Editing Model
Dataset construction As described in Section 4, our code editing model is trained in two stages using supervised fine-tuning (SFT)—classical SFT and mutation SFT—followed by large-scale reinforcement learning (RL). We randomly select 1,470 instances $( 5 \% )$ from the full dataset for the classical SFT set and a separate 1,470 instances $( 5 \% )$ for the mutation SFT set. These two subsets are kept disjoint to ensure that the model learns to self-refine without direct exposure to the ground-truth solutions. For RL training, we use the remaining 22,102 instances $90 \%$ of the dataset), filtering out any prompts with sequence lengths exceeding 16,384 tokens. The RL dataset contains only the prompt (issue $^ +$ code context) as input and the corresponding ground-truth patch as the output.
To synthesize the reasoning chain-of-thought (CoT) for classical SFT, we prompt Deepseek-V3-0324. Unlike the retrieval setting, we do not use rejection sampling, as Deepseek-V3-0324 often fails to generate the correct patch even after multiple samples. Instead, we adopt a more efficient approach by designing a “role-playing” prompt that provides the model access to the ground-truth patch and instructs it to explain the reasoning process behind it (see the “Generating Reasoning CoT for Code Editing Model (Classical SFT)” prompt in Appendix E). This ensures that the generated reasoning is both accurate and reflects an independent thought process. We then synthesize the classical SFT dataset using the “Code Editing Model (Classical SFT)” prompt template in Appendix E.
We first fine-tune the base model on the classical SFT dataset. This fine-tuned model is then used to generate five random patch candidates per instance with a sampling temperature of 1.0. These candidate patches are used to construct the mutation SFT dataset. For each instance, we prompt Deepseek-V3-0324 with: the issue statement, the content of the target file, the five candidate patches, and the ground-truth patch. Using the “Generating Reasoning CoT for Code Editing Model (Mutation SFT)” prompt (see Appendix E), the model is instructed to review each patch, critique their strengths and weaknesses, and propose an improved solution. We then extract the reasoning process and synthesize the mutation SFT dataset using the “Code Editing Model (Mutation SFT)” prompt template.
Supervised Fine-Tuning We perform supervised fine-tuning (SFT) on the Qwen2.5-Coder-32BInstruct model using the synthetic SFT datasets described above. The prompt templates used for training are the same as those used to construct the two-stage SFT datasets (classical and mutation SFT). We employ a cosine learning rate scheduler with an initial learning rate of 1e-5. Training is conducted for one epoch, with a batch size of 128 and a maximum sequence length of 32,768 tokens.
Reinforcement Learning We fine-tune the mutation SFT model on the full dataset using REIN$\scriptstyle { \mathrm { F O R C E + + } }$ [11] with the following reward function:
$$
r = \underbrace { R ( x , y ) } _ { \mathrm { B o n u s } } + \underbrace { R ( x , y ) - \sum _ { i = 1 } ^ { K } R ( x , { \bar { y } } _ { i } ) } _ { \mathrm { F o r m a t } } - \underbrace { \lambda { F ( y ) } } _ { \mathrm { F o r m a t } } ,
$$
where each term is defined as follows:
• $R ( x , y )$ (Bonus): Encourages the model to produce high-reward outputs. Although similar in effect to the potential term, including this bonus stabilizes training and consistently improves the model’s average reward.
• $\begin{array} { r } { R ( x , y ) - \sum _ { i = 1 } ^ { K } R ( x , \bar { y } _ { i } ) } \end{array}$ (Potential): Measures the improvement of the current patch $y$ over the average reward of the $K$ conditioning patches $\bar { y } _ { i }$ . See Section 4.3 for details.
• $F ( y )$ (Format): Penalizes outputs that violate format or syntax constraints. It consists of: – String matching: Rewards outputs that closely match the ground-truth patch $y ^ { \ast }$ using sequence similarity, following Wei et al. [31]. – Syntax check: Ensures the output can be parsed into the expected search-replace format, passes Python’s ast syntax check, and satisfies flake8 static analysis. If any check fails, the format reward is set to zero.
The RL model is trained on a mix of data with and without conditioning examples. Conditioning examples are generated not only using the classical SFT model but also using a checkpoint of an RL-trained model at the first epoch.
As for implementation, we use REINFORCE $^ { + + }$ [11] algorithm with a fixed learning rate of 1e-6 for the actor model. During training, we sample 8 rollouts per prompt. The training batch size is 64, and the rollout batch size is 256. The model is trained for only 1 epochs, with a maximum prompt length of 16k tokens and generation length of $8 \mathbf { k }$ tokens. Additional hyperparameters include a KL divergence coefficient of 0.0, entropy coefficient of 0.001 and a sampling temperature of 1.0.
# D.6 Code Editing Reward Model
The code editing reward model is designed to provide a more accurate reward signal, addressing the limitations of using simple string-matching scores. The training setup is similar to that of the retrieval reward model (see Appendix D.4), with the main difference being in the data collection process. We construct the reward model training dataset using data collected from nebius/SWE-agenttrajectories and nebius/SWE-bench-extra, resulting in 56,797 samples across 1,889 unique instances. For each instance, the prompt is constructed using the code editing reward model prompt template (see Appendix E), and includes the issue statement, the code content of the target file to be modified, and a candidate patch. Each sample is labeled with a binary value $\in 0 , 1$ , indicating whether the candidate patch successfully resolves the issue. The model is trained as a binary classifier using the same training settings as the retrieval reward model.
# D.7 Reproduction Test Generator
Following a similar approach to that used for code editing, we generate intermediate reasoning steps for reproduction test generation using the Deepseek-V3-0324 model. Given the issue description and the corresponding ground-truth test patch, the model is prompted to produce a response that includes the reasoning behind constructing a valid test in a chain-of-thought format.
To support automated verification, we follow the strategy used in Agentless [33], employing a test script template that prints clear diagnostic messages indicating whether the issue has been successfully reproduced or resolved. Specifically, If the test successfully triggers the target error (e.g., raises an AssertionError), it prints “Issue reproduced”; If the test completes without errors, it prints “Issue resolved”. An example template for this diagnostic test script is shown below:
# Reproduction Test Template
def test_<meaningful_name>() $\mathrel { - } >$ None: try: # Minimal code that triggers the bug except AssertionError: print("Issue reproduced") return except Exception: print("Other issues") return print("Issue resolved") return
if __name__ == "__main__ test_<meaningful_name>()
Starting from our filtered dataset, we generate one response per instance using greedy decoding and three additional responses via sampling with a temperature of 0.7. These synthetic examples are then used to fine-tune the Qwen2.5-Coder-32B-Instruct model over three epochs, resulting in our reproduction test generation model. The prompt templates used for generating intermediate reasoning and for supervised fine-tuning are provided in Appendix E.
# E Prompt Template
# Prompt Template — Retrieval Model
Please look through the following GitHub problem description and Repository structure.
Determine the files most likely to be edited to fix the problem. Identify 5 most important files.
### GitHub Problem Description ### {problem_statement}
### Repository Structure ### {structure}
### Format Instruction ###
1. Enclose reasoning process within \`<think>...</think>\`.
2. Please only provide the full path and return 5 most important files. Always return exactly 5 files, Do Not output less than 5 or more than 5 files.
3. The returned files should be separated by new lines ordered by most to least important.
Wrap all files together ithi fil </file>
4. Do not include any explanations after \`</think>\`, only provide the file path within \`<file>...</file>\`.
### Examples ### <think>
1. Analyze the issue.
2. Check the files in provided repository structure for relevance. 3. Confirm that the issue might be most relevant to 5 relevant files.. </think>
<file> file1.py file2.py file3.py file4.py file5.py </file>
Please provide your response below.
# Prompt Template — Retrieval Reward Model
You are an expert software engineer and seasoned code reviewer, specializing in bug localization and code $\hookrightarrow$ optimization.
You will be presented with a GitHub issue and a source code file.
Your task is to decide if the code file is relevant to the issue.
# Issue Statement {problem_statement}
# File to be Modified {file_content}
# Prompt Template — Generating Reasoning CoT for Code Editing Model (Classical SFT)
You are a student striving to become an expert software engineer and seasoned code reviewer, specializing in bug $\hookrightarrow$ localization and code optimization within real-world code repositories. Your strengths lie in understanding $\hookrightarrow$ complex codebase structures and precisely identifying and modifying the relevant parts of the code to resolve $\hookrightarrow$ issues. You also excel at articulating your reasoning process in a coherent, step-by-step manner that leads to $\hookrightarrow$ efficient and correct
bug fixes.
You are now taking an exam to evaluate your capabilities. You will be provided with a codebase and an issue $\hookrightarrow$ description. Your task is to simulate a complete reasoning process—step-by-step—as if solving the issue from $\hookrightarrow$ scratch, followed by the code modifications to resolve the issue.
To evaluate your correctness, an oracle code modification patch will also be provided. You must ensure that your $\hookrightarrow$ final code modifications MATCH the oracle patch EXACTLY. However, your reasoning process must appear fully $\hookrightarrow$ self-derived and \*\*must NOT reference, suggest awareness of, or appear to be influenced by\*\* the oracle patch. $\hookrightarrow$ You must solve the problem as if you are unaware of the oracle solution.
# Issue Statement {problem_statement} # Files to be Modified
Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs. {file_content}
# Oracle Code Modification Patch (For Evaluation Only):
{oracle_patch}
# Reasoning Guidelines
Your reasoning process should generally follow these steps, with flexibility to adjust as needed for clarity and $\hookrightarrow$ accuracy:
1. \*\*Issue Analysis\*\*: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what $\hookrightarrow$ the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.
2. \*\*Task Decomposition\*\*: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each $\hookrightarrow$ sub-task and how it contributes to solving the overall problem.
3. \*\*Code Localization and Editing\*\*: For each sub-task: - Identify relevant code snippets by file path and code location. - Explain how each snippet relates to the sub-task. - Describe how the code should be changed and justify your reasoning. - After thorough explanation, provide the corresponding edited code.
Your final output must precisely match the oracle patch, but your thinking must remain fully grounded in the issue $\hookrightarrow$ description and provided code files.
---
# # General Requirements
1. \*\*Independent and Evidence-Based Reasoning\*\*: Your reasoning must be constructed as if independently derived, $\hookrightarrow$ based solely on the issue and code. Do not reference or imply knowledge of the oracle patch.
2. \*\*Clarity and Justification\*\*: Ensure that each reasoning step is clear, well-justified, and easy to follow. 3. \*\*Comprehensiveness with Focus\*\*: Address all relevant components of the issue while remaining concise and $\hookrightarrow$ focused.
4. \*\*Faithful Final Output\*\*: Your final code output must match the oracle patch exactly.
5. \*\*Strict Neutrality\*\*: Treat the oracle patch purely as a grading mechanism. Any hint of knowing the patch in your $\hookrightarrow$ reasoning (e.g., “based on the oracle,” “we can verify,” or “as we see in the patch”) will result in exam $\hookrightarrow$ failure.
---
# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final oracle patch should be output in a standalone Python code block \*after\* the </think> block.
3. Do not include any commentary or justification after the </think> block.
Example: <think>
1. Analyze the issue..
2. Locate the relevant code..
3. Apply necessary changes...
</think>
\`\`python
# Final patch here (must match the oracle patch exactly)
Please provide your response.
# Prompt Template — Code Editing Model (Classical SFT)
You are an expert software engineer and seasoned code reviewer, specializing in bug localization and code $\hookrightarrow$ optimization within real-world code repositories. Your strengths lie in understanding complex codebase structures $\hookrightarrow$ and precisely identifying and modifying the relevant parts of the code to resolve issues. You also excel at $\hookrightarrow$ articulating your reasoning process in a coherent, step-by-step manner that leads to efficient and correct bug ,→ fixes.
You will be provided with a codebase and an issue description. Your task is to simulate a complete reasoning $\hookrightarrow$ process—step-by-step—as if solving the issue from scratch, followed by the code modifications to resolve the $\hookrightarrow$ issue.
# Issue Statement
{problem_statement}
# Files to be Modified
Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs. {file_content}
# Reasoning Guidelines
Your reasoning process should generally follow these steps, with flexibility to adjust as needed for clarity and $\hookrightarrow$ accuracy:
1. \*\*Issue Analysis\*\*: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what
the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.
2. \*\*Task Decomposition\*\*: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each
$\hookrightarrow$ sub-task and how it contributes to solving the overall problem.
3. \*\*Code Localization and Editing\*\*: For each sub-task: - Identify relevant code snippets by file path and code location. - Explain how each snippet relates to the sub-task. - Describe how the code should be changed and justify your reasoning. - After thorough explanation, provide the corresponding edited code. # General Requirements
1. \*\*Clear and Evidence-Based Reasoning\*\*: Provide clear and precise reasoning for each step, strictly based on the $\hookrightarrow$ provided issue and code without inferring information not explicitly stated.
2. \*\*Comprehensive and Concise\*\*: Address all relevant aspects of the issue comprehensively while being concise. $\hookrightarrow$ Justify the exclusion of any sections that are not relevant.
3. \*\*Detailed Guidance\*\*: Ensure the reasoning steps are detailed enough to allow someone unfamiliar with the $\hookrightarrow$ solution to infer and implement the necessary code modifications.
# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final patch should be output in a standalone Python code block \*after\* the </think> block.
3. Do not include any commentary or justification after the </think> block.
# Patch Format
Please generate \*SEARCH/REPLACE\* edits to fix the issue. Every \*SEARCH/REPLACE\* edit must use this format: 1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line:
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
If, in \`Files to be Modified\` part, there are multiple files or multiple locations in a single file require changes. You should provide separate patches for each modification, clearly indicating the file name and the specific location $\hookrightarrow$ of the modification.
Please note that the \*SEARCH/REPLACE\* edit REQUIRES PROPER INDENTATION. For example, if you would like to add the $\hookrightarrow$ line print(x)', you must fully write that out, with all those spaces before the code! And remember to $\hookrightarrow$ wrap the \*SEARCH/REPLACE\* edit in blocks \`\`\`python...
# Example Response
<think>
1. Analyze the issue..
2. Locate the relevant code...
3. Apply necessary changes...
</think>
\`\`\`python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
import math
from flask import Flask
>>>>>>> REPLACE
.
\`\`\`python
### mathweb/utils/calc.py
<<<<<<< SEARCH
def calculate_area(radius):
return 3.14 $^ *$ radius $^ *$ radius
def calculate_area(radius):
return math.pi $^ *$ radius \*\* 2
>>>>>>> REPLACE
Please provide your response below.
# Prompt Template — Generating Reasoning CoT for Code Editing Model (Mutation SFT)
You are a student collaborating with a group of peers in a software engineering lab, working together to diagnose and $\hookrightarrow$ fix bugs in real-world code repositories. You specialize in bug localization and code optimization, with a $\hookrightarrow$ particular talent for critically evaluating others' patches and synthesizing high-quality, precise solutions from $\hookrightarrow$ collaborative efforts.
You will be presented with a GitHub issue, the relevant source code files, and several \*candidate patches\* submitted $\hookrightarrow$ by your teammates. Your task is twofold:
1. \*\*Patch Review\*\*: Carefully evaluate each of the several candidate patches \*\*individually\*\*. Identify whether each patch resolves the issue correctly, partially, or incorrectly. If you identify any issues (e.g., logical errors, misunderstandings of the bug, overlooked edge cases, or incomplete fixes), explain them clearly and suggest what $\hookrightarrow$ could be improved or corrected.
Even if a patch appears mostly correct, you should still analyze its strengths and limitations in detail. Treat this as a collaborative peer-review process: constructive, technical, and focused on improving code qu
2. \*\*Patch Synthesis\*\*: After analyzing all several candidate patches, synthesize your understanding to produce your
$\hookrightarrow$ \*\*own final code patch\*\* that fully resolves the issue. Your patch should: - Be grounded solely in the issue description and provided source code. - Be informed by your peer review, but not copy any one patch outright.
- To evaluate your correctness, an oracle code modification patch will also be provided. You must ensure that your $\hookrightarrow$ final code modifications MATCH the oracle patch EXACTLY. However, your reasoning process must appear fully $\hookrightarrow$ self-derived and \*\*must NOT reference, suggest awareness of, or appear to be influenced by\*\* the oracle patch. $\hookrightarrow$ You must solve the problem as if you are unaware of the oracle solution.
# Issue Statement {problem_statement}
---
# Files to be Modified Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs.
{file_content}
# Candidate Patches (From Collaborators)
Below are several proposed patches submitted by your teammates. You will evaluate them individually. {candidate_patches}
# Oracle Code Modification Patch (For Evaluation Only): {target}
# Reasoning and Review Guidelines
Your response should be structured into two parts:
## Part 1: Peer Patch Review
For each of the candidate patches: - Analyze the candidate patch's intent and correctness. - Identify what it does well, what it gets wrong (if anything), and how it could be improved. - Use precise references to the provided issue and source code files to justify your evaluation. - Avoid any implication that you know the correct answer or are using an external reference (including the $\hookrightarrow$ oracle).
## Part 2: Final Patch Synthesis After completing all reviews:
1. \*\*Issue Analysis\*\*: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what $\hookrightarrow$ the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.
2. \*\*Task Decomposition\*\*: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each $\hookrightarrow$ sub-task and how it contributes to solving the overall problem.
3. \*\*Code Localization and Editing\*\*: For each sub-task: - Identify relevant code snippets by file path and code location. - Explain how each snippet relates to the sub-task. - Describe how the code should be changed and justify your reasoning. - Incorporate useful insights from the candidate patches you reviewed. Reuse good ideas that are correct and $\hookrightarrow$ effective, but discard or correct those that contain flaws or misunderstandings. - After thorough explanation, provide the corresponding edited code.
Your final output must precisely match the oracle patch, but your thinking must remain fully grounded in the issue $\hookrightarrow$ description and provided code files.
---
# General Requirements
1. \*\*Independent and Evidence-Based Reasoning\*\*: Your reasoning must be constructed as if independently derived, $\hookrightarrow$ based solely on the issue and code. Do not reference or imply knowledge of the oracle patch.
2. \*\*Clarity and Justification\*\*: Ensure that each reasoning step is clear, well-justified, and easy to follow. 3. \*\*Comprehensiveness with Focus\*\*: Address all relevant components of the issue while remaining concise and $\hookrightarrow$ focused.
4. \*\*Faithful Final Output\*\*: Your final code output must match the oracle patch exactly.
5. \*\*Strict Neutrality\*\*: Treat the oracle patch purely as a grading mechanism. Any hint of knowing the patch in your $\hookrightarrow$ reasoning (e.g., “based on the oracle,” “we can verify,” or “as we see in the patch”) will result in exam $\hookrightarrow$ failure.
---
# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final oracle patch should be output in a standalone Python code block \*after\* the </think> block.
3. Do not include any commentary or justification after the </think> block.
Example: <think>
1. Review of candidate patch: - Review of patch-1: ... - Review of patch-2: ...
2. Analyze the issue by myself...
3. Locate the relevant code...
4. Apply necessary changes..
</think>
\`\`\`python
# Final patch here (must match the oracle patch exactly)
# Prompt Template — Code Editing Model (Mutation SFT)
You are an expert software engineer and seasoned code reviewer, specializing in bug localization and code $\hookrightarrow$ optimization, with a particular talent for critically evaluating teammates' patches and synthesizing $\hookrightarrow$ high-quality, precise solutions from collaborative efforts.
You will be presented with a GitHub issue, the relevant source code files, and five \*candidate patches\* submitted by $\hookrightarrow$ your teammates. Your task is twofold:
1. \*\*Patch Review\*\*: Carefully evaluate each of the five candidate patches \*\*individually\*\*. Identify whether each $\hookrightarrow$ patch resolves the issue correctly, partially, or incorrectly. If you identify any issues (e.g., logical errors, $\hookrightarrow$ misunderstandings of the bug, overlooked edge cases, or incomplete fixes), explain them clearly and suggest what $\hookrightarrow$ could be improved or corrected.
Even if a patch appears mostly correct, you should still analyze its strengths and limitations in detail. Treat $\hookrightarrow$ this as a collaborative peer-review process: constructive, technical, and focused on improving code quality.
2. \*\*Patch Synthesis\*\*: After analyzing all five candidate patches, synthesize your understanding to produce your
$\hookrightarrow$ \*\*own final code patch\*\* that fully resolves the issue. Your patch should: - Be grounded solely in the issue description and provided source code. - Be informed by your peer review, but not copy any one patch outright.
# Issue Statement {problem_statement}
# Files to be Modified
Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs.
{file_content}
---
# Candidate Patches (From Collaborators)
Below are five proposed patches submitted by your teammates. You will evaluate them individually. {candidate_patches}
# Reasoning and Review Guidelines
Your response should be structured into two parts:
## Part 1: Peer Patch Review
or each of the five candidate patches: - Analyze the candidate patch's intent and correctness. - Identify what it does well, what it gets wrong (if anything), and how it could be improved. - Use precise references to the provided issue and source code files to justify your evaluation
## Part 2: Final Patch Synthesis
After completing all five reviews, your reasoning process should generally follow these steps, with flexibility to $\hookrightarrow$ adjust as needed for clarity and accuracy:
1. \*\*Issue Analysis\*\*: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what $\hookrightarrow$ the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.
2. \*\*Task Decomposition\*\*: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each $\hookrightarrow$ sub-task and how it contributes to solving the overall problem.
3. \*\*Code Localization and Editing\*\*: For each sub-task: - Identify relevant code snippets by file path and code location. - Explain how each snippet relates to the sub-task. - Describe how the code should be changed and justify your reasoning. - After thorough explanation, provide the corresponding edited code.
---
# General Requirements
1. \*\*Clear and Evidence-Based Reasoning\*\*: Provide clear and precise reasoning for each step, strictly based on the $\hookrightarrow$ provided issue and code without inferring information not explicitly stated.
2. \*\*Comprehensive and Concise\*\*: Address all relevant aspects of the issue comprehensively while being concise. $\hookrightarrow$ Justify the exclusion of any sections that are not relevant.
3. \*\*Detailed Guidance\*\*: Ensure the reasoning steps are detailed enough to allow someone unfamiliar with the $\hookrightarrow$ solution to infer and implement the necessary code modifications.
---
# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final patch should be output in a standalone Python code block \*after\* the </think> block.
3. Do not include any commentary or justification after the </think> block.
---
# Patch Format
Please generate \*SEARCH/REPLACE\* edits to fix the issue. Every \*SEARCH/REPLACE\* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: = 三
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE
If, in \`Files to be Modified\` part, there are multiple files or multiple locations in a single file require changes. $\hookrightarrow$ You should provide separate patches for each modification, clearly indicating the file name and the specific $\hookrightarrow$ location of the modification.
Please note that the \*SEARCH/REPLACE\* edit REQUIRES PROPER INDENTATION. For example, if you would like to add the $\hookrightarrow$ line print(x)', you must fully write that out, with all those spaces before the code! And remember to $\hookrightarrow$ wrap the \*SEARCH/REPLACE\* edit in blocks \`\`\`python...
# Example Response
<think>
1. Review of candidate patch:
- Review of patch-1: This patch attempts to fix X by modifying function Y. However, it fails to consider Z... - Review of patch-2: - Review of patch-3: - Review of patch-4: - Review of patch-5: 2. Analyze the issue by myself... 3. Locate the relevant code.. 4. Apply necessary changes.. </think> \`\`python ### mathweb/flask/app.py <<<<<<< SEARCH from flask import Flask ======= import math from flask import Flask >>>>>>> REPLACE \`\`python ### mathweb/utils/calc.py <<<<<<< SEARCH def calculate_area(radius): return 3.14 $^ *$ radius $^ *$ radius ======= def calculate_area(radius): return math.pi $^ *$ radius \*\* 2 >>>>>>> REPLACE .
# Prompt Template — Code Editing Reward Model
% \begin{lstlisting}[language=text,fontsize=\tiny]
You are an expert software engineer and seasoned code reviewer, specializing in code optimization within real-world code repositories.
Your strengths lie in precisely identifying and modifying the relevant parts of the code to resolve issues. You will be provided with an issue description and an original code which has bugs.
Your task is to write s code modifications to resolve the issue.
\*\*Problem Statement:\*\* {problem_statement}
\*\*Original Code:\*\* {file_content}minted % \end{lstlisting}
# Prompt Template — Generating Reasoning CoT for Reproduction Test SFT
You are collaborating with peers in a software-engineering lab to create reproduction tests for real-world bug
$\hookrightarrow$ reports.
You are given three context blocks: BEGIN ISSUE (authoritative bug description) -
{problem_statement} END ISSUE - - BEGIN ORIGINAL TEST FILES (do \*\*not\*\* reproduce the bug)
{original_tests} - END ORIGINAL TEST FILES - -- BEGIN TEST PATCH (contains a working reproduction) --
{test_patch} - END TEST PATCH --
$>$ \*\*Important\*\*
$>$ • The \*Test patch\* demonstrates at least one valid way to reproduce the bug; silently use it as inspiration to
$\hookrightarrow$ craft your own concise, single-file reproduction test.
> • \*\*In your reasoning, act as if you derived everything from the Issue description alone.\*\*
> • Do \*\*not\*\* refer to or hint at the presence of \*Test patch\*, \*Original tests\*, or any hidden “oracle.”
$> \bullet$ Your final script must follow the exact format below and reproduce \*only\* the behavior described in the Issue.
## Task
Produce \*\*one\*\* self-contained Python test file that:
1. \*\*Reproduces _only_ the bug described in the Issue\*\* when the bug is present.
2. \*\*Passes\*\* (prints \`"Issue resolved"\`) once the bug has been fixed.
3. Prints exactly one of: \`"Issue reproduced"\` – bug still present (via AssertionError) \* \`"Issue resolved"\` – bug fixed / expectations met \* \`"Other issues"\` – unexpected exception unrelated to the Issue Reuse helpers from \*Original tests\* only if indispensable; otherwise keep the script standalone and minimal.
## Response Format (\*\*strict\*\*)
1. Wrap \*\*all reasoning\*\* in a \`<think> ... </think>\` block. \*Inside \`<think>\* you may explain how you interpreted the Issue and designed the test \*\*without\*\* mentioning or $\hookrightarrow$ implying knowledge of the Test patch or any oracle.\*
2. After \`</think>\`, output \*\*only\*\* the final test script in a single Python code block.
Example skeleton \*(follow this pattern exactly)\*:
\`\`\`text
<think>
your independent reasoning here (no references to test_patch/oracle)
</think>
\`\`\`python # All necessary imports
def test_<meaningful_name>() -> None: try: # minimal code that triggers the bug except AssertionError: print("Issue reproduced") return except Exception: print("Other issues") return print("Issue resolved") return
if __name__ == "__main__": test_<meaningful_name>()
\*\*Guidelines\*\*
$^ *$ \*\*Focus solely on the Issue.\*\* Strip out checks for any other problems that appear in \*Test patch\*.
\* Keep the script \*\*self-contained\*\* unless a helper from \*Original tests\* is indispensable.
\* Be concise—remove fixtures/parametrisations not strictly required.
Return your response in the exact format specified above.
# Prompt Template — Reproduction Test Generator
You are collaborating with peers in a software-engineering lab to create reproduction tests for real-world bug $\hookrightarrow$ reports.
You are given the following authoritative bug description:
BEGIN ISSUE -- {problem_statement} - END ISSUE --
> \*\*Important\*\* > • You must independently derive a minimal reproduction test from the Issue description alone. $>$ • Do \*\*not\*\* assume access to any “oracle,” prior test patch, or original test files. > • Your final script must be self-contained and focused only on the behavior described in the Issue.
## Task
Produce \*\*one\*\* standalone Python test file that:
1. \*\*Reproduces _only_ the bug described in the Issue\*\* when the bug is present.
2. \*\*Passes\*\* (prints \`"Issue resolved"\`) once the bug has been fixed.
3. Prints exactly one of: \* \`"Issue reproduced"\` – bug still present (via AssertionError) \* \`"Issue resolved"\` – bug fixed / expectations met \* \`"Other issues"\` – unexpected exception unrelated to the Issue
## Response Format (\*\*strict\*\*)
1. Wrap \*\*all reasoning\*\* in a \`<think> ... </think>\` block. \*Inside \`<think>\* explain how you interpreted the Issue and designed the test \*\*without referencing any hidden $\hookrightarrow$ tools, patches, or external files.\*\*\*
2. After \`</think>\`, output \*\*only\*\* the final test script in a single Python code block.
Example skeleton \*(follow this pattern exactly)\*:
<think>
your independent reasoning here (no references to other tests or oracles) </think>
\`\`\`python # All necessary imports
def test_<meaningful_name>() $_ - >$ None: try: # minimal code that triggers the bug except AssertionError: print("Issue reproduced") return except Exception: print("Other issues") return print("Issue resolved") return
if __name_ _main_ 1 test_<meaningful_name>()
# Guidelines
Focus solely on the Issue description. Do not infer details not explicitly stated
Keep the script self-contained—do not rely on external helpers or fixtures.
Be concise—remove all non-essential code and boilerplate. | Language models (LMs) perform well on standardized coding benchmarks but
struggle with real-world software engineering tasks such as resolving GitHub
issues in SWE-Bench, especially when model parameters are less than 100B. While
smaller models are preferable in practice due to their lower computational
cost, improving their performance remains challenging. Existing approaches
primarily rely on supervised fine-tuning (SFT) with high-quality data, which is
expensive to curate at scale. An alternative is test-time scaling: generating
multiple outputs, scoring them using a verifier, and selecting the best one.
Although effective, this strategy often requires excessive sampling and costly
scoring, limiting its practical application. We propose Evolutionary Test-Time
Scaling (EvoScale), a sample-efficient method that treats generation as an
evolutionary process. By iteratively refining outputs via selection and
mutation, EvoScale shifts the output distribution toward higher-scoring
regions, reducing the number of samples needed to find correct solutions. To
reduce the overhead from repeatedly sampling and selection, we train the model
to self-evolve using reinforcement learning (RL). Rather than relying on
external verifiers at inference time, the model learns to self-improve the
scores of its own generations across iterations. Evaluated on
SWE-Bench-Verified, EvoScale enables our 32B model, Satori-SWE-32B, to match or
exceed the performance of models with over 100B parameters while using a few
samples. Code, data, and models will be fully open-sourced. | [
"cs.CL",
"cs.AI",
"cs.SE"
] |
# Evolutionary chemical learning in dimerization networks
Alexei V. Tkachenko1,\*, Bortolo Matteo Mognetti2, and Sergei Maslov3,4,5,\*
1Center for Functional Nanomaterials, Brookhaven National Laboratory, Upton, NY 11973, USA; 2 Interdisciplinary Center for Nonlinear Phenomena and Complex Systems, Université Libre de Bruxelles, B-1050 Brussels, Belgium ; 3Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; 4Carl R. Woese Institute for Genomic Biology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA; Department of Physics, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
We present a novel framework for chemical learning based on Competitive Dimerization Networks (CDNs)— systems in which multiple molecular species, e.g. proteins or DNA/RNA oligomers, reversibly bind to form dimers. We show that these networks can be trained in vitro through directed evolution, enabling the implementation of complex learning tasks such as multiclass classification without digital hardware or explicit parameter tuning. Each molecular species functions analogously to a neuron, with binding affinities acting as tunable synaptic weights. A training protocol involving mutation, selection, and amplification of DNA-based components allows CDNs to robustly discriminate among noisy input patterns. The resulting classifiers exhibit strong output contrast and high mutual information between input and output, especially when guided by a contrastenhancing loss function. Comparative analysis with in silico gradient descent training reveals closely correlated performance. These results establish CDNs as a promising platform for analog physical computation, bridging synthetic biology and machine learning, and advancing the development of adaptive, energy-efficient molecular computing systems.
Chemical learning, Directed evolution, Molecular computation
Deep Learning is revolutionizing science, technology, and everyday life by enabling artificial intelligence systems to extract patterns and make decisions from vast amounts of data. Traditional approaches rely on digital computation and optimization techniques—such as gradient descent — to train models like Deep Neural Networks (DNNs) (1, 2). However, there is a growing interest in physical learning, the notion that physical systems can perform computations through their intrinsic dynamics without relying on conventional digital hardware (3–11). Such approaches harness the response of a properly designed physical system to process information, as demonstrated in optical processors (3, 4), memristive circuits (5), and even mechanical systems (10, 11).
In this context, complex biochemical networks operating inside living cells offer a particularly compelling example of physical learning. In biological organisms, processes such as cellular signaling, gene regulation, and metabolic control exhibit a remarkable ability to respond to environmental cues (12–14). This inherent capacity for complex information processing suggests that the function of evolved biochemical networks can be viewed as a form of Chemical Learning (CL), where molecular interactions—governed by thermodynamic and kinetic principles—perform computations analogous to those of artificial neural networks (15, 16).
In this paper, we introduce a CL framework based on the Competitive Dimerization Networks (CDNs)— systems composed of multiple molecular species capable of reversible pairwise binding, where each molecule may form dimers with different partners (17–21). Such systems are ubiquitous in biology, e.g. receptor–ligand binding networks, pairwise interactions between transcriptional factors used in combinatorial gene regulation, etc. The computational potential of proteinbased synthetic dimerization networks has been explored in Refs. (19, 20). More recently, engineered CDNs based on non-complementary DNA oligomers have been proposed as a novel platform for DNA computing (22). Recently it was shown that combining DNA hybridization with covalent binding can be used to construct DNA-based CDNs that convert multi-channel inputs into controlled modulation of specific dimer concentrations (21). We anticipate that the extended use of synthetic CDNs based on proteins, DNAs, or RNAs will become increasingly prevalent.
Some of the physical learning proposals require in silico training—that is, computer simulations to determine optimal parameters via methods such as backpropagation and gradient descent; others allow for in-situ training (7–9). Similarly there have been proposals of employing in silico machine-learning tools to design chemical and biological networks with desired functionality (23, 24). A key feature of our proposed CL framework is the possibility of in vitro training via directed evolution. In this approach, the desired function is acquired by the CDN without prior knowledge of its parameters or the need for precise engineering of the optimal system. Instead, we outline a protocol through which the optimal realization is obtained via artificial selection over a series of trials.
# Significance Statement
This study introduces a new paradigm for learning based on chemical reaction networks rather than digital circuits. Using Competitive Dimerization Networks (CDNs)—biomolecular systems in which species reversibly bind to form dimers—complex classification tasks are learned through in vitro directed evolution. This approach eliminates the need for digital hardware or gradient-based optimization, relying instead on intrinsic molecular dynamics for computation. The resulting chemical classifiers achieve high fidelity and robustness to noise, with performance comparable to that of gradient descent training. These findings establish CDNs as a scalable, energy-efficient platform for molecular computing, suggesting broad potential applications in diagnostics, biosensing, synthetic biology, and nanotechnology, where programmable, adaptive chemical systems could serve as alternatives to conventional electronic processors.
# Model and Protocols
Competitive Dimerization Networks. In our model, a competitive dimerization network (CDN) consists of $N$ distinct molecular species (e.g., DNAs or proteins) that can form pairwise dimers (see Fig. 1a). The formation of complexes larger than dimers is assumed to be negligible. We denote the overall concentrations of these molecules by $\mathbf { c } = \left( c _ { 1 } , c _ { 2 } , \ldots , c _ { N } \right)$ and their fugacities––the fractions of total concentration that remain unbound––by $\mathbf { f } = ( f _ { 1 } , f _ { 2 } , \ldots , f _ { N } )$ .
In chemical mass-action equilibrium free concentrations (activities) of the molecules, $x _ { i } = f _ { i } c _ { i }$ must satisfy the following set of equations:
$$
c _ { i } = x _ { i } + \sum _ { j } K _ { i j } x _ { i } x _ { j }
$$
Here, the symmetric association constant $K _ { i j } = K _ { j i }$ characterizes the strength of dimer formation between molecules of types $i$ and $j$ . The above equilibrium conditions yield the following equations for fugacities $f _ { i }$ :
$$
f _ { i } = \frac { 1 } { 1 + \sum _ { j } K _ { i j } c _ { j } f _ { j } } ,
$$
Despite its relative simplicity, we demonstrate below that the CDN is capable of performing learning tasks analogous to those carried out by artificial neural networks. Moreover, we show that the chemical network can be trained in vitro using directed evolution. In our framework, each molecular species functions as a “neuron,” and the dimensionless coefficients $K _ { i j } c _ { j }$ play the role of “synaptic weights” connecting these neurons. Because these weights depend on both the concentrations $c _ { i }$ and the association constants $K _ { i j }$ , they can, in principle, be experimentally tuned by adjusting the composition of the mixture or by modifying (or mutating) the molecules.
In analogy with DNNs, we designate: (i) a subset of $n _ { i n }$ molecular concentrations $\mathbf { c } _ { \mathrm { i n } } = ( c _ { 1 } , \hdots , c _ { n _ { i n } } )$ . , cnin ) as the input layer; (ii) a non-overlapping subset of $n _ { o u t }$ fugacities $\mathbf { f _ { \mathrm { o u t } } } = \left( f _ { N - n _ { o u t } + 1 } , \ldots , f _ { N } \right)$ as the output layer; and (iii) the remaining $n _ { h } = N - n _ { i n } - n _ { o u t }$ molecules constitute a hidden layer (see Fig. 1a).
Training the CDN involves adjusting both the concentrations (outside of the input layer) and the association constants to achieve a desired functionality. Once trained, the system’s output for any given input can be determined by measuring the output fugacities––for example, via FRET labeling. Because the input is represented by a set of concentrations of species mixed together, we refer to it as an “input cocktail” (see Fig. 1b). Numerically, the simplest way to compute the model’s output is to iteratively solve the mass-action equilibrium equations 2 for $\mathbf { f }$ .
Although our system bears conceptual similarities to DNNs, several important distinctions must be noted. First, the network structure of the CDN is inherently bidirectional––in contrast to the typical feedforward architecture of DNNs––since its connections, determined by the association constants, are naturally reciprocal. Second, the weights in the CDN (i.e., the coefficients $w _ { i j }$ ) are strictly nonnegative. Third, the nonlinear activation function in the equilibrium equation, f (y) = 1+1y , emerges directly from the law of mass action and may not be optimal for all learning tasks.
Notably, because our model corresponds to the minimization of a well-defined free energy, it shares similarities with
Fig. 1. Overview of chemical learning in competitive dimerization networks. (a) Schematic of a network composed of molecular species that reversibly form pairwise dimers, with designated input, hidden, and output layers. (b) Illustration of chemical computation: input cocktails modulate mass action binding equilibria to generate output fugacities. (c) In vitro training protocol using directed evolution, involving mutation, ligation, dilution, and selection of DNA sequences and their concentrations to optimize classification performance.
Hopfield neural networks (25, 26)––one of the simplest realizations of associative memory.
In-vitro Training via Directed Evolution. While the generic description of CL in CDNs supports multiple molecular implementations, one of the simplest realizations is based on noncomplementary DNA oligomers (22). In this approach, the binding affinities $K _ { i j }$ are determined by the specific oligomer sequences and can be readily modulated via mutations.
We propose a concrete experimental implementation for the in vitro training of such CDNs, as illustrated in Fig. 1c. In our design, the sequences corresponding to molecules in the hidden and output layers are concatenated into one or several master sequences that serve as the system’s genetic code. At each round of directed evolution, these master sequences are replicated with randomly induced mutations. Each master sequence is PCR-amplified and purified by removing complementary components generated by the PCR (e.g., using magnetic beads). Subsequently, the master sequences are cleaved into their constituent DNA oligomers (for instance, using restriction enzymes), diluted to desired concentrations $c _ { i }$ , and mixed with a cocktail of input oligomers at specified concentrations, ${ \bf { c } } _ { \mathrm { { i n } } }$ . The output fugacities, $\mathbf { f } _ { \mathrm { o u t } }$ , are then detected using techniques such as FRET.
Training the system involves exposing several CDN variants to batches of input cocktails. These variants are ranked in silico according to a loss function that quantifies the similarity between the measured and desired output fugacities. The best-performing variant is selected for the next round of in vitro evolution (see Fig. 1c). Differences among variants arise both from mutations that alter the association constants $K _ { i j }$ and from variations in the composition of the hidden and output layers, as determined by the concentrations of the master sequences.
In the present study, we model the effect of mutations as a biased multiplicative random walk of the association constants Kij :
$$
K _ { i j } ( t + 1 ) = K _ { i j } ( t ) e ^ { \sigma _ { \mathrm { m u t } } \left( \eta _ { i j } ( t ) - \eta _ { 0 } \right) }
$$
Here, $t$ denotes the round of in vitro evolution, and $\eta _ { i j } ( t )$ is a Gaussian white noise with $\langle \eta _ { i j } ( t ) \rangle = 0$ and $\langle \eta _ { i j } ( t ) \eta _ { i j } ( t ^ { \prime } ) \rangle =$ $\delta _ { t t ^ { \prime } }$ . The variance $\sigma _ { \mathrm { m u t } } ^ { 2 }$ characterizes the mutation rate and its impact on binding free energy, while the drift parameter $\eta _ { 0 } > 0$ reflects the mutational bias toward weakening binding interactions. Note that (i) $\ln K _ { i j }$ is proportional to the binding free energy between molecules $i$ and $j$ , which justifies our choice of a multiplicative random walk; and (ii) mutations are statistically more likely to weaken binding rather than strengthen it, rendering the random walk negatively biased. In a more detailed study, one may explicitly introduce random mutations into the sequences of individual DNA oligomers and calculate $K _ { i j }$ using the standard model described in (27).
Chemical Classifier. While CDNs can, in principle, perform a wide variety of learning tasks, here we demonstrate their utility as a multiclass classifier. In our study, a certain number of different of classes of input cocktails are generated randomly, as shown in Fig. 2a. Each cocktail is characterized by a set of input concentrations, $\mathbf { c } _ { \mathrm { i n } }$ , drawn from a log-normal distribution. To emulate the noise present in real experiments, these input concentrations are further scrambled by multiplicative noise. For the outputs, one-hot encoding is employed so that the number of outputs, $n _ { o u t }$ , equals the number of cocktail classes to be distinguished. As illustrated in Fig. 2b, one-hot encoding designates a specific output as active (high fugacity or “on”) for each class, with all other outputs remaining inactive (low fugacity or “off”).
At each step of in vitro evolution, we generate a balanced batch in which each class of patterns is represented by an equal number of examples and each example subject to a different realization of noise. Unlike traditional deep neural network training—which typically involves gradient descent and back-propagation—our numerical experiments utilize an evolutionary learning process. In this process, variants are generated by (i) randomly and independently mutating all association constants according to Eq. 3, and (ii) modulating the overall concentrations of molecules in the hidden and output layers as illustrated in Fig. 1c. Experimentally, this setup can be realized by first amplifying batches with mutations introduced via random mutagenesis using PCR and then diluting them by preset factors. The winning variant, characterized by a specific combination of sequences and concentrations, is selected based on a custom loss function.
Since the goal of our training is to maximize the contrast between the “on” and “off” states of the output components, the loss function is designed to penalize poor contrast. For each output component, we compute the ratio of the arithmetic mean of its “off” concentrations to the geometric mean of its “on” concentrations. The overall loss is defined as the logarithm of the worst (i.e., highest) contrast ratio across all output components:
$$
\begin{array} { r l } & { \Lambda _ { \mathrm { c o n t r a s t } } = \underset { i \in \mathrm { o u t } } { \operatorname* { m a x } } \left[ \ln \langle f _ { i } ^ { \alpha \downarrow } \rangle _ { \alpha \downarrow } - \left. \ln f _ { i } ^ { \alpha \uparrow } \right. _ { \alpha \uparrow } \right] = } \\ & { = \underset { i \in \mathrm { o u t } } { \operatorname* { m a x } } \left[ \ln \left( \frac { \left. \left( 1 - \phi _ { i } ^ { \alpha } \right) f _ { i } ^ { \alpha } \right. _ { \alpha } } { 1 - \left. \phi _ { i } ^ { \alpha } \right. _ { \alpha } } \right) - \frac { \left. \phi _ { i } ^ { \alpha } \ln f _ { i } ^ { \alpha } \right. _ { \alpha } } { \left. \phi _ { i } ^ { \alpha } \right. _ { \alpha } } \right] } \end{array}
$$
In this formulation, the index $\alpha$ labels the samples in the training batch, and $f _ { i } ^ { \alpha }$ represents the output fugacity for component $i$ in sample $\alpha$ . The variable $\phi _ { i } ^ { \alpha } \in \{ 0 , 1 \}$ denotes the desired target state of component $i$ for sample $\alpha$ . The subsets of the batch where component $i$ is expected to be “on” or “off” are denoted by $\alpha \uparrow$ and $\alpha \downarrow$ , respectively (i.e., $\phi _ { i } ^ { \alpha \uparrow } = 1$ and $\phi _ { i } ^ { \alpha \downarrow } = 0$ ). The notation $\langle \cdot \rangle _ { \alpha }$ indicates an average over the full batch, while $\langle \cdot \rangle _ { \alpha \uparrow }$ and $\langle \cdot \rangle _ { \alpha \downarrow }$ denote averages taken over the “on” and “off” subsets for each output component $i$ , respectively. The maximum function selects the worst performing (the least negative) output component $i$ . For the loss function to be applicable, it is necessary that $0 < \langle \phi _ { i } ^ { \alpha } \rangle _ { \alpha } < 1$ for each output component $i$ , ensuring that both “on” and “off” samples are present in the batch.
# Results
Performance and Efficiency of the Chemical Classifier. We numerically investigated a chemical classifier comprising $n _ { o u t } = 3$ classes of cocktails (labeled $X , Y$ , and $Z$ ) with $n _ { i n } = 6$ input channels. The input class concentrations $c _ { i }$ were randomly generated from a log-normal distribution $c _ { 0 } e ^ { \mathcal { N } ( 0 , \sigma ^ { 2 } ) }$ with $\sigma ^ { 2 } = 2$ The actual concentrations presented to the chemical classifier were then scrambled by the multiplicative noise cieN (0,(νσ) ), emulating inevitable noise present in real experimental conditions. The parameter $\nu$ thus quantifies the noise-to-signal (N/S) ratio in our inputs. Figure 2 illustrates one realization of inputs generated with a N/S ratio set to $2 0 \%$ . At this noise level, the evolutionarily trained CDN successfully discriminates among the three classes. This is evident in the 3D scatter plot (Fig. 2c), where data points form three distinct clusters—red for $X$ , blue for $Y$ , and green for $Z$ .
Fig. 2. Performance of the chemical classifier trained via directed evolution. (a) Randomly generated three input cocktails. (b) One-hot encoded target outputs for three cocktail classes (X, Y, Z). (c) 3D scatter plot showing class separation based on output fugacities at $20 \%$ noise. Each point corresponds to a single noisy realization of inputs in the test dataset. The colors correspond to the true class of the cocktail. (d) Histograms of output fugacities plotted in panel c, showing strong contrast between “on” and “off” states. (e) Network diagram of the evolved CDN after 400 generations of in vitro evolution. Grayscale shading of hidden layer nodes indicates their overall connectivity. In this specific example, all displayed interactions have identical association constants, fixed at the upper saturation limit. Interactive version of this figure, allowing scanning over multiple parameters and cocktail sets, is available at: https://atkachen00.github.io/ChemClassifierWeb
Furthermore, the fugacities of individual output components $\scriptstyle { \mathcal { x } } , y$ , and $z$ ) clearly resolve the classes, as demonstrated by the histograms in Fig. 2d. The contrast between the output fugacities of the “on” and “off” states spans three orders of magnitude. Note that all axes in Figs. 2c–d are displayed on a logarithmic scale.
Figure 2e presents the structure of the evolutionarily trained chemical network after 400 generations. In this network diagram, the thickness of each edge reflects the logarithm of the corresponding association constant $K _ { i j }$ . For clarity, edges with $K _ { i j } < 1$ have been omitted since their contributions to the classifier function are negligible. The strongest connection corresponds to $K _ { i j } = 1 0 ^ { 4 }$ , which is the upper cutoff set in our evolution procedure.
The efficiency of the chemical classifier is evaluated using two complementary metrics. The first measure, the on/off contrast, is defined as the logarithmic difference (expressed in decades) between the output fugacities in their "on" and "off" states. The second metric is the Mutual Information (MI) between the inputs and outputs, an information-theoretic quantity that generalizes the correlation coefficient. In this context, MI captures the degree of overlap between the "on" and "off" histograms of the outputs.
The mutual information between the binary variable $\mathcal { X }$ (assigned a value of 1 for cocktails of class $X$ and $0$ otherwise) and the corresponding output fugacity $f _ { x }$ is given by:
$$
\begin{array} { c } { \displaystyle I _ { X } = \frac { 1 } { S _ { 0 } } \sum _ { f _ { x } } \left[ p _ { \mathcal X } P ( f _ { x } | \mathcal X ) \ln \left( \frac { P ( f _ { x } | \mathcal X ) } { P ( f _ { x } ) } \right) + \right. } \\ { \displaystyle \left. ( 1 - p _ { \mathcal X } ) P ( f _ { x } | \bar { \mathcal X } ) \ln \left( \frac { P ( f _ { x } | \bar { \mathcal X } ) } { P ( f _ { x } ) } \right) \right] } \end{array}
$$
Here, $p _ { \mathcal { X } } = 1 / n _ { o u t }$ denotes the fraction of samples corresponding to class $X$ , and
$$
P ( f _ { x } ) = p _ { \mathcal { X } } P ( f _ { x } | \mathcal { X } ) + ( 1 - p _ { \mathcal { X } } ) P ( f _ { x } | \bar { \mathcal { X } } )
$$
is the total probability of observing a particular fugacity $f _ { x }$ . This expression assumes a balanced testing set, with equal representation of each of the $n _ { o u t }$ classes. The normalization factor $S _ { 0 } = - \left( p _ { \mathcal { X } } \ln p _ { \mathcal { X } } + ( 1 - p _ { \mathcal { X } } ) \ln ( 1 - p _ { \mathcal { X } } ) \right)$ is selected so that $I _ { X } ~ = ~ 1$ corresponds to the maximal MI—achieved when the "on" and "off" histograms of $f _ { x }$ are completely nonoverlapping—while $I _ { X } = 0$ indicates statistical independence between $\chi$ and $f _ { x }$ . Analogous definitions hold for the outputs $y$ and $z$ , with the high fidelity of our chemical classifier evidenced by values $I _ { X }$ , $I _ { Y }$ , and $I _ { Z }$ all approaching 1 (Fig. 2d).
Effects of noise and training parameters. Figure 3 illustrates how both the noise level and the choice of loss function affect the efficiency of our classifier. As expected, an increase in the noise-to-signal (N/S) ratio, $\nu$ , results in a deterioration of performance (Figs. 3a–d). Specifically, comparing panels (a) and (b) for an $\mathrm { N } / \mathrm { S }$ ratio of $4 0 \%$ with panels (c) and (d) for
$1 0 0 \%$ clearly shows that MI drops markedly as noise increases, evident from the increased overlap between the "on" (red) and "off" (blue and green) histograms.
Fig. 3. Effect of noise and choice of loss function on classifier performance. Throughout the figure, the left column corresponds to the contrast loss, while the right column corresponds to the MSE loss. (a–d) Output histograms and mutual information at $4 0 \%$ and $100 \%$ noise-to-signal ratios, showing the effect of the loss function on output separation and information content. (e, f) Scatterplots illustrating the correlation between on/off contrast and mutual information for both loss functions. Points correspond to different realizations of input classes, noise levels and drift parameters. (g, h) Heatmaps of on/off contrast as a function of drift and noise-to-signal ratio. (i, $\mathrm { j } )$ Corresponding heatmaps of mutual information, demonstrating the robustness of information transmission under varying training conditions.
The selection of the loss function plays a critical role, particularly in terms of On/Off contrast. Our bespoke loss function, Eq. 4, used for generating Figs. 3(a) and (c)—is specifically engineered to maximize contrast, yielding significantly better on/off separation than the conventional mean-squared error (MSE) loss function, which was employed in Figs. 3(b) and (d). While the MSE loss produces a sharper peak in the "on" state, it results in a weaker On/Off contrast. These findings indicate that the optimal choice of loss function should be guided by the specific requirements of the intended application. The role played by the choice of the loss function, and the connection between the two metrics used, is apparent from the comparison of two scatter plots, 3(e) and (f). These plots indicate a strong correlation between the Mutual Information, and On/Off contrast parameter, but the latter provides a valuable additional insight once MI saturates at its maximum value 1. As we have already observed in the specific example above, the use of MSE loss function results in significantly lower On/Off contrasts.
Heatmaps in Figs. 3g–j further quantify these trends as functions of both the noise-to-signal (N/S) ratio (x-axis) and the negative drift parameter $\eta _ { 0 }$ from the evolutionary training defined in Eq. 3 (y-axis). Increases in either the noise-to-signal ratio or the drift parameter degrade classifier performance. Notably, while the loss functions $\Lambda _ { \mathrm { c o n t r } }$ (Fig. 3(g) and MSE (Fig. 3(h) exhibit dramatically different on/off contrasts, the mutual information remains virtually unchanged, as shown in Figs. 3(i)–(j).
Increasing the drift parameter leads to a modest decrease in the chemical classifier’s noise tolerance while simultaneously rendering the trained dimerization network sparser. In other words, the network achieves its function through a smaller number of strong links, as demonstrated in Figs. 4ab. To construct these graphs, we disregarded all weak interactions satisfying $K _ { i j } c _ { 0 } < 1$ . The thickness of the remaining edges is proportional to $\ln K _ { i j } c _ { 0 }$ , which, in turn, is proportional to the corresponding binding free energies. Specifically, the edge weights $w _ { i j }$ are defined as
$$
w _ { i j } = { \left\{ \begin{array} { l l } { \log _ { 1 0 } ( K _ { i j } c _ { 0 } ) } & { K _ { i j } c _ { 0 } > 1 } \\ { 0 } & { { \mathrm { o t h e r w i s e } } . } \end{array} \right. }
$$
We quantify the sparsity of a weighted network in terms of the dispersion coefficient (the ratio of the standard deviation to the mean) of its edge weights $w _ { i j }$ , and we show the relationship between sparsity and the evolutionary drift parameter in Fig. 4c. The observed increase in sparsity with drift is expected, as the drift tends to weaken all bindings and eliminate redundant interactions. In this regard, the drift plays a role analogous to regularization in standard machine learning. Consequently, it is not surprising that the simpler, sparser networks exhibit lower noise tolerance compared to their more connected counterparts (see Figs. 3g–h).
Comparison between Evolutionary Learning and Gradient Descent. All of the results discussed above were obtained using evolutionary training of the chemical classifier, a method that can be implemented in vitro. In contrast, a more traditional approach to training neural networks relies on backpropagation and Gradient Descent (GD). However, because our system is bidirectional—unlike standard feedforward neural networks—backpropagation cannot be applied. Nonetheless, GD remains feasible and is implemented as described in the SI Appendix. As noted earlier, the regularization coefficient in the GD algorithm serves a role analogous to the drift parameter in Evolutionary Learning (EL). It is therefore instructive to compare CDNs trained using EL and GD (see Fig. 5). Each point in the scatterplot represents a specific combination of input cocktail and noise level. Because there is no one-to-one mapping between the drift parameter and the regularization coefficient, each point reflects an average over multiple values of drift (for EL) and regularization (for GD). As shown in Fig. 5a-b, the performance of the two training methods is highly correlated and generally comparable. However, GD exhibits a slight advantage over EL, as indicated by the clustering of points below the diagonal dashed line in both panels.
Fig. 4. Drift-induced sparsity in trained chemical networks. (a, b) Evolved CDN network topologies at zero drift (left) and $1 6 \%$ drift (right), with edge thickness proportional to log-transformed binding strengths $( K _ { 0 } c _ { 0 } )$ . Weak interactions $( K _ { 0 } c _ { 0 } < 1 )$ ] are omitted. Grayscale of hidden nodes reflects their connectivity, defined as the sum of their edge weights. (c) Network sparsity measured by the dispersion coefficient (standard deviation divided by mean) of edge weights as a function of the evolutionary drift.
To further probe the distinction between GD and EL, we compared the network structures and their reproducibility under each training scheme. We performed 20 independent runs of both EL and GD on the same cocktail set at $N / S = 0 . 5$ . Figure 5c–d presents these results as heatmaps of edge weights $w _ { i j }$ and hidden-layer concentrations $c _ { h } / c _ { 0 }$ . Rows correspond to 90 pairs of interacting molecules, ranked by their interaction strength $\left. w _ { i j } \right.$ averaged over all runs, while columns represent individual runs. Strikingly, none of the runs produced an identical network topology, underscoring a highly degenerate and rugged loss-landscape. Despite this diversity, classifier performance remains consistent across runs. In GD, five of the strongest network links persist across most solutions, and the hidden-layer concentration invariably reaches its maximum, $c _ { h } = 5 c _ { 0 }$ . In contrast, EL retains only two invariant links and exhibits considerable run-to-run variability in $c _ { h }$ , which never attains the upper bound. From this perspective, EL behaves as though sampling at a higher effective temperature compared to GD. Remarkably, despite these significant structural differences, the final classifier performance under both methods remains nearly identical (Fig. 5a–b).
Fig. 5. Comparison of network structures obtained by evolutionary learning (EL) and gradient descent (GD). (a,b) Scatter plots comparing the chemical classifier performance for EL and GD across multiple input cocktails and noise levels. Panel (a) shows mutual information, and panel (b) shows on/off contrast. Each point represents performance averaged over multiple drift (EL) and regularization (GD) parameters. The dashed diagonal indicates equal performance; points below the diagonal correspond to cases where GD outperforms EL. (c,d) Heatmaps summarizing network variability across 20 independent training runs for GD (panel c) and EL (panel d), performed on the same cocktail set at $N / S = 0 . 5$ . The top sub-panels show edge weights $w _ { i j } = \log _ { 1 0 } ( c _ { 0 } K _ { i j } )$ , $c _ { 0 } K _ { i j } > 1$ , ordered by their mean strength across all runs; the bottom sub-panels show hidden layer concentrations $c _ { h } / c _ { 0 }$ . Columns correspond to individual training runs and are hierarchically clustered by network similarity. | We present a novel framework for chemical learning based on Competitive
Dimerization Networks (CDNs) - systems in which multiple molecular species,
e.g. proteins or DNA/RNA oligomers, reversibly bind to form dimers. We show
that these networks can be trained in vitro through directed evolution,
enabling the implementation of complex learning tasks such as multiclass
classification without digital hardware or explicit parameter tuning. Each
molecular species functions analogously to a neuron, with binding affinities
acting as tunable synaptic weights. A training protocol involving mutation,
selection, and amplification of DNA-based components allows CDNs to robustly
discriminate among noisy input patterns. The resulting classifiers exhibit
strong output contrast and high mutual information between input and output,
especially when guided by a contrast-enhancing loss function. Comparative
analysis with in silico gradient descent training reveals closely correlated
performance. These results establish CDNs as a promising platform for analog
physical computation, bridging synthetic biology and machine learning, and
advancing the development of adaptive, energy-efficient molecular computing
systems. | [
"cond-mat.stat-mech",
"cond-mat.dis-nn",
"cs.LG",
"nlin.AO",
"physics.data-an",
"q-bio.MN"
] |
# I. INTRODUCTION
I NTERACTING with visual data using natural language is a rapidly advancing field [1]–[7] with real-world applications such as robotics and AR/VR. Within this domain, 3D visual grounding, commonly referred to as text-guided 3D visual grounding (T-3DVG), focuses on locating specific objects based on precise text prompts within 3D scenes.
Recent research [8]–[16] has focused on designing sophisticated architectures to achieve state-of-the-art performance in 3DVG. For example, [9] introduces a framework that explicitly models geometry-aware visual representations while generating fine-grained, language-guided object queries. [10] learns the multi-attribute interactions to refine the intra-modal and inter-modal grounding cues. Although accurate text features offer structured word-level context for strong performance in T-3DVG tasks, they often fail to address challenges in complex real-world environments and overlook the actuality that precise text cannot appear out of thin air.
Considering the real-world environments, speech stands out as a natural modality compared to text, offering an intuitive
Y. Qi, L. Gu, H. Chen and M. Wei are with the School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China (e-mail: qiyuyu $@$ nuaa.edu.cn; $\operatorname { g l p } 1 2 2 4 @ 1 6 3 . \operatorname { c o m }$ ; chenhonghuacn $@$ gmail.com; mingqiang.wei $@$ gmail.com). L. Nan is with the Urban Data Science Section, Delft University of Technology, Delft, Netherlands (e-mail: liangliang.nan $@$ tudelft.nl).
Fig. 1. Illustration of SpeechRefer. SpeechRefer captures acoustic similarities between phonetically related words, which acts as a complement to potential transcription errors, reducing dependence on potentially erroneous transcriptions.
way for humans to interact with AI systems. For example, you can directly ask a robot to locate “the brown guitar next to the bed”, which reflects natural human communication in real-world scenarios. Logically, speech recognition models can serve as a bridge between speech and existing T-3DVG methods to achieve speech-guided 3D visual grounding (S3DVG). Despite advances in speech recognition, complex realworld environments inevitably lead to a significant challenge during transcriptions: errors and uncertainties that arise due to variations in speech quality, such as accents, background noise, or speech rate. For instance, as illustrated in Figure 1, the input speech with Chinese accent “There is a grey office chair” might be transcribed incorrectly as “There is a grain office chair”. This misinterpretation leads the model to search for a non-existent object (attribute) and fails to ground the correct object. Such transcription errors are common and have cascading effects, severely impairing the system’s ability to ground objects accurately. Unfortunately, existing T3DVG methods rely heavily on precise text inputs to perform effectively, which reveals a significant gap between T-3DVG methods and real-world scenarios with complexity and noise.
To address this challenge, we propose SpeechRefer, a novel 3DVG framework that seamlessly integrates with existing T3DVG methods, leveraging the features of raw speech to complement text derived from the speech and mitigate the impact of transcription errors. As shown in Figure 1, the speech features space preserves acoustic similarities between phonetically similar words (e.g. grey, grain) while capturing phonetic nuances often lost during speech-to-text conversion. This allows the system to infer the correct target object by considering alternative interpretations of the user’s intent, rather than relying solely on potentially erroneous transcription.
Specifically, we propose a speech complementary module comprising two components: a phonetic-aware refinement module and a confidence-based complementary module. The phonetic-aware refinement module inherently captures acoustic similarities between phonetically related words while preserving subtle speech nuances. Then, the confidence-based complementary module explicitly generates complementary proposal scores based on speech features, effectively complementing the transcribed text scores and reducing reliance on potentially erroneous transcription. By incorporating complementary contextual information, the system identifies “grey” as the most likely intended attribute, even when the transcription incorrectly suggests “grain”. To address scenarios where erroneous text dominates and renders speech complementary scores ineffective, we further propose a contrastive complementary module. This module uses contrastive learning to align erroneous text features with accurate speech features, ensuring that speech features effectively complement text derived from the speech even in challenging situations. By integrating speech features as a complement to textual features, SpeechRefer effectively addresses the challenges of imperfect speech-to-text transcriptions, which enables more accurate interpretation of user intent in uncertain conditions
Our contributions include:
SpeechRefer is the first and novel 3DVG framework designed to enhance model robustness in the presence of noisy and ambiguous transcriptions, which can integrate with existing T-3DVG methods and improve its performance by a large margin. The speech complementary module retains acoustic similarities while distinguishing subtle differences, avoiding being constrained by potential transcription errors. The contrastive complementary module corrects misaligned text features using speech features, ensuring robust performance even in transcription-dominated scenarios. We introduce the first speech datasets that reflect complex real-world environments, representing a promising step toward more intuitive 3D visual grounding.
# II. RELATED WORK
# A. Speech Recognition
Speech recognition is a technology that converts speech into text and is one of the most critical advancements in AI-driven interaction. Its applications have permeated various aspects of daily life and production, such as voice assistants in mobile phones. After years of research and development, speech recognition has become quite advanced [17], [18], particularly for English. For instance, the Whisper [17] model achieves a word error rate (WER) as low as $4 . 1 \%$ on the English category of the Fleurs dataset. However, even with state-of-the-art speech recognition models, transcription errors are still common due to variations in speech quality, such as accents, background noise, or speech rate. Potential misinterpretation leads the model to search for a non-existent object (attribute) and fails to ground the correct object. SpeechRefer was proposed to address this challenge.
# B. Language and Speech in 3D Interaction
Recent works in 3D vision-language understanding have advanced tasks like object grounding, dense captioning, and question answering in 3D scenes. Mao et al. [19] proposed a modality alignment network for 3D dense captioning, capturing both local and global spatial relationships to improve semantic alignment. Ye et al. [20] introduced 3DQA, extending traditional VQA into 3D environments by jointly modeling geometry, appearance, and language inputs . These efforts highlight the potential of language as a powerful interface for 3D scene understanding.
Meanwhile, audio-visual learning has been increasingly explored in 3D tasks such as 3D facial animation [21] and embodied interaction, where speech serves as a driving signal for motion generation or behavior control. This line of work highlights the growing interest in leveraging speech as a natural modality for enhancing realism, interactivity, and user engagement in 3D environments.
Inspired by these directions, our work explores speechguided 3D visual grounding under noisy conditions—a novel yet promising task. By enhancing robustness to speech ambiguities, we aim to support more intuitive, speech-based 3D interaction in real-world applications such as AR/VR.
# C. 3D Visual Grounding
Since the introduction of the T-3DVG task by ScanRefer [22] and ReferIt3D [23], it has developed rapidly and found applications across various fields. Existing T-3DVG methods are typically categorized into two-stage and onestage frameworks, with the majority adopting a two-stage framework. Two-stage methods [24]–[29] first extract text features from the query language using language models [30], [31] and generate proposal boxes using a pre-trained detector [32], [33]. These features are then fused to select the bestmatched object. Among these, ConcreNet [34] presents four novel stand-alone modules designed to enhance performance in challenging repetitive instances. MA2TransVG [10] learns the multi-attribute interactions to refine the intra-modal and inter-modal grounding cues. Conversely, one-stage methods [35]–[37] directly localize the target in a single step. Notably, 3D-SPS [35], the first one-stage model, utilizes text features to guide visual keypoint selection, facilitating progressive object grounding. EDA [36] introduces a text decoupling module to generate textual features for each semantic component, significantly enhancing grounding performance, and even surpassing most existing two-stage frameworks.
In general, most methods focus on designing sophisticated architectures to achieve state-of-the-art grounding performance, but they overlook the challenges posed by complex real-world environments and practical applications. Our SpeechRefer is the first and novel 3DVG framework that could integrate with existing T-3DVG methods, bridging the gap between T-3DVG methods and real-world scenarios with complexity and noise.
Fig. 2. Overview of SpeechRefer. The framework builds on existing T-3DVG methods, incorporating two novel modules highlighted here: (1) Speech complementary module: This module encodes speech features $F _ { s }$ and generates speech scores $S _ { s }$ to explicitly complement the transcribed text. By integrating speech features, it helps identify the correct object and mitigates the impact of potential transcription errors. (2) Contrastive complementary module: This module aligns potentially erroneous text features with corresponding speech features by contrastive learning. This alignment ensures robust performance even when erroneous text features dominate. Key components are color-coded, where gray represents pre-existing modules (e.g., visual and text encoders). S, $T$ , and $O$ denote the global speech feature, sentence-level transcribed text feature, and the mean of object features of all target objects associated with the same description, respectively.
# III. METHOD
# A. Overview
As shown in Figure 2, our network has two inputs: one is point cloud $P \in \bar { \mathbb { R } ^ { N \times ( 3 + K ) } }$ that represents the whole 3D scene by 3D coordinates and K-dimensional auxiliary feature (e.g., RGB, normal vectors, or the pre-trained multi-view features [22]). Another input is speech, which is a human voice describing a target object in the 3D scene. The output is the predicted target object, which is an axis-aligned bounding box with the center $\mathbf { c } = [ c _ { x } , c _ { y } , c _ { z } ] \in \mathbb { R } ^ { 3 }$ in the world coordinate, and the size $\textbf { s } = ~ [ s _ { x } , s _ { y } , s _ { z } ] ~ \in ~ \mathbb { R } ^ { 3 }$ . Our SpeechRefer is a novel S-3DVG framework that could seamlessly integrate with existing T-3DVG methods, so the visual encoder, language encoder, and cross-modal fusion module are consistent with the T-3DVG models.
Our SpeechRefer features two key innovations. First, the speech complementary module (III-C) captures acoustic similarities between phonetically related words while distinguishing subtle distinctions. This module generates complementary proposal scores from speech signals, reducing dependence on potentially erroneous transcriptions. Contrastive complementary module (III-D) aligns erroneous text features with corresponding correct speech features through contrastive learning, ensuring robust performance even when errors dominate. Finally, Section III-E describes the total loss.
# B. Baseline: Text-guided 3D Visual Grounding via Speech-toText
To clarify the challenges addressed by our approach, we first describe the baseline T-3DVG method that relies on speechto-text transcription. The speech recognition model serves as a bridge between speech and T-3DVG by converting speech to text, which is then processed by any T-3DVG model to locate objects. However, even advanced speech recognition models frequently introduce critical errors due to factors like accents, background noise, and speech rate, reflecting real-world conditions. Figure 3 illustrates common transcription errors when the input speech contains uncertainties such as accents, where the original T-3DVG model often fails to ground the correct objects. Among these errors, inaccuracies in transcribing target object names are particularly frequent and problematic. Prior studies [36] have confirmed that grounding results significantly degrade when object names are omitted. Additionally, errors in attributes, such as misinterpreting “white” as “wide” can also mislead the model by overlooking crucial contextual cues, increasing its susceptibility to distractors within the same category and ultimately impairing overall performance.
# C. Speech Complementary Module
To address transcription errors, we propose a speech complementary module that incorporates phonetic features directly from speech. It draws on the acoustic similarities and nuances of input speech as a complement to mitigate the negative effects of potentially erroneous transcriptions. It mainly com
(a) Target object name (b) Object attribute (c) Target object name & attribute Wrong Results OrTiegixntal tbhedbheadsiswihnitferaontdobfltahcek rmigahttrmeossteswiwnitdhofwo.utrhle gs. the pillow is white. the pillow is on the orange couch. trheedrepiillsoawsoqnuatrhe gcroauychc.hair. it is next to a Transcribed The baddies in front of the right most window. The The pillow is wide. The pillow is on the orange couch. rTehderpeililsowa soqnutahre cgoruecahte.r. It is next to a Text bad has white and black mattresses with four legs.
Fig. 3. Incorrectly transcribed text may include errors such as misinterpreting the target object name or its attributes. These mistakes can mislead the model into searching for a non-existent object or attribute, ultimately failing to correctly identify and ground the intended object.
Fig. 4. Cross-modal Fusion Module. The module consists of two identical cross-modal matching modules, which is the specific matching module of existing T-3DVG models.
prises two submodules.
Phonetic-aware Refinement Module. First, we utilize the Whisper [17] as our speech encoder, which inherently preserves information about acoustic similarities that are often lost during the speech-to-text conversion. Whisper is a generalpurpose speech recognition model trained on a large, diverse speech dataset. As a multitask model, it can also perform language identification and speech translation, effectively exploiting contextual clues in speech, which makes it an excellent source of initial speech features for a wide range of downstream networks. Before the raw speech is fed into the network, it’s resampled to $1 6 { , } 0 0 0 ~ \mathrm { H z }$ , and an 80-channel logmagnitude Mel spectrogram representation $M _ { s } \in \mathbb { R } ^ { 8 0 \times L }$ is computed on 25 millisecond windows with a stride of 10 milliseconds [17]. The speech log-magnitude Mel spectrogram $M _ { s }$ is encoded into speech feature $W _ { s } \in \mathbb R ^ { N _ { s } \times D _ { s } }$ through the Whisper encoder, where $N _ { s }$ denotes the temporal sequence length of speech, and $D _ { s }$ denotes semantic feature dimension.
To enhance the pre-trained Whisper model, we design learnable layers and introduce classification loss to fine-tune the speech features. This process emphasizes the speech features of the target object class, which enables the model to better distinguish subtle differences. The initial speech feature $W _ { s }$ is the input of learnable layers, which can be formulated as:
$$
W _ { s } ^ { \prime } = S e l f A t t e n t i o n ( L i n e a r ( W _ { s } ) )
$$
where Linear(·) denotes the linear projection, SelfAttention(·) denotes the multi-head self-attention mechanism [38]. Then, the global speech feature is extracted via max pooling, denoted as $F _ { s } \in \mathbb { R } ^ { 1 \times D _ { s } }$ , and the features are repeated and stacked, generating the final speech feature representation $F _ { s } ^ { \prime }$ . After that, a speech classification loss is included, which can be formulated as:
$$
\mathcal { L } _ { c l s - s } = - \sum _ { i = 1 } ^ { C } y _ { i } l o g ( \hat { y } _ { i } )
$$
where $\mathbf { C }$ is the number of classes, $y _ { i }$ is the ground truth one-hot encoding, $\hat { y } _ { i }$ is classification scores after a classifier head. This enables the model to dynamically capture acoustic similarities in speech while distinguishing subtle distinctions by emphasizing the features of the target object name, further improving the expressiveness of speech features.
Confidence-based Complementary Module. Based on the former, the confidence-based complementary module explicitly generates complementary scores for proposals based on speech features to complement the textual score, mitigating the impact of potential transcription errors. Figure 4 illustrates the structure, the cross-modal matching component is the specific matching module used in existing T-3DVG models. We utilize the same network to fuse speech-visual and transcribed textvisual features respectively, ensuring feature consistency. Then the speech-visual fusion features are fed into an FFN layer to generate the speech scores $S _ { s }$ for the $\mathbf { M }$ generated proposals except for transcribed text scores $S _ { t }$ . This allows the model to incorporate multiple potential interpretations of the user’s intent, leveraging speech scores to complement the transcribed text ones. This reduces the model’s reliance on potentially erroneous transcriptions. The final scores $S = \beta S _ { s } + ( 1 - \beta ) S _ { t }$ , where $\beta$ is a weighting factor. Eventually, the bounding box with the highest final score will be considered as the final grounding result. Additionally, we introduce a reference loss to supervise the module, encouraging that the proposal with the highest score is closer to the ground truth, thereby promoting more accurate target object selection:
$$
L _ { r e f } = \alpha _ { 1 } L _ { r e f - s } + \alpha _ { 2 } L _ { r e f - t }
$$
$$
L _ { r e f - s } = - \sum _ { i = 1 } ^ { M } t _ { i } l o g ( s _ { s i } ) , L _ { r e f - t } = - \sum _ { i = 1 } ^ { M } t _ { i } l o g ( s _ { t i } )
$$
where $L _ { r e f - s }$ and $L _ { r e f - t }$ are cross-entropy losses based on speech and transcribed text scores respectively, following the
strategy in ScanRefer [22], we set the label $t _ { i }$ for the $i ^ { t h }$ box that has the highest IoU score with the ground truth as 1 and others as 0.
# D. Contrastive Complementary Module.
While utilizing speech information significantly mitigates the negative impact of potential transcription errors, situations still arise where erroneous text weighs heavily on proposal selection, leading to failures in identifying the target object despite the complementary speech scores. To address this, we propose a contrastive complementary module. This module builds on the observation that most objects (or attributes) incorrectly generated by transcription, such as “bat”, “grain”, etc., do not exist in the input scene. At the feature level, even if the textual features represent “grain”, they should align with the speech features of “grey” to correctly infer the target object. To achieve this, we employ contrastive learning to align incorrect textual features with the correct speech features, ensuring more accurate target object inference:
$$
\mathcal { L } _ { c } ^ { T S } = - \sum _ { i = 1 } ^ { N } l o g \frac { e x p ( s ( T _ { i } , S _ { i } ) / t ) } { \sum _ { j = 1 } ^ { n } e x p ( s ( T _ { i } , S _ { j } ) / t ) }
$$
where $N$ is the number of samples in one batch, $t$ is a temperature parameter, and $s ( \cdot )$ is the cosine similarity. $S$ is the global speech feature (i.e. $F _ { s }$ ), and $T$ is the sentence-level transcribed text feature (i.e. $F _ { t } ^ { \mathrm { ~ \tiny ~ . ~ } }$ ).
Additionally, to align with the corresponding visual features, we introduce additional contrastive loss between language and visual features:
$$
\begin{array} { r } { \mathcal { L } _ { c } ^ { S , T \to O } = - ( \displaystyle \sum _ { i = 1 } ^ { N } l o g \frac { e x p ( s ( S _ { i } , O _ { i } ) / t ) } { \sum _ { j = 1 } ^ { n } e x p ( s ( S _ { i } , O _ { j } ) / t ) } + } \\ { \displaystyle \sum _ { i = 1 } ^ { N } l o g \frac { e x p ( s ( T _ { i } , O _ { i } ) / t ) } { \sum _ { j = 1 } ^ { n } e x p ( s ( T _ { i } , O _ { j } ) / t ) } ) } \end{array}
$$
$O$ is the mean of object features of all target objects paired with the same description. We symmetrize these losses by adding the analogous terms, $\mathcal { L } _ { c } ^ { S \mathrm { \bar { T } } }$ , $\mathcal { L } _ { c } ^ { O \to S , T }$ . The total contrastive loss is the average of these contrastive losses. By aligning text features with corresponding speech features, the model can more effectively correct transcription errors, ensuring robust performance even when errors dominate.
# E. Total Loss
We train the network end-to-end with the total loss ${ \mathcal { L } } =$ $\mathcal { L } _ { d e t } + \gamma _ { 1 } \mathcal { L } _ { c } + \gamma _ { 2 } \mathcal { L } _ { r e f } + \gamma _ { 3 } \mathcal { L } _ { c l s }$ . $\mathcal { L } _ { d e t }$ varies with different integration methods. For example, in 3DVG-Transformer, $\mathcal { L } _ { d e t } = 1 0 \mathcal { L } _ { v o t e - r e g } + \mathcal { L } _ { o b j n - c l s } + \mathcal { L } _ { s e m - c l s } + 1 0 \mathcal { L } _ { b o x }$ , and this object detection loss exactly follows the loss used in [33] for the ScanNet dataset [39]. To better adapt the total loss function to different methods, we adjust $\gamma _ { 1 } , \gamma _ { 2 } , \gamma _ { 3 }$ to ensure that all the loss components are balanced and remain in the same order of magnitude.
Fig. 5. Speech Duration Distribution of SpeechRefer dataset.
# IV. EXPERIMENTS
# A. Experimental Setting
SpeechRefer and SpeechNr3D datasets. We introduce new speech datasets designed to simulate real-world variations, such as accents, speech rates, and background noise, based on ScanRefer [22] and Nr3D [23]. To incorporate accents, we use the off-the-shelf text-to-speech tool1 to generate speech with a Chinese accent, creating the SpeechRefer CA and SpeechNr3D CA datasets. For background noise, we overlay one of five common sounds, i.e., birds chirping, rain, cars, people talking, and piano, onto the speech recordings. Datasets with normal rate and without accent and background noise are denoted as SpeechRefer VF and SpeechNr3D VF (Variations-free) accordingly. Our SpeechRefer dataset matches the size of ScanRefer and classifies scenes as “unique” (containing a single object of the class) or “multiple” (containing multiple objects). Similarly, the SpeechNr3D dataset mirrors Nr3D and is categorized as “Easy” or “Hard” based on the number of distractors, and as “View-dependent” or “View-independent” depending on the reliance on the speaker’s viewpoint. The distribution of speech durations in the SpeechRefer dataset is illustrated in Figure 5, showing a prominent peak in the 4–8 second range, which represents the most frequent duration span. Overall, durations vary from 1 to 35 seconds, though instances at the extremes—particularly those shorter than 3 seconds or longer than 15 seconds—are comparatively rare. The average duration across all samples is 5.89 seconds. To further evaluate the robustness of SpeechRefer, we manually recorded an additional set of speech data comprising 699 training samples and 352 validation samples, designed to reflect real-world usage scenarios more accurately. The corresponding evaluation results are presented in IV-B.
Evaluation metrics. We adopt the same metrics as other T-3DVG methods: $\operatorname { A c c } @ 0 . 2 5 \mathrm { I o U }$ and $\operatorname { A c c } @ 0 . 5 \operatorname { I o U }$ , which measure the percentage of correctly predicted bounding boxes with IoU scores exceeding 0.25 and 0.5. For SpeechNr3D, we evaluate the percentage of successful matches between predicted and ground truth bounding boxes, following previous work [23].
TABLE I RESULTS ON THE SPEECHREFER CA DATASET (I.E., SPEECHREFER DATASET WITH CHINESE ACCENT) AND COMPARISON RESULTS AGAINST BASELINE. NOTABLY, SPEECHREFER EXHIBITS PROMISING PERFORMANCE WHEN FINE-TUNED ON THE SPEECHREFER VF DATASET (VARIATIONS-FREE SPEECHREFER DATASET).
TABLE IIRESULTS ON SPEECHNR3D CA DATASET SET WITH GT BOXES AND COMPARISON RESULTS WITH THE BASELINE.
Implementation details. We train the model end-to-end on a single NVIDIA RTX4090. The optimizer and learning rate are consistent with the T-3DVG methods integrated with SpeechRefer. For 3DVG-Transformer, we train for 200 epochs with a batch size of 2, pairing each scene with 32 sentences. For M3DRefCLIP, the network is trained for 60 epochs with a batch size of 4, pairing each scene with 8 sentences. Other training configurations align with standard T-3DVG methods.
# B. Experimental Results
Tabel I presents the results of 3DVG-Transformer [40] and M3DRefCLIP [41] integrated with our SpeechRefer on the SpeechRefer CA dataset, alongside baseline results from four methods [22], [36], [40], [41].
The baseline performance suffers significantly when compared to results using precise text inputs due to errors in transcribed text. In contrast, SpeechRefer shows marked improvements. For 3DVG-Transformer, SpeechRefer achieves gains of $2 . 2 3 \% / 1 . 7 6 \%$ in overall accuracy. For M3DRefCLIP, the overall accuracy improves by more than $2 . 3 0 \% / 1 . 6 0 \%$ with a notable $2 . 8 0 \% / 2 . 1 0 \%$ gain on the “multiple” subset, demonstrating SpeechRefer’s ability to effectively distinguish distractors. Further fine-tuning on the variationsfree SpeechRefer VF dataset yields even better results than training directly on it (please refer to the results in supplementary materials). This highlights SpeechRefer’s robustness in handling variations. Quantitative results demonstrate that our SpeechRefer can reliably localize target objects from speech descriptions and can easily integrate with existing T-3DVG methods, significantly enhancing performance and robustness. For the SpeechNr3D CA dataset, as shown in Table II, our results outperform the baseline across all subsets, with an overall accuracy improvement of $3 . 6 0 \%$ . Compared to the SpeechRefer dataset, SpeechNr3D is more sensitive to transcription errors due to its shorter descriptions (11.4 words on average, compared to ScanRefer’s 20.27 words). The shorter descriptions make transcription errors more impactful, but SpeechRefer mitigates these adverse effects effectively, demonstrating robust performance.
Furthermore, we evaluate SpeechRefer on the aforementioned real-world dataset, which closely mirrors practical application scenarios, with the results summarized in Table III. The experiments show that SpeechRefer can effectively adapt to real-world speech inputs with a relatively small amount of fine-tuning. Specifically, it’s fine-tuned on 699 manually recorded training samples and evaluated on 352 validation samples. For comparison, the results in the first row of Table III are obtained using the same number of synthesized validation samples, which are drawn from the introduced SpeechRefer CA dataset, further demonstrating the model’s robustness and generalization ability in realistic settings.
# C. Ablation Studies
Speech rate and background noise. We evaluate SpeechRefer’s robustness under varied noisy conditions by introducing two additional variations: speech rate and background noise. For speech rate, we tested slower (100) and faster (200) rates. Results in Table IV show improved performance at slower rates, where reduced speed enhances semantic clarity. Even at faster rates, SpeechRefer maintains strong performance, showcasing its resilience. Under background noise conditions, SpeechRefer consistently outperforms baseline, which indicates its robustness in real-world noisy scenarios.
Fig. 6. Qualitative results. We utilize green, red, and blue boxes to represent the ground truth, the prediction of baseline, and the prediction of SpeechRefer based on 3DVG-Transformer, respectively.
TABLE III PERFORMANCE ON REAL SPEECH DATA. BOTH EVALUATIONS—ON SYNTHETIC AND HUMAN-RECORDED SPEECH—WERE CONDUCTED USING THE SAME 352-SAMPLE VALIDATION SET. THE SYNTHETIC SPEECH SAMPLES WERE TAKEN DIRECTLY FROM THE SPEECHREFER CA DATASET, WHILE THE REAL-WORLD SPEECH CONSISTED OF MANUALLY RECORDED AUDIO COLLECTED FROM HUMAN SPEAKERS.
TABLE IV ABLATION STUDY OF SPEECHREFER’S ROBUSTNESS UNDER DIFFERENT CONDITIONS, INCLUDING VARYING SPEECH RATE AND BACKGROUND NOISE, BASED ON 3DVG-TRANSFORMER.
Module contributions. Table V provides ablation results for different module combinations based on 3DVG-Transformer. The second row highlights the significant gains achieved by the speech learnable layers, which capture subtle differences in addition to acoustic similarities, thereby enhancing speech feature expressiveness. From the final row, we can see that the combination of the confidence-based complementary module and the contrastive complementary module further improves performance. This validates their roles in mitigating transcription errors and enhancing robustness.
TABLE V ABLATION STUDY OF DIFFERENT MODULE COMBINATIONS IN SPEECHREFER: SLL (SPEECH LEARNABLE LAYERS), CBM (CONFIDENCE-BASED COMPLEMENTARY MODULE), CCM (CONTRASTIVE COMPLEMENTARY MODULE). EVALUATED ON SPEECHREFER CA DATASETS.
TABLE VI PERFORMANCE CROSS ALIGNMENT TYPES (T REPRESENTS TEXT FEATURES, O REPRESENTS VISUAL FEATURES, S REPRESENTS SPEECH FEATURES). ROWS 1–4: SPEECHREFER $^ +$ 3DVG-TRANSFORMER; ROWS 5–6: SPEECHREFER $^ +$ M3DREFCLIP.
Alignment types in Contrastive Complementary Module. Table VI illustrates performance across alignment types (T: text features; O: visual features; S: speech features). When aligning only speech and visual features or only text and visual features, the performance is not optimal.
Only Speech. While transcribed text improves overall accuracy, it is not an ideal solution for the speech-guided 3D visual grounding task, as incorporating text adds complexity to the network and training process. Therefore, we conducted a preliminary exploration of a simplified SpeechRefer architecture that relies solely on the speech and point cloud modality. The results, presented in Table VII, show that our approach significantly outperforms AP-Refer [42], the first and only 3DVG network relies solely on the speech and point cloud. This demonstrates the effectiveness and robustness of our SpeechRefer framework.
We also explore the impact of different speech encoders. Table VIII presents the results of the speech-only network using Imagebind and Whisper as speech encoders, respectively. The results indicate that when Imagebind is used as the encoder, the performance is notably lower compared to Whisper. We attribute this discrepancy to the fact that, while Imagebind is designed to learn joint embeddings across six modalities, it primarily integrates these modalities through image-paired data. As a result, when applied to point cloud-centric tasks, a gap remains between the modalities. In contrast, Whisper is an encoder specifically optimized for speech processing, enabling it to more effectively capture the phonetic similarities within speech and guide the network toward better results.
TABLE VII COMPARISON RESULTS OF AP-REFER [42] AND SIMPLIFIED SPEECHREFER INTEGRATE WITH 3DVG-TRANSFORMER (THE SECOND ROW) AND M3DREFCLIP (THE FINAL ROW).
TABLE VIII ABLATION STUDY OF SPEECH ENCODER. RESULTS ON DIFFERENT SPEECH ENCODERS (IMAGEBIND [43] AND WHISPER [17]), EVALUATING ON 3DVG-TRANSFORMER.
Ablation study of confidence score weight. Table IX presents the results from various weight combinations assigned to the speech confidence score $S _ { a }$ and the text confidence score $S _ { t }$ in computing the final confidence output $S$ . Among the configurations tested, the best performance was achieved with equal weights of 0.5 for both modalities. In contrast, relying solely on either the text or speech confidence score resulted in significantly lower performance. This highlights the importance of both speech and text information in guiding the network, demonstrating that a balanced integration of these modalities yields the most effective results.
# D. Visualization and Limitations
To further demonstrate the effectiveness of SpeechRefer, Figure 6 illustrates qualitative comparisons between our method and the baseline on 3DVG-Transformer. Transcription errors often lead the baseline to misidentify objects, while SpeechRefer overcomes these challenges to successfully identify the target objects.
Despite the promising performance, our method still has limitations. As shown in Figure 7, significant transcription errors, such as consecutive mistranscribed words caused by accents or background noise, can cause the textual features to deviate too much, leading the model to misidentify targets.
TABLE IXABLATION STUDY ON CONFIDENCE SCORE WEIGHTING WITH VARIOUSCOMBINATIONS OF THE SPEECH SCORE $S _ { s }$ AND TRANSCRIBED TEXTSCORE $S _ { t }$ . FIVE DIFFERENT WEIGHT CONFIGURATIONS WERE TESTED.EVALUATED ON SPEECHREFER CA DATASET.
“the bathtub is beige and wide. the bathtub is behind a toilet.” Speech to text The best harvest beige and white. The best harvest behind a toilet.
While text features improve overall accuracy, reliance on text is not ideal for speech-guided 3D visual grounding tasks. Future work will explore improved extraction and utilization of semantic information directly from speech, enabling reliable reasoning without text dependency. | Existing 3D visual grounding methods rely on precise text prompts to locate
objects within 3D scenes. Speech, as a natural and intuitive modality, offers a
promising alternative. Real-world speech inputs, however, often suffer from
transcription errors due to accents, background noise, and varying speech
rates, limiting the applicability of existing 3DVG methods. To address these
challenges, we propose \textbf{SpeechRefer}, a novel 3DVG framework designed to
enhance performance in the presence of noisy and ambiguous speech-to-text
transcriptions. SpeechRefer integrates seamlessly with xisting 3DVG models and
introduces two key innovations. First, the Speech Complementary Module captures
acoustic similarities between phonetically related words and highlights subtle
distinctions, generating complementary proposal scores from the speech signal.
This reduces dependence on potentially erroneous transcriptions. Second, the
Contrastive Complementary Module employs contrastive learning to align
erroneous text features with corresponding speech features, ensuring robust
performance even when transcription errors dominate. Extensive experiments on
the SpeechRefer and peechNr3D datasets demonstrate that SpeechRefer improves
the performance of existing 3DVG methods by a large margin, which highlights
SpeechRefer's potential to bridge the gap between noisy speech inputs and
reliable 3DVG, enabling more intuitive and practical multimodal systems. | [
"cs.CV"
] |
# 1 INTRODUCTION
L parogerelsasnignubaogtehmgeonderlsa (dLoLmMasin)ahpaplviecamtiaodnes (re.mg.a,rokpaebnledomain question answering [332], cross-modal video summarization [175], general-purpose code generation [191]) and specific domain applications (e.g., biomedical literature analysis [394], legal document review [221], SQL generation for business intelligence [250]). As shown in Figure 1, apart from technical advances in LLMs [289], [64], [460], [301], [241], [227], data management has emerged as a critical factor in unlocking LLMs’ full potential in these applications (DATA4LLM). It includes efficient and scalable solutions for data processing, storage, and serving across the LLM lifecycle, as evidenced in recent academic studies [157], [285], [254] and industry reports [327], [433], [69], [39]. Conversely,
LLM-powered techniques are increasingly being adopted to enhance data management tasks, such as data manipulation, analysis, and system optimization (LLM4DATA).
DATA4LLM. Effective data management is fundamental to the scalable development and deployment of LLMs. To illustrate this, we highlight representative scenarios where LLMs depend on specialized techniques for data processing, storage, and serving across various stages of the LLM lifecycle.
Example- 1⃝ Data Processing for LLMs. Processing a large-scale training dataset (e.g., ${ \sim } 4$ TB multi-modal tokens utilized in Qwen2.5-VL pretraining [70]) poses several challenges. First, acquiring diverse raw data (e.g., over 10,000 object categories for visual grounding) demands substantial efforts in data collection (Section 2.3.1) and, in many cases, data synthesis (Section 2.3.6). Second, preparing high-quality training samples requires robust pre-processing, including rigorous data filtering (Section 2.3.3), along with dedicated evaluation approaches. Third, the overall performance of LLMs depends heavily on an end-to-end pipeline that effectively schedules and coordinates these processing tasks, especially for the pretraining stage (Section 2.3.7).
Example- $\textcircled{2}$ Data Storage for LLMs. Managing storage for LLMs, spanning both training datasets (see Example- $\textcircled{1}$ ) and massive model parameters (e.g., DeepSeek-R1 with 671B parameters [162]), poses significant challenges. First, largescale datasets must be partitioned and distributed across multiple storage nodes, introducing challenges in data placement and consistency management (Section 2.4.2). Second, to support efficient LLM training and inference, these storage nodes must deliver high I/O throughput for timely data transfer to compute nodes (Section 2.4.4). Third, the massive size of model parameters increases the risk of training interruptions, necessitating robust fault tolerance mechanisms to recover and resume training from intermediate states (Section 2.4.5). Example– $\textcircled{3}$ Data Serving for LLMs. Data serving plays a critical role in selecting and preparing input data (e.g., the task-specific prompts), directly affecting the quality of LLM’s responses. Taking retrieval-augmented generation (RAG) as an example, EyeLevel.ai [37] observed that when relying solely on vector similarity, RAG accuracy declines notably with 10,000-page documents, and the performance degradation can reach up to $1 2 \%$ with 100,000 pages (still fewer than enterprise-scale datasets). Several challenges arise in this context. First, the retrieved knowledge is typically noisy and must be filtered and re-ranked to ensure relevance and factual accuracy (Section 2.5.1). Second, the retrieved content is often lengthy and exceeds the input capacity or comprehension of LLMs, necessitating effective compression techniques to preserve utility while improving performance (Section 2.5.2).
LLM4DATA. Conversely, various LLM-based techniques can be leveraged to enhance core data management tasks, including data manipulation, data analysis, and system-level optimization. The following examples illustrate how LLMs can be applied to improve these tasks in practice.
Example- 1 LLM-based Data Manipulation. Data manipulation, including cleaning, integration, and discovery, is critical for ensuring high-quality datasets. Traditional methods depend on rigid rules and domain-specific configurations, requiring extensive manual efforts and struggling with complex data samples [243], [78], [74]. For instance, standardizing date formats (e.g., “Fri Jan 1st 10:36:28 2021” vs. “1996.07.10 AD at 15:08:56”) or resolving textual inconsistencies (e.g., “Monticello VA, Jasper” vs. “Monticello VAA”) typically requires intricate programming scripts or handcrafted constraints [319], [432]. These approaches also struggle with cross-row error detection, such as mismatched city-state-zip entries. In contrast, LLMs can infer semantic similarities and autonomously generate cleaning workflows to resolve such inconsistencies without requiring explicit rule definitions [237], [432], [454]. This semantic understanding enables LLMs to adapt flexibly to diverse data issues and support more scalable and context-aware data manipulation (Section 3.1).
Example- 2 LLM-based Data Analysis. Data analysis over heterogeneous sources, such as medical records and transactional data, is essential in many real-world applications. Traditional deep learning models, while effective at performing specific semantic-level analysis, struggle to generalize across diverse data formats and task types. For instance, tasks such as table extraction and table-based question answering across heterogeneous sources (e.g., relational tables and knowledge graphs) often require the development of separate, specialized models. This process is both resource-intensive and difficult to scale. In contrast, LLMs offer a unified reasoning framework that leverages broad semantic understanding, enabling them to support a wide range of analytical tasks across various data modalities with greater flexibility and reduced efforts for task-specific engineering (Section 3.2).
Example- 3⃝ LLM-based System Optimization. System optimization entails configuring parameters (e.g., memory settings) and monitoring runtime status (e.g., resource utilization) to ensure optimal system performance. Traditional approaches, such as manual tuning or deep learning-based methods, are time-consuming and inefficient [474]. For instance, methods of Bayesian Optimization (BO) or Reinforcement Learning (RL) require numerous workload replays over 20 hours to identify promising configurations for a single TPC-H workload [177]. Moreover, root cause analysis over anomalies can be error-prone, particularly in multi-cause scenarios where metrics are highly interdependent [490]. In contrast, LLMs offer a new paradigm by integrating domain knowledge (e.g., tuning manuals) and applying advanced reasoning to instruct optimization. By leveraging retrievalaugmented prompts, LLMs can efficiently identify root causes or recommend precise configurations, enabling faster and more accurate optimization in complex environments [489], [248], [223] (Section 3.3).
# 1.1 Techniques of DATA4LLM
Characteristics of LLM Datasets (§ 2.2). As shown in Figure 1, datasets (following the “IaaS” concept) play a critical role in enabling the desired capabilities at each LLM stage, including (1) pre-training, (2) continual pre-training, (3) finetuning, (4) reinforcement learning, (5) retrieval-augmented generation (RAG), (6) LLM agents, and (7) evaluation. For each stage, we separately analyze the characters of required data (e.g., preferred formats and emphasized aspects within IaaS) and the corresponding data techniques (see Table 1).
Data Processing for LLMs ( $\ S$ 2.3). We introduce techniques to prepare high-quality datasets for LLMs based on a
series of processing steps.
Data Acquisition. Data acquisition aims to (1) extract relevant data (e.g., text and images) from noisy data sources with certain structures (e.g., dynamically rendered web pages) [73], [144], [76], [73], [6], [19], [30], [31], and (2) extract data from complicated data sources (e.g., scanned or handwritten documents) with techniques such as complex layout analysis [202], [18], [392], [180], [391], [407], [257], [326], [406].
$\bullet$ Data Deduplication. Data deduplication aims to identify duplicates in large-scale textual or multi-modal data, including exact string matching [122], [299], hash identification [88], [81], [122], [299], [347], [358], [207], [298], sample reweighing [167] and embedding-based clustering [46], [385], [360]. . Data Filtering. We review data filtering methods at two primary levels: (1) Sample-level filtering selects high-quality and diverse samples using strategies like perplexity measuring [383], [61], [288], influence assessment [254], [168], clustering methods [45], [436], prompt-based scoring [411], [264], [345], or mixes of these strategies [285], [84], [126]; (2) Content-level filtering aims to remove undesirable or harmful content from large-scale datasets, such as toxic language, personal identifiable information (PII), biased statements [268], [275], and improper images and videos [437], [216], [390]. $\bullet$ Data Selection. Data selection aims to select sub-datasets and evaluate their ability to accurately represent the target distribution, especially when handling diverse datasets or domains. There are methods like similarity-based data selection [423], [421], [321], [80], optimization-based data selection [130], [417], [269], and model-based data selection [465]. $\bullet$ Data Mixing. Data mixing aims to effectively integrate datasets from diverse domains without degrading quality or destabilizing LLM performance. Key techniques include: (1) Heuristic optimization, which empirically tunes data ratios to enhance downstream performance. Examples include twostage mixing [139], source rebalancing [347], and entropybased weighting [152]; (2) Bilevel optimization, which formulates data weighting as a nested optimization problem to jointly balance training and validation objectives [302], [135]; (3) Distributionally robust optimization, which enhances resilience to worst-case domain shifts by emphasizing underperforming or rare data domains [420], [278]; (4) Modelbased optimization, which builds predictive models to map data mixing ratios to loss and task performance. Approaches include linear predictive modeling (e.g., REGMIX [263]), nonlinear function fitting [152], [439], [160], scaling law-based estimation [323], and latent source attribution [251].
$\bullet$ Data Synthesis. We introduce data synthesis techniques designed to address the following key challenges: (1) Mitigating harmful characteristics such as toxicity or bias, which can be inherited or amplified in synthetic data (e.g., program-aided verification [496], semantic scoring [173], and multi-agent consistency filtering [346]); (2) Balancing data utility and privacy, through privacy-preserving synthetic rewriting and key-entity obfuscation methods during the RAG stage [450]; (3) Generating diverse and logically consistent reasoning data using approaches like formal proof-based validation [178], Chainof-Thought (CoT) branching and error correction [173], and high-quality problem synthesis guided by structure and complexity constraints [260], [442]; (4) Automating human-like evaluation and feedback generation with LLM-based preference modeling [71], judge models for response ranking [476], and clustering-based diversity quantification [92].
$\bullet$ Data Pipelines. We first introduce frameworks that integrate basic data processing operators and interfaces, serving as the general foundation for building data pipelines [90], [305], [368]. Then we showcase typical pipelines with heuristic mechanisms that properly arrange these operators (mainly for LLM pretraining) [311], [236], [310]. Finally, we discuss strategies that go beyond heuristic designs to further optimize these data processing pipelines [91].
Data Storage for LLMs (§ 2.4). We review data storage techniques for LLMs from the following main aspects.
Data Formats. We review commonly-used dataset and model data formats for LLMs. Dataset formats include TFRecord [44], MindRecord [40] for multimodal data, and tf.data.Dataset that can be directly fed into LLMs [43]. For model data storage, there are formats like Pickle [13] and ONNX [27].
LLM Data Distribution. LLM data distribution aims to store data across multiple storage nodes in a cluster, which mainly serves for storing large-scale LLM training data. Key approaches include (1) distributed storage systems like JuiceFS [16] and 3FS [15]; and (2) heterogeneous storage systems for model data (e.g., across GPUs and CPUs) [333], [334], [337], [336], [435].
LLM Data Organization. LLM data organization aims to transform data into a format suitable for storage and retrieval (mainly for the RAG stage) in heterogeneous forms. First, for vector RAG, relevant techniques include content formatting [97], [172], [57], [89], chunking [480], embedding [94], [24], [249], compression [50], [380], [381], [381]. Second, for graph RAG, we discuss indexing techniques such as generating textual summary for quick retrieval [127], [164], [136]. We also introduce the systems that integrate these techniques, including vector search engines [125], [26], [34], [25] and graph storage platforms [292], [65], [1].
$\bullet$ LLM Data Movement. LLM data movement aims to improve the speed of data movement across storage and compute nodes. Relevant techniques include (1) caching data [219], [161], [469]; (2) offloading data/operator to multiple devices (e.g., across CPUs) [158], [67], [159], [468]; and (3) overlapping of storage and computing in training stage [466], [479].
LLM Model Data Fault Tolerance. LLM model data fault tolerance aims to enhance the ability to recover from system failures during model training. Relevant techniques include (1) checkpointing [291], [194], [403], [389], which stores checkpoints across a hierarchical storage system; and (2) redundant computation, which leverages redundant states of LLM in parallel training (e.g., pipeline parallelism [382], hybrid parallelism [186], [147]) to support rapid fault recovery.
$\bullet$ KV Cache in LLMs. KV caching in LLMs is essential for enabling fast and efficient inference by managing key-value memory usage. Existing techniques include: (1) Memory layout and allocation, which optimize the physical organization of KV memory for high performance and scalability [220], [428]; (2) Storage offloading, which places KV data on suitable storage media to balance speed and capacity [197], [148]; (3) KV compression, which reduces memory footprint through techniques like encoding compression [265], [255], [150]; (4) Efficient indexing, which accelerates KV access via specialized retrieval structures [440], [478].
Data Serving for LLMs ( $\ S$ 2.5). We provide an overview of data serving techniques tailored for LLMs from four aspects.
$\bullet$ LLM Data Shuffling. LLM data shuffling aims to determine the appropriate order of data application during stages like LLM training and RAG. In the training stage, we discuss data pruning techniques (e.g., sample-scoring-based approaches [137], [66], model-state-based approaches [372], [56], [416], [276]) and data-centric training strategies [123]. In the RAG stage, we discuss RAG knowledge filtering [280], [114], [87] and re-ranking [128], [12], [318], [47].
LLM Data Compression. LLM data compression aims to compress the model’s input data to stay within the context window limit or to facilitate model understanding. Relevant techniques include: (1) RAG knowledge compression (e.g., rule-based [427], [348], [200] and model-based method [101], [335]); and (2) prompt compression (e.g., metric-based [189], [190] and model-based method [303], [293], [102]).
$\bullet$ LLM Training Data Packing. LLM training data packing aims to ensure uniform sequence lengths in training inputs. Relevant techniques include: (1) short sequence insertion [116], [259]; (2) optimizing sequence combination [218], [316]; and (3) semantic-cased packing [364], [349]).
LLM Inference Data Provenance. LLM inference data provenance aims to ensure the factual consistency of LLMgenerated content. Relevant techniques include: (1) embedding markers [482], [105], [256]; and (2) statistical provenance [212]).
# 1.2 Techniques of LLM4DATA
LLM for Data Manipulation ( $\ S$ 3.1). LLMs have been increasingly applied to data manipulation tasks, with the goal of preparing high-quality datasets for non-LLM applications and enhancing data quality for downstream usage. Key areas include data cleaning, data integration, and data discovery.
$\bullet$ Data Cleaning. This task involves standardizing and refining datasets through a series of operations. We highlight three major subtasks: (1) Data Standardization, which reformats data samples using handcrafted standardization prompts [279], [63] or agents that generate cleaning operations or pipelines [319], [237]; (2) Data Error Processing, which identifies and corrects noisy data via direct LLM prompting [103], [461], [432], context-enrichment techniques [78], [74], or task-specific finetuning for error handling [432]; (3) Data Imputation, which fills in missing values using explicit imputation instructions and retrieval-augmented generation (RAG) methods [129].
$\bullet$ Data Integration. This task focuses on identifying and reconciling semantically related datasets across heterogeneous sources. We review two core subtasks: (1) Entity Matching, which aligns data entries referring to the same real-world entity using structured prompts [308], [134], sometimes augmented with predefined code-based reasoning strategies [430]; (2) Schema Matching, which establishes correspondences between schema elements using direct prompting [304], RAG techniques incorporating multiple models [267], knowledge graph-based methods [277], and agent-based workflow generation [320], [340].
$\bullet$ Data Discovery. This task aims to extract informative insights from a dataset. We cover two key subtasks: (1) Data Profiling, which generates descriptive metadata and summaries using task-specific prompts [456], [58], and enhanced with context via RAG techniques [72]; (2) Data Annotation, which assigns semantic labels or types through various prompting strategies [203], [204], [217], supported by classical retrieval-based [408] and LLM-generated context [163].
LLM for Data Analysis (§ 3.2). LLMs significantly improve the analytical capabilities across structured, semistructured, and unstructured data.
Structured Data Analysis. For relational data analysis, natural language interfaces allow users to write high-level questions instead of SQL/Python code [452]. Multi-step QA frameworks (e.g., TAPERA [475] and ReAcTable [464]) decompose complex queries, while some end-to-end solutions fine-tune LLMs specifically for tabular tasks (e.g., TableGPT [240]), apply content retrieval (e.g., CABINET [306]) or convert tables into images for analysis (e.g., Table-LLaVA [477]). For graph data, LLMs facilitate semantic queries with GQL generation (e.g., $R ^ { 3 }$ -NL2GQL [493]) and knowledge-aware QA by retrieving or reasoning over relevant subgraphs [424].
$\bullet$ Semi-Structured Data Analysis. Meanwhile, handling semistructured data (e.g., JSON and spreadsheets) remains challenging. Recent benchmarks (e.g., TEMPTABQA [165] and SPREADSHEETBENCH [281]) reveal substantial performance gaps.
Unstructured Data Analysis. Finally, unstructured data analysis leverages LLMs to address document and program analysis tasks. For document analysis, OCR-dependent approaches involve performing OCR on document images followed by the integration of textual, layout, and visual features for reasoning (e.g., UDOP [376] and DocFormerV2 [62]). OCR-free methods directly generate the answer with end-to-end multimodal LLMs (e.g., Pix2Struct [225] and DUBLIN [49]). For program analysis, LLMs could serve as vulnerability detection tools using program analysis based training (e.g., PDBER [271]) or case-driven prompt engineering (e.g., VUL-GPT [270]). For program related analysis, LLMs could summarize repositories (e.g., SCLA [284]) or serve as a repository-level code completer (e.g., RepoFusion [357]) using their powerful semantic reasoning abilities.
LLM for Data System Optimization ( $\ S$ 3.3). LLMs equipped with advanced reasoning and code generation capabilities have been increasingly adopted in core system optimization tasks. These include: (1) configuration tuning (identifying optimal system settings); (2) query optimization (rewriting or refining input queries for performance gains); and (3) anomaly diagnosis (analyzing system issues to ensure performance reliability).
• Configuration Tuning. This task leverages LLMs to determine effective configuration parameters for improved system performance through: (1) Prompt engineering tailored to tuning tasks, using both manually crafted [243], [132], [156] and automatically generated prompts [491], [473]; (2) Retrieval-augmented generation (RAG), which incorporates prior tuning experiences during offline knowledge base preparation [223] and online knowledge retrieval [96]; (3) Objectivealigned tuning, which is enhanced through targeted training techniques [491], [177].
$\bullet$ Query Optimization. This task utilizes LLMs to rewrite queries or improve execution plans by: (1) Designing optimization-oriented prompts that include explicit guid
(b) Continual (c) Supervised (a) Pre-Training Data (Morethan1T Pre-Training Data + Fine-Tuning Data Unlabeledsamples) (10M\~10BUnlabeledsamples) (10k\~10Mlabeledsamples)
package kg.jl.common; This is a list of characters RedPajama OBELICS Medical Imaging Data Closed QA
import android.c..o.ntent; Aglraivnat.i.n.e... GeDnaertal (315431M itemxatgaensd) caWnhcaetrPi?sr(olMumnalgipgtnant tLyupnegocfamncaleirg insanat
gQe: Itshtishetroekaeneypwtahye Iscetaunp? 14.8% 2.3% (1.D2oTmtaoiknesn) OtupemnorQiAncludes .. ... tumor that .
f\\$iusxnGed\$csi:trib.eo.e.cnta{eIsnditrgmorpadlpuehctiwoitnh} tLheet 215..070% SoDuartcaes tLoyuruipmgneiognroacftahemnastcaileinrgtinhsaenat MaTnhcheemstaetrchCibtyetawnedeAnFC Ttohimsoigmragpehya p(CpeTa) r.s.c.taonboef tahceolumnpgust.e..d Task Captioning ... toTChfhTeimslsaulicisnganganasi,nmtm.w.e.aidtgihceaolf A Little General Domain Data sentence description area circled in red, of the figure.
Datasets BAAI DataComp OMON OBELICSanc Skywork Nniteds Datasets LLEMMA Datasets Dolly Stanford EarthGPT (f) LLM Agent Data (e) LLM RAG Data (d) Reinforcement Learning Data
Interaction Trajectory Data Medical Diagnosis Report (1k\~1Mlabeledsamples) Instruction(i) What's the weather? Instruction RLHF (1) Lung cancer is a disease. suopuetrcvoismioen Action(a1) Togo<eltpLUyosthacogatneio>Dnata GPS TCyopnet:exMt:alLiugnganctatnucmeroirs one of the most proceWshstnuicpselrur?nvigsion (31) ItLcuangscparencaedrtios iassdiusesa..s.e. 0 Observation(o1) Location=London common and serious types of cancer (3) It always causes fever... Tool Usage Data globally... Action(a2) ge<tL/Wpoeyctahtiohonenr>By Location <python> Image: accsomCmalopluagnmiheodunbty Consisdyesrtceo<mtmhdiminsokena>sres;p..i.ratory It i<safnrsacwteur>e. Answer Current weather is... of hemoptysis; It i<s/ftrhaicntku>re. </answer> . ·
Datasets $\textcircled{9}$ Datasets Kragas PLRJs ARAGOG Datasets PRM800K PKU Beaver NVIDIA NeMo-Aligner B A hh-rlhf
ance [363], [491], [438] and in-context examples [248]; (2) Enriching optimization knowledge using RAG techniques, including LLM-generated and hybrid retrieval strategies [369]; (3) Enhancing optimization performance through taskspecific training [53], [196], [438]. Anomaly Diagnosis. This task involves identifying the root causes of anomalies and suggesting effective solutions via: (1) Direct LLM prompting based on detailed diagnosis context [155]; (2) RAG-based enrichment using relevant historical diagnosis experience [490], [425]; (3) Multi-agent collaboration mechanisms for comprehensive diagnosis [490], [359].
# 1.3 Comparison with Existing Surveys
Different from existing LLM and data management surveys [405], [55], [86], [398], [272], [274], [374], [488], our survey offers a comprehensive and detailed overview of the key intersections between LLMs and data management, highlighting how they can mutually benefit from each other. We uniquely position our work at the intersection of data for LLMs (e.g., how to acquire, process, store, and serve LLM data) and LLMs for data (e.g., how LLMs can be leveraged to enhance data management tasks).
• We propose the IaaS concept as a principled lens to assess LLM dataset quality. The IaaS concept identifies four essential dimensions, including inclusiveness, abundance, articulation, and sanitization. This concept is promising to offers an evaluative criteria for guiding data management and understanding its impact across the LLM development lifecycle (see Section 2.1).
We investigate the unique characteristics of data across different LLM development stages (Figure 2), and provide a systematic overview of the associated challenges and techniques in data processing, storage, and serving (Table 1). In contrast, prior surveys [405], [55], [86] primarily center on the pre-training stage without covering the full LLM lifecycle like supervised fine-tuning (SFT), retrieval-augmented generation (RAG), and agent-based applications.
We provide a lifecycle-based taxonomy of DATA4LLM, introducing key tasks in data processing, storage, and serving. For each task, we summarize representative methodologies, discuss their design principles, and analyze their strengths and limitations. In comparison, [405] focuses on deduplication and filtering, [55] emphasizes data selection, and [373] reviews data annotation strategies, none of which offer a systematic perspective across the data management pipeline.
$\bullet$ We introduce recent advances in LLM4DATA, outlining key components of LLM-driven data optimization. While earlier work [488] has investigated the application of classical machine learning in data management, it largely neglects the distinctive strengths and limitations of LLMs, particularly in manipulating data for non-LLM tasks, processing semistructured and unstructured data, and enabling system-level optimizations.
We highlight open challenges and future directions from both ends: (1) improving data management techniques to meet practical LLM training and deployment needs (e.g., efficient data evaluation, scalable multi-modal storage), and (2) enhancing LLMs’ ability (e.g., private knowledge understanding, informative representation for non-sequential and non-textual data) to perform complex data management tasks across diverse real-world scenarios.
# 2 Data Management for LLM (DATA4LLM)
# 2.1 “IaaS” Concept of LLM Data
Based on our investigation of over 400 papers 2, we introduce the IaaS concept for evaluating the quality of LLM datasets. (1) Inclusiveness: LLMs require data with broad and diverse coverage across multiple dimensions, including domains (e.g., general knowledge, specialized fields like finance, medicine, math [98], and physics [233]), task types (e.g., question answering, summarization, code completion [401], [290], [353], [45], [436]), data sources (e.g., GitHub, Wikipedia [149], [11], [330], [347]), languages [93], [347], expression styles (e.g., academic, casual, formal [282], [470]), and data modalities (e.g., text [149], [11], images [145], [185], videos [437], [216], [390], tables [330]).
(2) abundance: LLMs require data with appropriate volume and balanced composition to prevent overfitting on homogeneous data. Specifically, abundance of data involves: (i) constructing well-balanced datasets during pre-training [139], [302], [420], [263], ( $_ { \mathit { 2 2 } }$ ) adjusting data ratios to align with target applications during fine-tuning [278], [135], and (iii) continually enhancing domain-specific capabilities while maintaining acceptable general performance degradation in continual pre-training [323], [160]. Notably, the strength of LLMs lies not only in large-scale data [282], [481], [11], [330], [149], [347], but also in constructing purposefully balanced datasets, which can further accelerate training and reduce computational cost. (3) articulation: LLMs require data that exhibit strong articulation, including three key aspects: $( i )$ the data should be wellformatted (e.g., proper punctuation and capitalization [90]), clean (free from duplicates, typos, and irrelevant content such as spam or gibberish [90]), and self-contained, featuring clear, fluent, and unambiguous language [282], [470], (ii) the data should be instructive [178], [179], [98], i.e., offering sufficient context, guidance, and intermediate explanations that help the model connect questions to relevant background knowledge and understand the reasoning process. (iii) the data should involve step-by-step reasoning[230], [442], [346], [173], [496], such that enhancing the LLMs’ reasoning capabilities by decomposing complex tasks into smaller, interpretable steps.
(4) Sanitization: LLMs require data to be sanitized, meaning it is rigorously controlled and filtered to remove harmful elements while maintaining inclusiveness and neutrality. This involves four critical dimensions: (i) Privacy compliance, which requires the exclusion of personally identifiable information (e.g., ID numbers, phone numbers), inferred social relationships, and geolocation-related metadata [450], [268], [275]; (ii) Toxicity-free content, ensuring the complete removal of hate speech, incitement to violence, and psychologically harmful language, as well as eliminating any discriminatory or aggressive semantic constructs [296]; (iii) Ethical consistency, which prohibits the presence of extremist ideologies, instructions for illegal activities, and stereotype-reinforcing narratives that may cause social harm [345], [360], [296]; and (iv) Risk mitigation, filtering out unverified medical claims, politically misleading information, and culturally insensitive expressions to prevent misinformation and value misalignment. Sanitized data must maintain a neutral tone and adopt an inclusive contextual framework, serving as a critical foundation for building safe LLMs [345], [360].
# 2.2 Data Characters across LLM Stages
Next we specifically discuss the data characteristics across different LLM stages, together with the distinct techniques for data processing, storage, and serving (Table 1).
Data for Pretraining. In the pre-training stage, LLMs rely on TB-scale, diverse datasets to acquire broad language and even cross-modality understanding capabilities, while reducing the risk of overfitting. These datasets are typically sourced from a wide range of domains and formats, including web crawls (e.g., HTML pages and WARC files [11]), opensource code repositories (e.g., raw source code files with metadata [14]), books (e.g., plain text or EPUB formats [497]), academic papers (e.g., LaTeX source or PDF-converted text [2]), and interleaved image-text corpora (e.g., aligned captioned images in JSON or WebDataset format [224]).
Data for Continual Pre-training. Continual pre-training (or continued pre-training) typically involves datasets containing millions to billions of tokens, which are often over 100 times smaller than those used in the initial pre-training stage. The primary objective is to fill knowledge gaps and adapt the model to specific domains. Representative domain-specific datasets are like: (1) Finance: BBT-FinCorpus [273], a largescale and diverse financial datasets comprising approximately 300 GB of text; and (2) Healthcare: Medical-pt [429], a Chinese-English medical dataset containing 360,000 entries curated from medical encyclopedias.
Data for Supervised Fine-Tuning (SFT). Unlike pretraining, SFT relies on data presented in the form of instruction-response pairs, where the response includes not only the correct answer but also guidelines on tone, style, and reasoning steps to ensure user-friendly output.
The SFT stage typically involves much smaller datasets compared to pre-training. These datasets often consist of thousands to millions of labeled examples, with each example carefully crafted to guide the model in learning a specific, narrower set of tasks. For instance, in Figure 2, (1) the summarization task constructs prompts using problem descriptions and summarization objects; (2) closed QA using questions and corresponding knowledge texts; (3) open QA tasks using only questions without knowledge text; and (4) captioning tasks using task descriptions and images. These prompts are paired with unique responses for model finetuning.
The composition of SFT datasets varies based on the application scenarios:
(1) General Instruction Following: For LLMs as generalpurpose chatbots, SFT data include instructions for various daily tasks. Databricks-dolly-15K [110] is a corpus containing over 15,000 records. It encompasses seven types of tasks, including creative writing, closed QA, open QA, summarization, information extraction, classification, brainstorming. This dataset is designed to enhance LLM to better adapt to specialized outputs that align with human-style requirements across diverse tasks. For example, in text summarization, it provides concise summary statements; whereas in text organization tasks, it structures outputs in table-of-contents format. (2) Specific Domain Usage: For models specialized in fields such as law, finance, or medicine, the SFT data focuses
TABLE 1: Technique Comparison - Data Processing, Storage, and Serving Techniques for Different LLM Stages. “N/A” indicates that no relevant work has been reported yet, although the corresponding techniques could potentially be applied.
Information Extraction Alpaca - GPT4 Judgement Prediction Refactoring/ Code Cleanup Classificat ion Documents Corpus LeSgualmCmaasrei zation Firefly M odel Corpus Summarization Classification SFT Brainstorming JPurdegdiecmtieont LSaFTw Legal Question M isc CoSFdT e Bug Fixes General Creative Writing LDegetael cEtvioent M40o3dKelSaimzepl1e3sB Answering M59oKdeSlaSmizpel e1s6B Model(aSi)ze 12B ExaJumdiincai tailon (5k Samples) (c) (e) Open QA Legal Element Legal Question Closed QA SimilaErxtCrascteiso nM atching Answering Code Testing & QA Document Reading Public Opinion Summarization Development Comprehension Questio(nQAn)swering Reasoning SentimeSnttyle Transfer LegAanlswQeureisntgion JudiGcieanl eRreatison ing Code Synthesis AutoRmeaptiacirCode Natural Language Structured Data Prison Term ase Understanding
Nam ed Entity Eval SuTmoxmicairtiyzation ChargePrPerdeidcitcitoinon Eval Legal Consultation Eval Code Translation
M oRteicoongDniettioecntion Ge45nK Saemrpleas l Translation SiMmCailtoacnrhtirCnoagvsersy 11.L7KaSamwples Legal Article 7.C5KoSamdples Linguist ic (b) Classification Focus Mining (d) Recommendation (f ) Code Classifcation GraEmvamluartion Case Recognitio Code Element Recognition CreatGiveen eNrat iuorna (LNaLnGg)uage Commonsense Judicial Summarization Named Entity Recognition Code Compilation
on tasks pertinent to these fields. For example, DISC-LawSFT [447] is a legal SFT dataset containing 295k data entries from various legal scenarios, such as legal information extraction (32k), legal judgment prediction (16k), legal event detection (27k), and legal question-answering (93k). Similarly, Medical-SFT [429] is a medical SFT dataset (totaling 2,060k pieces), composed of medical inquikry data (790k), online medical encyclopedia QA data (360k), English medical inquiry data (110k), medical knowledge graph QA data (79k). For tasks such as legal question-answering and legal judgment prediction, the data is structured as triplets, comprising the prompt, response, and supporting reference information (e.g., legal provisions, case-based evidence, or regulatory documents). For the remaining tasks, they all take the form of instruction pairs composed of prompt and response.
Data for Reinforcement Learning (RL). RL is generally divided into two types: one is RLHF (Reinforcement Learning with Human Feedback), and the other is Reasoning-oriented Reinforcement Learning (RoRL).
(1) RLHF: RLHF data is typically smaller than SFT data (e.g., thousands to dozens of millions of data samples), which involve more complex data annotations. Specifically, annotators compare multiple candidate responses to the same instruction and rank them according to human preference (e.g., levels from most helpful to least helpful). Collecting these preference pairs or rankings is more time-consuming than constructing instruction-response pairs in SFT.
In the general domain, UltraFeedback [113] consists of 64,000 samples. For each sample, different models are used to generate 4 responses for each prompt (totaling 256,000 responses). GPT-4 is then employed to generate feedback for these four responses, which is used to help LLMs to generate outputs that are in line with human standards and appropriateness.
In specific domains such as healthcare, MedicalRLHF [429] has 4,000 random questions from a Chinese medical dialogue dataset. Each question is paired with a wellorganized answer (i.e., the human doctor’s reply) and a weaker answer from Llama-based model fine-tuned over synthesized QA samples. These labeled data are used to train a reward model. During the training of the LLM, the reward model provides feedback based on the LLM’s answers, guiding the training process towards generating high-quality responses.
(2) RoRL: Compared to the complex annotated data in RLHF, RoRL allows the model to discover the best reasoning approach on its own through the correctness of the reward model. Specifically, it focuses on tasks requiring long-term reasoning, such as mathematical, coding, and logical designing experiments [162]. Under the premise of providing feedback on whether the answer is correct or not, algorithm such as the Group Relative Policy Optimization (GRPO) [162] and longCoT RL [377] are adopted to train the model to independently discover the optimal problem-solving steps and converge.
Data for Retrieval-Augmented Generation (RAG). The RAG stage differs from above training stages, which involves large-scale dataset (reference corpus) for LLMs to retrieve from during inference. In this stage, data must be strictly reviewed to ensure authenticity and validity, while dynamic data requires real-time updates. The domain of RAG datasets varies depending on the specific application scenarios. For instance, (1) in the medicine-specific LLM application (Medical-Graph-RAG), MIMIC-IV is used as the RAG dataset [415]. This dataset contains data from over 65,000 ICU patients and more than 200,000 patients treated in emergency departments; (2) in the legal field, the RAG knowledge base used by DISC-LawLLM [447] contains more than 800 national and local laws, regulations, and rules, as well as 24,000 legal-related exam questions. Besides, RAG data can include users’ historical conversation records or personal information, in order to build a user-personalized LLM [350], [451], [453].
Data for LLM Evaluation. Suitable evaluation datasets are essential for evaluating the performance of LLMs. They provide representative data samples that reflect different aspects of an LLM’s capabilities.
In the general domain, the MMMU benchmark is used to assess the performance of LLMs across major multi-modal tasks in six key disciplines, covering 30 subjects and 183 subfields. It is built from 11,500 carefully curated questions and effectively tests models’ perception, knowledge, and reasoning abilities [448].
In specific domains, typical evaluation datasets include those in coding, healthcare and law domains: (1) OpenAI’s HumanEval dataset includes 164 programming problems, complete with function signatures, docstrings, bodies, and multiple unit tests. These problems are handcrafted to ensure they are not part of the training sets used for code generation models [95]; (2) MedQA [198] contains a large number of medical exam questions from various regions, totaling 61,097 questions; (3) LexEval [232] constructs 23 evaluation tasks based on a legal cognitive classification framework, covering different aspects of legal knowledge, with at least 100 evaluation samples for each task.
Data for LLM Agents. Beyond vanilla LLMs, agents strive for more advanced capabilities such as planning, tool orchestration and multi-turn dialogue capability [262]. These capabilities impose higher requirements on the training data for LLMs. First, many studies [396] aim to enhance planning abilities through interaction trajectory data, which refers to a sequence of records generated during the interaction between the agent and the environment, typically represented as (instruction $i$ , action $a _ { 1 }$ , observation $o _ { 1 } , . . .$ , action $\boldsymbol { a } _ { n }$ ). UltraInteract [446] takes the instruction as the root node, and uses both the correct actions and their corresponding incorrect actions as nodes to construct a preference trajectory tree, enabling the agent to learn the human preference of different actions. Second, other studies focus on enhancing the agent’s tool usage capabilities using tool usage data. For instance, AutoTools [351] fine-tunes models on tool data that is labeled with special tags, such as <python>code</python>, thereby grounding language in concrete tool invocations. Third, to enhance the agent’s multi-turn dialogue capability, UltraChat [117] employs an additional LLM to simulate user instructions and conversational content, thereby collecting multi-turn dialogue data.
TABLE 2: Data Acquisition for LLMs.
# 2.3 Data Processing for LLM
# 2.3.1 Data Acquisition
Unlike classic machine learning, which primarily relies on collecting labeled data within a specific domain for supervised training (e.g., data for sentiment analysis and sentence similarity estimation), data acquisition for LLMs typically (1) relies on large-scale web scraping to collect extensive data across diverse domains for unsupervised pretraining and (2) employs techniques such as layout analysis and entity linking to extract additional data from the collected content.
# Principles
Unlike classic ML data acquisition, LLMs rely heavily on large-scale web scraping to ensure broad coverage and robust generalization. The main challenge is extracting high-quality textual content, often aided by layout-based and entity-linking methods. Managing time and resource efficiency at scale remains vital.
Data Sources. The data is gathered from two primary sources: (1) Public Data, often freely available under open licenses, include resources such as webpages [11], books [497], and publicly accessible code repositories [214]. • Webpage sources provide extensive pre-processed website content, such as 1.56T english text from crawled websites in C4 [331], 6.6B multilingual pages in mC4 [431], 6.3 trillion tokens of multilingual pages in CulturaX [297]. Digitized books supply structured, high-quality text, such as over 75,000 eBooks in Project Gutenberg [38], over two
Training Stage Requirement Inference Stage Requirement RAG Stage Requirement Multiversity Low Repetition Rate ■ Effectiveness Standard Format Large Scale Content Privacy Content Safety Fast Content Safety 厂 Content Privacy 1 Fast Retrieval Fast ■ Accurate Retrieval 2. Data Deduplication 1 3. Data Filtering Data Storage 3. RAG Data Storage
ProDcaetsas ng ExacMtHaStacushbhisntgring SimMHDa5sh SuMffiinxHAarsrhay ESvtaltiusatiicoaln PeGrpaldeixeitnyt/ShaplKe-ymVeaalunes Storag1e. TrainMiulntigmoDdalata StLoLrMa-ngateiv OrCghauninzkaitnigon LSQoegumiecraynl-tiBUcansiotersd dynaTmriececShturnuckitnurge □ baEsemdbCeldudsitnegri-ng with TeSFxeta/imIrmDeaeDgdeupE ncoder EvSHaclyoubrairtinidgon PerMmeutraitcion ComMbeitnriaction Distributed Data Chunking AMsseotcaidataitoan cEomVbepecrdtedosirsnigon LinReadruFiDcntiiemo-neTnu.nedNMond-ReLeilndBeuacsrteioDdinmen. □ FrAenqauleysnicsy SoftDeDup Bloom Filter Regular Expression ProFmiltpetr-ibnagsed Storage CAhpaipnorRteioplniecadtiQounerwiietsh Vector Storage Table Schema Columnar Storage Graph-Based Data Caching ↓ 1. Data Acquisition 6. Data Synthesis Data Data/Operator Offloading (CPUs) Indexing AHgiegraergcahtiicoaln ASweamreanteiscs Movement Web Crawling Trafilatura Dragnet Prompt Program-Aided StoragOevearnladpCpoinmgputing Graph Storage PrBoapserdty- TBraipsled- MuSlutip-pMortdel □ 8 HHEHHEEEE Knowledge Distillation Distillation Layout Analysis PaddleOCR GOT2.0 Distillation Entity Linking UMIE Fox ReFinED Multi-Stage Collaboration KD 2. Model Data Storage 4. Data Storage For Inference Fault Asynchronous Redundant Chunking-Based Create Prefix 4. Data Selection Insutrction-Response Pair Synthesis Tolerance Checkpoint Calculation KV Space Management Index Similarity-Based Cosine Similarity Bayesian Similarity DPartea-STryanitnhiensgis Mathematical Data Synthesis Offload Model Offload (GPUs,CPUs,NVMe Memories) Cache Shrinking within (between) KV Layers Selection Lexicon Set Overlap Bayes-based Selection Rephrasing Cross-Language Code Synthesis Data Synthesis Synthesis Data Serving 3. Data Serving For RAG □
Optimization-Based Linear Search Gradient-based Search Selection Kernel Density Regularization SFT Knowledge & QA Pair Synthesis 1. Data Serving For Training KnFioltwelreidnge LanEgvualugaetiMonodel MSoedlelc-tBiaosned Prompt-based Scoring Data Synthesis DaAtlaigSnymntehnetsis DaRtea Ssyontinhegsis PaDcaktiang OptimiSzihnogrtSeSqeuqeunecnece ISnesmeratnitoicn-Based KRne-orwalnekdingge LLM-based Metric-Based Combination Packing 5. Data Mixing 7. End-to-End Pipelines SelDeactiaon Model- Statme-ple ScEoxrpienrigence-Based CKonmopwrlesdsigoen SLM-based LLM-based Empirical Quality-based Tweaking Ranking Framework Data-Juicer Dataverse ↑ Strategies Two Stage Training Data Diversity By Entropy CCNet CC_Cleaner MDR ↓ 2. Data Serving For Inference ModMeilx-ibnagsed RoDbiussttr bOupttiiomniazlalyt on ReLgirnesasiron NRoeng-rLeisnsieoarn OrchestrationDCLM-BASDELaItNaE-JuiceMroSdaeln-sdpbeocixfic Pipelines Pipeline ProDamtpatPCrovmepnraenscsieon CoverMteMtraicr-kBeraseEdmbeding SLM-BWaosred Frequency SLtLatMis-tBicased ■
million free ebooks in Open Library [28], and film-aigned book descriptions in BookCorpus [497]).
• Code repositories (e.g., GitHub [14], GitLab [20], Bitbucket [7]) offer abundant programming data that can facilitate code search and analysis tasks, such as CodeSearchNet [181] with 2M (comment, code) pairs.
(2) Private Data involve proprietary or confidential information not publicly available, such as internal company documents, customer support logs, application event logs, subscriber-only content (e.g., premium news articles, licensed scientific databases). Collecting this data requires careful attention to ethical and legal constraints (e.g., GDPR, CCPA) and mandates removing sensitive details (e.g., employing anonymization or pseudonymization) and using secure pipelines (e.g., CI/CD systems) with encryption and rolebased access controls. For instance, proprietary codebases and user-generated content (chat logs, Q&A sessions) must be gathered under secure processes to maintain confidentiality.
Data Acquisition Methods. As shown in Table 2, there are three main techniques for data acquisition, including website crawling, layout analysis, and entity recognition and linking. (1) Website Crawling. Most data are obtained through website crawling, which aims to extract textual content from crawled HTML files or multimodal image-text pairs using various extraction tools and browser automation assistants.
Generally, we first parse the raw HTML to separate meaningful textual content from boilerplate elements. Second, since typical extraneous components (e.g., headers, footers, advertisements, sidebars) often contribute little to the data value (e.g., for LLM training), we execute scripts (using CSS selectors or XPath queries) to identify and extract critical elements like article text, headlines, dates, and author bylines. Third, once the relevant text has been scraped, we store it in structured format such as JSON, CSV, database (see data storage in Section 2.4) for further processing. Specifically, for image elements encountered in HTML files, the image source URL is recorded, and the content of the alt attribute within the <img> tag is extracted and utilized as the corresponding image’s textual caption.
Rule-based Crawling. Most existing tools use heuristic rulebased matching algorithm. Trafilatura [73] is a heuristic algorithm based on hand-crafted rules (e.g., match HTML DOM nodes with the class equal to “navbar” to filter the navigation bar). BET [144] employs the cumulative HTML tag distribution to find the largest region of fewest tags per text and extracts the corresponding text as the main content.
ML-based Crawling. Since many website regions cannot be easily classified by rules, some works [76], [73] design a HTML tag classifier to judge whether a DOM node contains textual content, where they adopt $L ^ { 2 }$ regularized logistic regression that inputs text density features and word frequencies in ”id“ and ”class“ attributes and outputs the probability that a given node contains textual useful content.
Auxiliary Tools. Moreover, some auxiliary tools integrate user-friendly APIs for operating and interacting with HTML DOM trees. Beautiful Soup [6] is widely used to parse the raw HTML in Python. Selenium [19] automates browser actions and handles dynamic pages by controlling a web driver that communicates with the browser. Playwright [30] provides a high-level API to automate browser tasks while Puppeteer [31] communicates directly with the browser using the DevTools Protocol, allowing for headless browser interactions (e.g., in JavaScript-heavy websites).
(2) Layout Analysis. Layout analysis focuses on extracting textual content from handwritten or non-textual data (e.g., from the crawled ones), which can contain valuable information and require advanced layout analysis techniques for effective extraction. Existing methods include pipeline-based and end-to-end approaches.
Layout Analysis Pipelines. Intuitively, many works adopt OCR technology (e.g., Tesseract [202]) to convert raw data (e.g., scanned books) into machine-readable formats [18], [392] in a pipeline manner, which consist of multiple small models. PaddleOCR [18] passes an image through a Layout Analysis model, which divides the image into different regions such as text, tables, and formulas for separate processing. The table area is sent to the Form Recognition module for structured recognition, and the text areas and formulas are input to the OCR engine for text recognition. Finally, the Layout Restoration module reconstructs all the regions in textual format using heuristic rules based on the relative location information of different extracted regions.
Similarly, MinerU [392] works in a pipeline manner. It fine-tunes LayoutLMv3 [180] for layout detection and YOLOv8 [391] for formula detection to improve the system’s generalization (handling a wider range of document types). The detected data are kept in markdown or JSON format.
$\bullet$ End-to-End Models. End-to-End layout analysis refers to adopt multi-modal LLMs to conduct end-to-end text acquisition. For instance, GOT2.0 [407] is a acquisition model composed of (i) a high-compression encoder that transforms the image to tokens, (ii) a long-context decoder that outputs the corresponding OCR results, and (iii) a linear layer acting as the connector to map the channel dimension between the vision encoder and the language decoder. Another example is Fox [257], which employs the natural content-aware CLIP-ViT [326] and the artificial content-aware Vary [406] as two vision encoders, enabling the model to perform finegrained interactions and multi-page document understanding. The end-to-end architecture reduces maintenance costs and enhances versatility, enabling the recognition of more complex elements (e.g., charts, sheet music) and supporting improved readability formats for formulas and tables (e.g., LATEX, Markdown). However, due to the use of LLMs with larger parameter size (e.g, $<$ <20M for PaddleOCR vs. 580M for GOT2.0 and 1.8B for Fox), the inference efficiency of these methods still needs improvement.
(3) Entity Recognition & Linking. Additionally, we can derive more valuable LLM samples by identifying and linking entities from the above extracted data. WEBIE [412] introduces a large-scale, entity-linked information extraction dataset with 1.6M sentences from Common Crawl. It links entities using ReFinED [68], and applies distant supervision (DS) to extract 4.8M triples, where each triple consists of a subject, a relationship, and an object.
TABLE 3: Data Deduplication for LLMs.
Furthermore, to ensure the consistency of derived and origin samples (e.g., translation across English and other languages), Alignment-Augmented Consistent Translation (AACTRANS) model [215] uses a Seq2Seq framework that incorporates reference text in the target language to guide translations, ensuring consistency across related pieces of text. During training, aligned text pairs are augmented with reference-based word alignments to bias the model toward consistent translations. At inference, a common reference translation of the original sentence is used to align and translate related extractions using the AACTRANS model.
However, AACTRANS fails to leverage shared knowledge across tasks, limiting the alignment performance. Instead, UMIE [367] integrates text and visual inputs and produces structured outputs to learn linking knowledge from multiple tasks. The UMIE model is composed of four modules: (1) a text encoder for task instruction comprehension, (2) a visual encoder for image understanding, (3) a gated attention mechanism for cross-modal integration, and (4) a text decoder for structured output generation. Following different task instructors, UMIE is capable of performing various MIE tasks and generating corresponding structured outputs, thereby facilitating knowledge sharing.
Notably, recent LLMs could automatically learn the relationships among samples from randomly provided data, rendering the explicit entity linking an optional procedure in the data acquisition process [119].
# 2.3.2 Data Deduplication
The collected raw data often contains significant redundancy, which can negatively impact LLM performance either by reducing its generalization ability to new or rarely-seen tasks [299] or by memorizing and overfitting to the repeated subsets [169], [422]. Various deduplication methods have been proposed to detect and mitigate duplication, either by (1) completely removing duplicate samples [122], [299], [347], [358], [207], [46], [385], [360] or by (2) down-weighing duplicate samples for data resampling [167]. We classify these methods into four main categories.
Exact Substring Matching. Exact substring matching methods identify and remove exactly identical samples across datasets, which can happen if (1) a sample references another sample (e.g., a report related to another), or (2) two individual datasets accidentally include the same sample (e.g., a webpage of a popular website). It is commonly used as a preliminary step to remove duplications. Relevant methods leverage techniques like hashing [122] and suffix array [299] at the sample or sentence level.
# Principles
Compared to structured classic ML data, LLM data is unstructured and requires careful identification and removal of duplicate or near-duplicate content from training datasets to improve efficiency, prevent overfitting, and mitigate bias using statistical metrics like perplexity or model evaluation. Challenges include (1) how to encode semantic texts into representations that could be precisely and efficiently compared and (2) the scalability of the deduplication methods.
Sample-Level. [122] conducts sample-level deduplication by calculating the MD5 hashing value of each sample and deduplicate samples with identical MD5 values.
$\bullet$ Sentence-Level. [299] performs sentence-level deduplication by using Suffix Array, which combines all the samples into one sentence, computes the sentence Suffix Array, and deduplicates samples with common prefixes in the Suffix Array. Suffix Array [283] is a data structure that stores the starting indices of string suffixes in lexicographical order. For instance, given the string “patata”, its suffixes in lexicographical order are [“a” (index 5), “ata” (index 3), “atata” (index 1), “patata” (index $\boldsymbol { \theta }$ ), “ta” (index 4), “tata” (index 2)], so its suffix array is (5, 3, 1, 0, 4, 2). As identically duplicate samples have the same prefix, they will become adjacent in the suffix array, making it easier to find the duplicates across the samples. In practice, they construct a suffix array on the sequence with a threshold of 50 tokens (empirically determined for significantly reducing the false positives), and find the duplicate samples with common prefixes in linear time.
Approximate Hashing-based Deduplication. Hashingbased methods hash each sample into a fixed-length vector and deduplicate samples with significant vector overlap. Compared with the exact matching-based approach, it can identify near-duplicate samples with only a few words of difference (e.g., advertisements generated using the same template). Unlike normal hashing algorithms like MD5, hashes generated in this approach do not change significantly with even a bit of modification, making it possible to detect near-duplicate samples. There are various hashing algorithms, including SimHash [88], MinHash [81], DotHash [298], and their variants [347], [358].
MinHash [81] hashes samples into vectors using a series of hashing functions, where only the minimum value is retained for each function, and estimates similarity for each pair of vectors through Jaccard Index $\begin{array} { r } { J a c c a r d ( X , Y ) = \frac { X \cap Y } { X \cup Y } } \end{array}$ , where X and Y represent sets of elements (For example, if , b, c, d and Y = b, c, d, e, f, the Jaccard Index over X and Y would be $\textstyle { \frac { 1 } { 2 } }$ ). [356] demonstrates that MinHash generally outperforms SimHash. In practice, [122] employed MinHash to the code data on both the sample and the repository levels for diversity and integrity, and [299] employed MinHash on the sample level.
Moreover, MinHash has various variants for acceleration. MinHashLSH [347], [358] improves MinHash by involving locality-sensitive hashing (LSH), which divides a vector into multiple bands and only compares the samples with partially identical vector bands instead of the whole vector, mitigating the computational overhead in sample comparison. LSHBloom [207] further improves MinHashLSH by using Bloom Filter, which hashes each band into a single integer value and inserting it into each corresponding Bloom Filter, and the sample will be flagged as a duplicate if any band’s hashed value collides with an entry in the Bloom filter, accelerating duplicate samples searching while reducing memory usage with negligible false positive rate (e.g., 1e-5 in experiments).
However, MinHash-based methods require building massive vector sets. When the number of samples and their lengths grow large, constructing vector sets becomes exceedingly expensive in terms of both time and space. Moreover, as the feature vector computation for each sample depends on this shared vocabulary, it is difficult to fully parallelize the process.
SimHash [88]. To address MinHash’s issues, SimHash [88] generates a sample’s feature vector solely from the words it contains, converts each sample into a fixed-dimensional binary vector for similarity comparison. Specifically, it first hashes each token in the sample (e.g., by BPE tokenizer [75]) into a fixed-dimension vector of $\{ 0 , 1 \} ^ { d }$ (e.g., $[ 1 , 0 , 0 , 1 ]$ and $[ 1 , 1 , 0 , 0 ]$ ) weighted by the pre-defined weight $w$ (e.g., $w _ { 1 }$ and $w _ { 2 }$ ), where the weight is positive for $^ { 1 }$ and negative for $0$ (e.g., $\left[ w _ { 1 } , - w _ { 1 } , - w _ { 1 } , w _ { 1 } \right]$ , $\left[ w _ { 2 } , w _ { 2 } , - w _ { 2 } , - w _ { 2 } \right] )$ . Then it added up these weighted vectors to a new vector of the same dimension $d$ (e.g., $\left[ w _ { 1 } + w _ { 2 } , - w _ { 1 } + w _ { 2 } , - w _ { 1 } - w _ { 2 } , w _ { 1 } - w _ { 2 } \right] \rangle$ . Finally, the values of the new vector are mapped to another vector of $\{ 0 , 1 \} ^ { d }$ , where the positive values are mapped to $^ { 1 }$ and 0 otherwise. The final vector is the fingerprint of each sample, and the similarity of the two samples is estimated by calculating the Hamming distance between their vectors.
Compared with MinHash, SimHash stores and compares only one hash signature for each sample, greatly reducing the storage and computing overhead. However, keeping only one signature makes it harder to distinguish between two samples, especially those with low Hamming distances, requiring careful curation of data features.
$\bullet$ DotHash [298]. Moreover, to further improve the deduplication accuracy and efficiency, DotHash [298] assumes that uniformly sampled vectors in high-dimensional space are quasiorthogonal. It encodes each sample into a combination of sample elements represented as fixed-length basis vectors, and the dot product of these vectors is an unbiased estimate of their intersection. For example, given two samples with their element basis vectors $\begin{array} { r } { a = \sum _ { a \in A } \psi ( a ) } \end{array}$ and $\begin{array} { r } { b = \sum _ { b \in B } \psi ( b ) } \end{array}$ , the intersection is calcu ated by $\mathbb { E } [ a \cdot b ] = | A \cap B |$ .
However, [121] found that DotHash performs badly if the length of the basis vector is lower than the number of basis vectors, where quasi-orthogonal no longer holds.
Approximate Frequency-based Down-Weighting. To prevent the loss of potentially valuable information by retaining only one sample and removing the rest, SoftDeDup [167] deduplicates by reweighting samples, where samples with higher commonness are assigned lower sampling weights. Specifically, SoftDeDup computes the frequency of each ngram across all the samples and calculates the commonness of each sample by multiplying the frequencies of all the ngrams that appear in the document. Samples with higher commonness are more likely to be duplicates and thus be down-weighted.
Embedding-Based Clustering. Except for samples with the same or similar substrings, some samples with similar semantics but different formats (i.e, expressed differently) may also negatively affect LLM training performance. For instance, for the following two sentences: (i) “Unleash your potential with our lightweight, high-performance sports shoes – designed for comfort, speed, and style”; (ii) “Step into greatness with durable, breathable sports shoes perfect for running, training, and everyday adventures”. Both of the sentences are sports shoe advertisements but expressed differently, and such duplicates could degenerate model performance by making data imbalanced and introducing bias to the model. To address this issue, another approach leverages language models’ embeddings (representing similar items as vectors close to each other in the vector space) for deduplication.
SemDeDup [46] identifies semantic duplicates by clustering embeddings and deduplicating those with high cosine similarities. It first encodes each sample into an embedding by leveraging the OPT [462] text encoder and the CLIP [325], [182] image encoder, and clusters the embeddings with Kmeans, so one can save time by finding duplicates within the cluster rather than the whole vector space. Then, within each cluster, it searches for semantic duplicates with cosine similarity above the pre-defined threshold. Finally, within each group of duplicates, it retains only the sample closest to the cluster centroid. As a multi-modal method, it can be applied to both text and image data, making it possible to deduplicate image data. In practice, [45] leverages SemDeDup to deduplicate the image-text pair dataset LAION-400M [341].
Like MinHash, SemDeDup also has many variants for performance improvement. [385] combines SemDeDup with the Self-Supervised Learning (SSL) Prototypes metric, which clusters the samples and retains the samples in each cluster based on their distance to their corresponding cluster centroid, where the samples closer to the centroid are more likely to be removed. FairDeDup [360] modifies the logic of SemDeDup to improve the representation of underrepresented sensitive groups by prioritizing the retention of samples that align with sensitive concepts defined through user-provided prototypes, such as demographic subgroups. Within each cluster, instead of selecting the farthest sample from the centroid, it selects the sample that maximizes similarity to the least-represented group in the cluster to prevent samples with sensitive concepts from being pruned.
Non-Text Data Deduplication. As LLMs are increasingly applied to multimodal tasks (e.g., image-text retrieval, visual question answering), non-text data types such as images are becoming integral to LLM training datasets, necessitating dedicated deduplication techniques. Similar to texts, images can also be encoded into embeddings through neural networks designed for image-like data such as CNN, after which embedding-based deduplication methods can be applied. SemDedup [46] adopts a semantic-based method by computing cosine similarity between image embeddings; two images are considered duplicates if their similarity exceeds a predefined threshold, which is tuned to balance detection precision and recall. In contrast, MINT-1T employs a hashbased approach, using SHA256 checksums to identify and remove exact duplicates efficiently. Meanwhile, the DataComp pipeline [146] leverages the CNN-based near-duplicate detector [445] to eliminate subtle duplicates and prevent evaluation set leakage. Models trained on these deduplicated image sets exhibit improved performance over baselines such as CLIP [325] for higher precision and recall.
TABLE 4: Data Filtering Methods for LLMs.
Fig. 5: Example Data Filtering Workflows [238], [45], [264].
# 2.3.3 Data Filtering
Data filtering removes low-quality or sensitive samples from the dataset to reduce computational overhead and protect privacy, while the model trained on the subset exhibits similar or even better performance than the one trained on the original dataset. To achieve this, one has to $( i )$ remove samples with low quality (Sample-level filtering) or partial noisy information (Content-level filtering), and (ii) keep the selected samples diverse enough to cover various domains.
Sample-level Filtering refers to evaluating samples using metrics or models and removing the samples that fail to meet the threshold (e.g., quality and diversity). There are multiple metrics in this category:
# Principles
Compared to classic ML data filtering, LLM data filtering emphasizes turning unstructured text into measurable metrics, with the main challenge being the effectiveness of evaluation methods, the standards of low-quality samples, and the computational complexity of these methods across massive datasets.
(1) Statistical Evaluation uses various statistical methods to evaluate samples by directly applying statistical metrics to the samples (e.g., clustering results) or indirectly capturing characteristics from the models trained on the dataset (e.g., loss or perplexity from a surrogate model). Applicable statistical metrics include perplexity (and its variants), influence on model parameters, and clustering.
$\bullet$ Perplexity Measuring. Perplexity measures the difficulty of a model generating the responses, represented as aggregated probabilities of the $j$ -th response token given the question tokens and previous $j \ : - \ : 1$ response tokens ${ \mathrm { P P L } } ( y | x ) \ =$ $\begin{array} { r } { \exp \left( - \frac { 1 } { N } \sum _ { j = 1 } ^ { N } \log p ( y _ { j } | x , y _ { 1 } , . . . , y _ { j - 1 } ) \right) } \end{array}$ . The higher the perplexity value is, the harder the model generates the response. It is commonly used in selecting high-quality subsets in pretraining and fine-tuning phases. Based on the original perplexity, there have been several studies for improving the metric, including computing perplexities using a smaller-sized model for training a larger-sized model to reduce computational overhead, or employing advanced techniques such as Learning Percentage (LP) and Instruction-Following Difficulty (IFD) to identify and select challenging samples.
Specifically, [383] uses an existing model to compute perplexity scores for multiple domains and selects pretraining samples from the domains with high correlation between the downstream benchmark error and the perplexity scores on the domain samples. The correlation is measured through a rank-based correlation coefficient $\begin{array} { r } { \gamma _ { j } = \sum \mathrm { s i g n } ( y _ { k } - } \end{array}$ $y _ { l } ) ( \mathrm { r a n k } _ { j } ( x _ { k , j } ) - \mathrm { r a n k } _ { j } ( x _ { l , j } ) )$ , where the rank difference reflects the model performance difference on the same sample, helpful in estimating $\theta ^ { * }$ . They then rank the domains based on $\gamma _ { j }$ and select samples from the top-ranked domains. To scale the process, a fastText classifier [199] is trained to distinguish selected documents, enabling page-level data selection.
To enhance efficiency, [61] leverages a smaller-sized surrogate model to select high-quality pre-training subsets via perplexity score for training larger-sized models, greatly reducing the computational overhead in model training while still achieving the same performance as with the full dataset. They first train a surrogate model, a smaller-sized MosaicML [378] model with 125 million parameters, on a random subset of the pre-training dataset to compute the perplexity scores for the remaining samples. Based on the perplexity scores, they find the optimal subset through a combination of selection criteria: $( i )$ the part of samples to keep (e.g., samples with low/medium/high perplexity scores), and $( i i )$ the fraction of samples to keep (e.g., $2 5 \%$ , $5 0 \%$ , $7 5 \%$ ). The subset is evaluated by training a larger-sized MosaicML model on it and analyzing the model’s performance on downstream benchmarks. While the result shows that the smaller-sized model can effectively and efficiently filter data for the larger-sized model, they also find that the effectiveness highly depends on the dataset. For example, keeping the high perplexity samples exhibits better performance on the Pile dataset [149], while keeping the medium perplexity samples exhibits better performance on the Dolma dataset [361].
Furthermore, there are some variants of perplexity-based evaluation. First, [288] proposes a perplexity-based metric, Learning Percentage (LP), to select samples that are more challenging for models to learn. Learning Percentage $\begin{array} { r } { \begin{array} { r } { \mathcal { L P } ( i ) ~ = ~ \frac { \mathcal { P } _ { i - 1 } - \mathcal { P } _ { i } } { \mathcal { P } _ { 0 } - \mathcal { P } _ { n } } } \end{array} } \end{array}$ measures the perplexity drop ratio of a sample between the specific epoch $i$ and the whole training procedure. The key idea is that models tend to learn easier samples first and harder samples later, so one can find harder samples that are not thoroughly learned during early epochs. The authors use $\mathcal { L P } ( 1 )$ (the learning percentage after the first epoch) to rank the training samples from the hardest to the easiest and split them into three equal-sized parts. It shows that the smaller-sized variant of the model can effectively select samples for the larger-sized variant, and models of all sizes trained on the harder part outperform the ones trained on all the samples.
Also based on perplexity, [239] proposes the InstructionFollowing Difficulty (IFD) metric to select samples that are more difficult for models to follow. IFD ( $\mathrm { I F D } _ { \theta } ( Q , A ) \ =$ P P PL (L(A|AQ) ) measures the influence of the questions (instructions and inputs combined) on generating corresponding responses by comparing the perplexity of the response with or without the question strings $P P L ( A | Q )$ and $P P L ( A )$ . A higher IFD score suggests higher model following difficulty. The authors first build a pre-experienced subset by clustering and resampling the samples from the WizardLM [426] and Alpaca-GPT4 [312] datasets, on which they train the model for one epoch to obtain initial knowledge. The model is then used to calculate the IFD score on all the samples, and the ones with high IFD scores are prioritized.
Superfiltering [238] further enhances [239] by employing the surrogate model from [61]. Instead of training a smallersized model, the authors directly use GPT-2 [327] as the surrogate model to calculate IFD scores on the same datasets. Compared to their previous work [239], the adoption of surrogate model simplifies the procedure and accelerates the filtering process.
$\bullet$ Influence Assessment. Another data filtering approach is to assess the influence of a sample on LLM model performance or learning process by measuring how the metrics change when the sample is upweighted or removed. The samples with substantial impact on the model parameters are regarded as influential and thus are selected.
DEALRec [254] identifies influential and challenging finetuning samples through two metrics: (i) Influence Score for assessing the influence of a specific sample on the model performance. It starts by measuring the influence on parameter change, where a surrogate model is trained on the full dataset to estimate how the model parameters would change when certain sample is removed or upweighted, expressed by $\begin{array} { r } { \hat { \theta } _ { - s } \ - \ \hat { \theta } \ \approx \ \frac { 1 } { n } H _ { \hat { \theta } } ^ { - 1 } \nabla _ { \theta } \mathcal { L } ( s , \hat { \theta } ) } \end{array}$ , where $H _ { \hat { \theta } }$ is the Hessian matrix and $\nabla _ { \boldsymbol { \theta } } \mathcal { L } ( s , \hat { \boldsymbol { \theta } } )$ is the loss gradient of sample $s$ . The formula is then evolved to measure the influence on empirical risk change, expressed by Iremove, $\ l _ { \mathrm { l o s s } } ( s , \mathcal { D } ) =$ $\begin{array} { r } { \frac { 1 } { n } \sum _ { i } \frac { 1 } { n } \nabla _ { \theta } \mathcal { L } ( s _ { i } , \hat { \theta } ) ^ { \mathrm { T } } H _ { \hat { \theta } } ^ { - 1 } \nabla _ { \theta } \mathcal { L } ( s , \hat { \theta } ) } \end{array}$ ; (ii) Effort Score for assessing the difficulty for the surrogate model to learn a specific sample for generalization to new samples, defined as $\delta _ { s } ~ = ~ \| \nabla _ { \phi } \mathcal { L } ^ { \mathrm { L L M } } ( s ) \| _ { 2 }$ , where $\Phi$ is the model parameter. A higher effort score suggests greater difficulty. The final score combines the above two scores, written as $I _ { s } ~ =$ Influence $\mathrm { S c o r e } + \lambda$ Effort Score.
Besides, SHED [168] utilizes the Shapley value [339], which estimates the contribution of a member to the group, to calculate the influence of a sample on the model performance and select representative samples with high influence. The method first clusters the samples and selects the ones closest to each cluster centroid as the representative samples to reduce computational overhead. It then calculates the Shapley value for each representative sample $i$ by iteratively removing $n$ samples from the dataset until all the samples have been removed and calculating the contribution of the removed $n$ samples in each iteration $a$ to the model performance compared with the previous iteration, written as: $c _ { ( a n + 1 . . ( a + 1 ) n ) \in D _ { p } } = v ( D _ { p } \setminus \{ 1 . . a n \} ) - v ( D _ { p } \setminus \{ 1 . . ( a + 1 ) n \} )$ . The process will be repeated for $k$ times for higher accuracy, after which the Shapley value for each representative sample $i$ i dlecfitned aitsh $\begin{array} { r } { S _ { i } \ \approx \ \frac { 1 } { k } \sum _ { k } \frac { c _ { i } ( k ) } { n } } \end{array}$ r. Fm ntahlle tph-e snukbsseatsmpclaens or weighted sampling the samples through $\begin{array} { r } { \operatorname* { P r } ( i ) \ : = \ : \frac { e ^ { f S _ { i } } } { \sum _ { i } e ^ { f S _ { i } } } } \end{array}$ where $f$ controls the trade-off between quality and divPersity.
$\bullet$ Clustering. A common approach to select high-quality and diverse subsets is to encode the samples into embeddings in the latest space and cluster them using cosine similarity, where similar samples are usually clustered into the same group. Selecting within the clusters reduces redundancy, while selecting across the clusters increases diversity.
Density-Based Pruning (DBP) [45] selects high-quality and diverse subsets by clustering samples into clusters and resampling the samples based on the cluster complexity. They encode the samples into embeddings using a pre-trained vision model DINOV2-L/14 [300] and cluster them using Kmeans. For each cluster, they calculate the average intracluster cosine-distance to the internal centroid $d _ { i n t r a }$ and inter-cluster cosine distance to the other centroids $d _ { i n t e r }$ , and the cluster complexity as a product of the two distances $C = d _ { i n t r a } \times d _ { i n t e r }$ . The cluster complexity is later converted to probability using softmax to resample the samples across clusters, where clusters with higher complexity have higher weights.
Rather than the sample embedding itself, SmallToLarge [436] selects a diverse subset by clustering the samples based on their loss trajectories. It first trains a smaller-sized surrogate LLM model on the whole dataset to obtain the loss trajectories of each training sample, defined as ${ \mathcal { L } } _ { i } { \big ( } \phi ^ { ( t ) } { \big ) } =$ $- \log p _ { \phi ^ { ( t ) } } ( \mathbf { y } _ { i } | \mathbf { x } _ { i } )$ , where $\phi ^ { ( t ) }$ is the model parameters at time $t$ . These samples are then clustered based on loss trajectories and randomly resampled to form a diverse subset.
(2) Model Scoring uses LLMs for evaluating sample quality. The quality criteria can either be specified $( i )$ explicitly via LLM prompt engineering or $( i i )$ implicitly learned from human-labeled data.
QuRating [411] selects high-quality pre-training samples by prompting LLM to compare pairs of samples along the four quality criteria (writing style, fact & trivia amount, educational value, and the expertise required to understand), training a rater on the scalar quality ratings, and filtering samples using the rater. Initially, GPT-3.5-turbo is prompted on each pair of samples to judge which one is better on each quality criterion, where the binary confidence $p _ { B \succ A } \in [ 0 , 1 ]$ that the sample B is preferred over the sample A is recorded. The pairwise binary confidence is then translated into sample quality ratings $p _ { B \succ A } ~ = ~ \sigma ( s _ { B } - s _ { A } )$ through the BradleyTerry model. A QuRater model is later trained on these quality ratings to predict quality ratings for new samples on each criterion. The new samples are resampled with the probability $\begin{array} { r } { p ( d _ { i } ) \ \propto \ \exp \left( \frac { s _ { i } } { \tau } \right) } \end{array}$ , where $\tau$ adjusts the trade-off between quality and dive rsity.
Rather than prompting the models to compare samples, Data-Efficient Instruction Tuning for Alignment (DEITA) [264] prompts LLM models to evolve and score the samples for building sample scorers. The authors first prompt ChatGPT to evolve the samples along instruction complexity and response quality, and again prompt ChatGPT to score these evolved samples. They then train scorers on the evolved samples with their corresponding scores to enable their scoring abilities. Finally, they use these scorers to score new samples and multiply the scores to form the final score, where the new samples are resampled based on the final scores for diversity.
Model scoring methods also help mitigate bias and toxicity. LLM often exhibit harmful biases due to the massive and unchecked datasets they are trained on, which can have various biases, ranging from gender and racial stereotypes to cultural and socioeconomic prejudices [296]. Safety-enhanced Aligned LLM Fine-tuning (SEAL) [345] selects high-quality and safe fine-tuning samples through a safety-aligned selector. The selector is trained based on a safety-aligned model, Merlinite-7b [366], using bi-level optimization, which minimizes the safety loss on the safe dataset while minimizing the fine-tuning loss on the filtered dataset during training to ensure the selector always prioritizes safe and high-quality samples during selection. After the selection, the top-p% samples will be selected.
(3) Hybrid Methods. Instead of relying on a single method, some methods mix various kinds of data filtering methods and evaluate each permutation of these methods or parameters to find the best combination of methods or parameters that further boosts model performance.
[285] selects high-quality pre-training data based on three metrics: (i) Perplexity, (ii) EL2N $\chi ( x _ { i } , y _ { i } ) = \mathbb { E } \| f ( x _ { i } ) - y _ { i } \| _ { 2 }$ for measuring the prediction probability discrepancy between the reference model and the ground truth, and $( i i i )$ Memorization factor $\begin{array} { r } { s c o r e ( M , N ) = \frac { 1 } { N } \sum _ { i } ^ { N } 1 ( z _ { M + i } = \hat { z } _ { M + i } ) } \end{array}$ ) for measuring the fraction of N tokens correctly generated after prompting the model with the first M tokens [77]. For each metric, they retain samples based on two criteria: $( i )$ the fraction of samples to keep (10%, 30%, $5 0 \%$ , and $7 0 \%$ ) and $( i i )$ the part of samples to keep, e.g., the bottom (for Perplexity and L2-Norm Error) and top (for Memorization). They train LLM for each case and select the best-performing one, and the result shows that Perplexity effectively removes the “easiest” samples, improving model performance and outperforming other metrics.
Instead of comparing metrics and choosing the best of them, InstructionMining [84] combines various metrics (e.g., including input/output length, reward score, perplexity, etc.) into one linear function with each metric as indicator, written as $l o g L _ { l o s s } \propto L _ { 0 } + \beta _ { 0 } + \beta _ { 1 } I _ { 1 } + \beta _ { 2 } I _ { 2 } + \cdot \cdot \cdot + \beta _ { n } I _ { n } + \epsilon$ . The $\beta$ parameters are estimated using least squares. In practice, it evaluates fine-tuning samples on a fine-tuned model LLaMA2-7B [386] and selects samples by finding the optimal set of samples to keep using the hyperparameter optimizer BlendSearch [395].
MoDS [126] considers diversity into selection and iteratively selects high-quality, diverse, and necessary subsets and adds the samples the LLM model performs poorly on during fine-tuning using a reward model and the K-Center greedy algorithm [342]. The method is conducted mainly in three steps: $( i )$ Use a reward model to score the quality of each (instruction, input, output) triplet in the dataset, where the low-quality ones are filtered out, forming a high-quality dataset. (ii) Use the K-Center greedy algorithm [342] to select the samples in the high-quality dataset that are farthest apart from each other in the BERT [206] embedding space, forming a diverse seed dataset. $( i i i )$ Fine-tune a pre-trained LLM model on the seed dataset to enable its instruction-following ability and generate responses for the high-quality dataset. The generated responses are evaluated using the same reward model, and those with low quality scores, which means the model is weak at generating such responses, will be collected. The collected samples with their original responses will be selected again using the K-Center greedy algorithm and then added to the seed dataset, forming the final dataset.
Content-level Filtering. To avoid removing too many critical samples from the dataset and weakening the model performance, some works only filter out noise or sensitive content within the samples. For noise removal, common methodologies include removing or replacing specific characters (e.g., remove invisible or invalid characters, unescape HTML characters and detect punctuation misuse), removing unnecessary texts (e.g., the texts that appear as decorating elements on the web pages such as “print”, “likes” and “loading” ), and cleaning harmful information (e.g., spam, gambling, pornographic content and site links) [433].
For privacy anonymization, LLMs can memorize private and sensitive information (e.g, user identity details or clinical health data) from datasets during pre-training and fine-tuning, which can be leaked through specially crafted prompts, thereby posing significant privacy risks. [275] demonstrates that it is possible to extract, reconstruct, and infer personally identifiable information (PII) from LLM models by identifying the most frequent PII appearing in model responses or by prompting models with partial information about a specific individual. From a data management perspective, these privacy threats can be mitigated by identifying and filtering out potential sensitive information in the datasets.
DeID-GPT [268] utilizes existing LLMs to identify and remove PII from unstructured medical text without changing its meaning. In their case, the LLMs are prompted to deidentify information from clinical notes in accordance with HIPAA privacy regulations. An example prompt is: “Please de-identify the following clinical notes by replacing any terms that could be a name, an address, a date, or an ID with the term ‘[redacted]’.”
Instead of using general LLMs, [275] uses Named Entity Recognition (NER) models such as spaCy [33] and Flair [52] to tag PII in the samples and removes or replaces them with hashed tags, entity tags like “[NAME]” or “[LOCATION]”, or a simple tag like “[MASK]”. The last tag was adopted to maximize privacy, as the other ones are still vulnerable to membership inference by linking the samples.
TABLE 5: Comparison of Different Data Selection Methods.
The rise of multi-modal LLMs, particularly large video generation models, drives the need for robust video data filtering. CogVideoX [437] employs a pipeline focusing on coherent motion, removing videos with poor dynamics. It defines negative labels for artificial edits, low motion connectivity, visual flaws, and excessive text. A manually annotated subset trains six Video-LLaMA[455]-based filters, while optical flow and aesthetic scores ensure motion coherence and visual appeal, refining the dataset to approximately 35M high-quality 6- second clips.
HunyuanVideo [216] uses a multi-step pipeline: splitting videos into clips, encoding embeddings, deduplication, and resampling. Filters include motion (OpenCV-based optical flow), OCR (text removal), clarity (visual blur detection), aesthetic (Dover[414]-based scoring), and source (YOLOX[153]- like watermark/border removal). This process generates five progressive training sets with increasing thresholds.
Wan [390] applies pre- and post-processing pipelines. Preprocessing filters unsuitable data using OCR, aesthetic evaluation (LAION-5B [341]), NSFW scoring, watermark detection, and resolution thresholds, removing approximately $5 0 \%$ of low-quality data. Samples are clustered for diversity, manually scored, and an expert model selects high-quality, naturally distributed data. Videos are classified into six tiers, prioritizing smooth motion. Post-processing refines images by selecting top $2 0 \%$ via an expert model and manually curating gaps. For videos, top candidates are filtered by visual quality and motion complexity, ensuring balance and diversity across 12 themes.
# 2.3.4 Data Selection
Different from previous reviews [55], [398], we define data selection as the process of choosing subsets of already wellcleaned data samples in order to adapt LLMs to specific domains (e.g., medical or legal LLMs).
# Principles
Unlike traditional ML data selection, LLM data selection focuses on aligning the topics of the text samples, requiring encoding semantic topics into measurable distributions. However, managing computational efficiency and ensuring robust generalization across diverse tasks remain critical unresolved issues.
Similarity-based Data Selection. One class of methods aims to select subsets similar to the specified target data.
$\bullet$ Cosine Similarity: Domain-Adaptive Continual Pre-training (DACP) [423] adapts a general-purpose LLM to a target task by selecting domain-specific unlabeled data based on similarity (cosine similarity), novelty (perplexity), and diversity (entropy). For the similarity part, it identifies data most similar to the task-specific labeled data by encoding both into embeddings (using [33]) and choosing domain samples that align with the task’s embedding distribution.
$\bullet$ Bag-of-Words Similarity: DSIR [421] selects a subset of unlabeled pre-training data matching the target distribution by computing feature distributions $( \hat { p } _ { \mathrm { f e a t } } , \ \hat { q } _ { \mathrm { f e a t } } )$ for raw and target data represented as bag-of-words, estimating importance weights $\begin{array} { r } { w _ { i } = \frac { \hat { p } _ { \mathrm { f e a t } } ( z _ { i } ) } { \hat { q } _ { \mathrm { f e a t } } ( z _ { i } ) } } \end{array}$ pqˆfeat(zi) , and resampling raw data with probability $\frac { w _ { i } } { \sum _ { i = 1 } ^ { N } w _ { i } }$
Lexicon SetPOverlap: [321] selects the subset with the most shared lexicons using the Domain Specific Score (DSS), which quantifies the relevance of a dialogue set $T$ to specific domains by measuring the overlap between $\{ l _ { 1 } , l _ { 2 } , \ldots , l _ { m } \}$ , calculated as $\begin{array} { r } { \mathrm { D S S } ( T , L ) ~ = ~ \frac { 1 } { m } \sum _ { i = 1 } ^ { m } \frac { | T \cap l _ { i } | } { n } } \end{array}$ $T$ and domain lexicons $L =$ where $n$ is the number of tokens in $T$ .
$\bullet$ Bayes-based Selection: CoLoR-filter [80] formulates pretraining subset selection as a Bayesian optimization problem, which selects a subset $S$ by maximizing downstream likelihood $\operatorname* { P r } ( D _ { \mathrm { d o w n } } | S )$ . It uses two auxiliary models: A “prior” model ( $\theta _ { \mathrm { p r i o r } } )$ trained on a large general dataset $D _ { \mathrm { d o w n } }$ and a “conditional” model ( $\theta _ { \mathrm { p r i o r . } }$ ) fine-tuned on the union of the large general dataset and a small downstream dataset $D _ { \mathrm { p r i o r + d o w n } } .$ . The selection criterion for a data point $x _ { i }$ is the conditional loss reduction (CoLoR): $\mathrm { C o L o R } ( x _ { i } ) = - \log \mathrm { P r } ( x _ { i } | \theta _ { \mathrm { p r i o r + d o w n } } ) -$ $\left( - \log \operatorname* { P r } ( x _ { i } | \theta _ { \mathrm { p r i o r } } ) \right)$ . The key idea is to score samples based on the likelihood difference between these two models and select the ones that exhibit higher likelihood under the conditional model and larger conditional loss reduction.
Optimization-based Data Selection. Optimization-based data selection methods select subsets towards reducing model loss and improving model performance on the target tasks.
$\bullet$ Linear Search. Model-Aware Dataset Selection with Datamodels (DsDm) [130] selects the optimal subset of training data that minimizes the model’s loss on target tasks by employing linear datamodel [184], a parameterized function that maps a subset of training data to the model outputs for the specified target, to estimate how the inclusion of each training sample would affect the model’s loss on the target, reducing computational overhead. In practice, a linear datamodel $\tau _ { \theta _ { x } } ( 1 _ { S } ) = \theta _ { x } ^ { \mid } 1 _ { S }$ with parameters $\theta _ { x }$ and a characteristic vector $1 _ { S }$ (a binary vector indicating which samples are in $S$ ) is adopted to map the subset $S$ to the model loss on a sample $x$ through $L _ { x } ( S ) = \mathbb { E } [ \ell ( x ; A ( S ) ) ]$ . For each target, the characteristic vector $1 _ { S }$ is adjusted to reflect the subset, and the parameters $\theta _ { x }$ are estimated using a regression loss function like mean squared error over the training subset. After training, the datamodel selects the subset $S$ of the size $k$ that minimizes the loss $\begin{array} { r } { \hat { L } _ { D _ { \mathrm { t a r g } } } ( S ) = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \tau _ { \theta _ { x _ { i } } } ( 1 _ { S } ) } \end{array}$ for the target task.
Gradient-Influence Search. Low-rank Gradient Similarity Search (LESS) [417] identifies the most impactful subset of data for fine-tuning LLMs by analyzing gradient similarities. It first fine-tunes the model on a random subset (e.g., 5% of data) for a few epochs using LoRA to reduce trainable parameters and accelerate gradient computation, and saves the checkpoints after each epoch. Next, LESS computes Adam LoRA gradients for each training sample, projects them into lower-dimensional gradient features via random projection, and stores them in a gradient datastore. For downstream tasks, it calculates gradient features of fewshot validation samples and estimates the influence of each training sample $_ { z }$ on a validation sample $z ^ { \prime }$ using cosine similarity: $\begin{array} { r } { \operatorname { I n f } _ { \mathrm { A d a m } } ( z , z ^ { \prime } ) \triangleq \sum _ { i = 1 } ^ { N } \bar { \eta } _ { i } \cos ( \nabla \ell ( z ^ { \prime } ; \theta _ { i } ) , \Gamma ( z , \theta _ { i } ) ) } \end{array}$ , where $\Gamma ( z , \theta )$ is the Adam update. The training samples with the highest influence scores are selected for fine-tuning.
Kernel-Density Regularization. Task-Specific Data Selection (TSDS) [269] identifies high-quality pre-training or finetuning data for particular tasks by balancing two objectives: $( i )$ distribution alignment with the target task data and $( i i )$ diversity to avoid near-duplicates, accomplished via kernel density estimation (KDE) regularization. Concretely, one begins with a small set of target task samples $\begin{array} { r c l } { Q } & { = } & { \{ q _ { i } \} _ { i = 1 } ^ { M } } \end{array}$ and a large candidate pool $\begin{array} { r l } { D } & { { } = } \end{array}$ $\{ x _ { j } \} _ { j = 1 } ^ { N }$ , both of which are embedded into a shared metric space (e.g., using gradient-based or semantic embeddings). The optimization for distribution alignment is confrom ducted by solving for probability mass $q _ { i }$ to $x _ { j }$ ): $\begin{array} { r } { \operatorname* { m i n } _ { \gamma \in \mathbb { R } _ { > 0 } ^ { M \times N } } \frac { \alpha } { C } \sum _ { i = 1 } ^ { M } \sum _ { j = 1 } ^ { N } \gamma _ { i j } d _ { i j } \ + \ ( 1 \ - \ } \end{array}$ $\gamma _ { i j }$ (transported $\alpha ) G _ { \mathrm { K D E } } ( \gamma )$ s.t. $\begin{array} { r } { \sum _ { j = 1 } ^ { N } \gamma _ { i j } } \end{array} = \begin{array} { l } { \frac { 1 } { M } , \forall i } \end{array} \in \begin{array} { [ M ] } { } \end{array}$ , where $d _ { i j }$ is the distance between $q _ { i }$ and $x _ { j }$ in the metric space, and $G _ { \mathrm { K D E } } ( \gamma )$ is the regularization term that adds diversity and penalizes over-density using KDE: $G _ { \mathrm { K D E } } ( \gamma ) ~ =$ $\begin{array} { r } { M \operatorname* { m a x } _ { i , j } { \stackrel { \cdot } { \rho _ { j } } } \left| \gamma _ { i j } - \frac { 1 / \rho _ { j } } { M \sum _ { j ^ { \prime } } 1 / \rho _ { j ^ { \prime } } } \right| } \end{array}$ , where $\begin{array} { r c l } { \rho _ { j } } & { = } & { \sum _ { x ^ { \prime } \in D } ( 1 ~ - } \end{array}$ $f ( x _ { j } , x ^ { \prime } ) ^ { 2 } / h ^ { 2 }$ is the density est
imate for candidate $x _ { j }$ (higher for near-duplicates). Afterwards, it samples $x _ { j }$ with probability $\begin{array} { r } { p _ { j } = \sum _ { i } \gamma _ { i j } ^ { * } } \end{array}$ .
Model-based Data Selection. These methods aim to determine subsets guided by prompting the LLM itself.
Autonomous Data Selection (AutoDS) [465] prompts the LLM to assess and select mathematical and educational samples from a larger dataset. For each sample, the LLM is asked two questions: $( i )$ Is it mathematically relevant, and $( i i )$ It it educationally valuable. The LLM responds to each question with “Yes” or “No”, and the logit of each response is extracted to compute the LM-Score: $\begin{array} { r } { \mathrm { L M } \mathrm { - S c o r e } ( \cdot ) = \frac { \exp ( \log \mathrm { i t } ( \cdot \mathrm { Y E S } ^ { \prime } ) ) } { \exp ( \log \mathrm { i t } ( \cdot \mathrm { Y E S } ^ { \prime } ) ) + \exp ( \log \mathrm { i t } ( \cdot \mathrm { N O } ^ { \prime } ) ) } } \end{array}$ , and the composite score: L $\mathrm { M - S c o r e } ( Q _ { 1 } , Q _ { 2 } ) = \mathrm { L M - S c o r e } ( Q _ { 1 } )$ LM-Score $\left( Q _ { 2 } \right)$ . The composite score ranks and selects highquality math samples.
# 2.3.5 Data Mixing
Since LLMs rely on massive and diverse datasets, the composition of these datasets significantly impacts model performance [295]. For instance, as shown in Figure 3, we can see LLMs require different ratios of domain data to achieve capabilities such as medical diagnosis, coding, and solving math problems. To this end, data mixing refers to the strategy of (1) combining datasets from different domains, sources or structures in specific proportions to train LLMs or (2) making LLMs give different proportions of attention on different domains (e.g., by changing the sampling probabilities) in the training session. Effective data mixing ensures that the model captures broad generalization capabilities while balancing performance across tasks and domains [140]. Existing data mixing methods can be classified into two main categories:
TABLE 6: Comparison of Data Mixing Methods for LLMs.
# Principles
Unlike traditional ML models like BERT (trained on smaller, domain-specific data with homogeneous distributions), LLMs require massive multilingual or multi-domain corpora, raising the critical challenge of optimizing dataset mixing ratios for performance. Current methods use heuristic experimentation or formulate ratio-performance relationships (e.g., validation loss), but cost-effective determination of optimal ratios, beyond heuristics, remains unresolved due to high cost demands for functional approximations.
Before-Training Mixing (Human Experience). This method provides empirical data mixing strategies such as setting different ratios of datasets based on various factors (e.g., complexity and diversity of the datasets) that likely improve LLMs’ abilities.
First, to study the effect of data mixture, there are works that experiment heuristically on different data ratios for pretraining of LLMs. [139] suspects training sequence from simple to complex data would improve LLMs’ performance, thus introduces a two-stage data mixing strategy for LLM pre-training: (1) It first blends web-crawled data with minimal high-quality content (1.9% math, $1 5 \%$ code), testing ratios ( $< 3 5 \%$ high-quality) and selecting optimal mixtures via evaluations on CommonsenseQA [371] and HumanEval [95]. (2) It then filters low-quality data, boosting math $2 4 \% 2 9 \%$ ), code (20%→29%), and instructional alignment data. Ratios are similarly optimized through empirical validation. The method iteratively refines proportions using down-sampled Megatron-8B [355] for efficiency, then scales findings to a 25B model, balancing diversity-quality tradeoffs with reduced experimental overhead. Similarly, Slimpajama [347] explores the impact of data source diversity and weight distribution on model performance by adjusting the proportions of data from multiple sources, such as Commoncrawl [11], C4 [330], Github [14] .
Second, we can utilize metrics to judge different datasets and mix them. To calculate the best result rather than just try different combinations, Bimix [152] adopts entropy metrics (e.g., Shannon entropy [343], conditional entropy [343]) as the quality scores which are then normalized to compute the proportions of each domain (e.g., conditional entropy, written as as $H _ { i } \left( X _ { i } ^ { ( t + 1 ) } \mid X _ { i } ^ { ( t ) } \right) =$ $- \sum _ { x \in X _ { i } ^ { ( t ) } } \sum _ { x ^ { \prime } \in X _ { i } ^ { ( t + 1 ) } } P ( x , x ^ { \prime } ) \log P ( x ^ { \prime } | x )$ , where $X _ { i } ^ { ( t + 1 ) }$ Xi(t) are sets of tokens at positions t + 1 and t separately, $x$ and $x ^ { \prime }$ are tokens belonging to them, $P ( x , x ^ { \prime } )$ is the joint probability, $P ( x ^ { \prime } \mid x )$ is the conditional probability.
Before-Training Mixing (Model-Based Optimization). This category of methods design linear or non-linear models that depict $( i )$ the relation between the distribution of each domain, $( i i )$ validation loss, and $( i i i )$ some other variables like training steps, based on which they find the optimal settings through various model-based techniques.
(1) Linear Regression Model: Some methods utilize pairs like data mixtures and corresponding model performance to fit a linear regressing model, such that finding the best data mixture ratios.
Typically, REGMIX [263] defines the domains by source (like ArXiv, FreeLaw, etc.), which uses Dirichlet distribution (which controls the distribution of probabilities across multiple categories with a parameter) to generate all kinds of data distribution of several domains to train a small-scale proxy model to collect performance data, which is then used to fit a linear regression model (LightGBM [205]) to predict the optimal data mixing distribution. Then REGMIX uses both the best distribution and the average of top-100 distributions to verify on variations of TinyLlama [459] with additional layers with versions of 1B and 7B.
(2) Non-linear Regression Model: There are also many methods that design non-linear regression models for data mixing by considering more complex training characters.
Bivariate Data Mixing Law. Based on observations of validation loss changes due to variables like domain proportion (where the data come from different sources like
Pile-CC) and training steps, BiMix [152] proposes Bivariate Data Mixing Law that depicts the relation among domain’s proportion, training steps and validation loss, which can be written as $\begin{array} { r } { L _ { i } \left( r _ { i } , s \right) ~ = ~ \frac { A _ { i } } { r _ { i } ^ { \alpha _ { i } } } \left( \frac { B _ { i } } { s ^ { \beta _ { i } } } + C _ { i } \right) } \end{array}$ , where $A _ { i }$ , $B _ { i }$ , $C _ { i }$ are domain-dependent scaling coefficients, $\alpha _ { i }$ and $\beta _ { i }$ are powerlaw exponents that control the influence of domain proportion and training steps respectively, $s$ represents the training step count. It utilizes the law to fit the actual data curves by fixing the domain’s proportion or training steps and varies the other one to get validation loss by training a small model (decoderonly transformers based on the DoReMi [420] architecture with 280M parameters). After depicting the relation, we model the task as an optimization problem (resolvable by Lagrange multipliers) and then verify on larger LLM (decoderonly transformers based on the DoReMi [420] architecture with 1B parameters).
Chinchilla Scaling Law. D-CPT [323] establishes a mathematical relationship which could be used to find the best mixture of general and domain-specific data between validation loss, model size, data size, and domain data mixing ratios based on Chinchilla Scaling Law [170] to optimize domainspecific continual pre-training as $\begin{array} { r } { L ( N , D , r ) \ = \ E + \frac { A } { N ^ { \alpha } } \ + } \end{array}$ $\begin{array} { r } { \frac { B \cdot r ^ { \prime \prime } } { D ^ { \beta } } + \frac { C } { ( r + \epsilon ) ^ { \gamma } } } \end{array}$ ( $N$ is model parameter count, $D$ is training data volume (number of tokens), $r$ is domain corpus ratio, $E , A , B , C , \alpha , \beta , \gamma , \eta , \epsilon$ are fitting parameters), with a variation which introduces K which describes the difficulty to learn the ma( iskna wfiltetidngge psa $\begin{array} { r } { L ( N , D , r ) = E + \frac { A } { N ^ { \alpha } } + \frac { B \cdot r ^ { \prime \prime } } { D ^ { \beta } } + \frac { C } { ( r + \epsilon ) ^ { \gamma } } + } \end{array}$ $\frac { F } { K ^ { \mu } }$ $F ^ { \prime }$ through small-scale experiments to predict performance under different training configurations and find the suitable ratio to minimize the domain validation loss while ensuring the generalization loss does not exceed the specified threshold.
$\bullet$ Exponential Functions. Data Mixing Law [439] establishes an exponential relationship between validation loss and data mixing ratios of several domains (e.g., public datasets like Pile-CC, Books3), $\begin{array} { r } { L ( r ) = c + k \exp \left( \sum _ { i } t _ { i } r _ { i } \right) } \end{array}$ , where $L ( r )$ is the validation loss, $r$ represents the mixing ratios of different domains, and $c$ , $k$ , and $t _ { i }$ are learnable parameters. That is, it experiments on a small model with the exponential relationships to predict the best data domain mixing ratios on LLM performance with scaling laws, which combines training step scaling laws ( $L ( S ) = c + k S ^ { \alpha }$ , where $S$ is the number of training steps, and $\alpha$ is a fitting parameter.), which is used to infer the validation loss at target training steps from results at smaller steps, and model size scaling laws ( $L ( N ) = c + k N ^ { \beta }$ , where $N$ is the number of model parameters, and $\beta$ is a fitting parameter), which is used to infer the validation loss for large model sizes from smaller model sizes.
$\bullet$ Classification Model. [251] aims to find the data proportion of closed-source model by data proportion detection, which first generating large-scale data from the LLM, then using a classification model to categorize the generated data and compute perplexity, deriving the proportions of pre-training data based on the Data Mixing Law [439] (which is a mathematical formula describing the relationship between the proportion of pre-training data and the model’s loss in different domains.).
Power-law Function. CMR [160] aims to optimize the continual pre-training by finding the best ratio of generic dataset and domain-specific dataset. Based on the research before and the data observed on different sizes of models with different ratios of data, the relationships between loss and mixture ratio, and training volume fit in power-law forms, which are described as $L ( R ) = { \boldsymbol { \alpha } } \cdot R ^ { s } + \beta$ and $L ( T ) = \alpha _ { 1 } \cdot T ^ { s _ { 1 } } + \beta _ { 1 }$ , where $\alpha$ , $\beta$ , $s$ , $\alpha _ { 1 }$ , $\beta _ { 1 }$ and $s _ { 1 }$ are fitting parameters. Based the relationships, they propose a metric Critical Mixture Ratio, which is the maximum data mixing ratio that balances between (1) significantly reducing domain loss while (2) keeping the increase in general loss within a pre-defined tolerance range. Based on the two aspects, the ratio is defined as $R ^ { * } = \operatorname* { m a x } \{ R \mid R \in F \}$ , where $R$ is the ratio of generic dataset and domain-specific dataset, $F$ is feasible mixture ratios which comprises all mixing proportions that satisfy the constraints of the general loss function.
During-Training Mixing (Bilevel Optimization). This method adopts a closed-loop optimization technique that ensures model parameters are well optimized [108]. Generally, Bilevel optimization involves two nested optimization problems: (1) the inner-level problem ensures model parameters are optimized under given weights (e.g., minimizing weighted training loss), while (2) the outer-level updates weights through backpropagation of validation loss, forming a closed-loop optimization.
Typically, ScaleBiO [302] reconstructs the data sampling weight optimization problem into a bilevel optimization problem, where outer-level problem is adjusting data weights to minimize validation loss; and the inner-level problem is adjusting model parameters to minimize weighted training loss and it could be applied to tasks like multilingual training (mixture of languages) and instruction following (mixture of quality). ScaleBio first experiments on small models. Then it extends to larger models like LLaMA-3. ScaleBiO initialize the weights equably for all data sources. In each iteration, it randomly selects a subset of data sources to update their weights: for the selected data sources, it adjusts the weights by optimizing the gradient of the validation loss, prioritizing the increase of weights for data that contribute significantly to model performance, while decreasing the weights for data that have less impact on performance. After updating the weights, retrain the model parameters and repeat the process until convergence.
To enhance the efficiency of BiO-based data mixing, DoGE [135] defines $( i )$ inner-level problem as that under the condition of fixed data mixing ratios, optimize the proxy model parameters to minimize the weighted sum of domain losses; and $( i i )$ outer-level problem as adjusting the data mixing ratios such that the model parameters obtained through inner-level problem optimization achieve optimal performance on the target loss. The method is executed on a small-scale proxy by following steps: Initially, it sets the domain weights as a uniform distribution. In each iteration, it dynamically adjusts the weight of each domain based on the gradient alignment value (calculated as the inner product of the gradient of current data domain and the sum of gradients from all data domains), which measures the contribution of the data from the current domain to the gradient direction of all other domains’ data. Using the updated weights, it resamples the data and updates the model parameters. Repeat the process for multiple iterations until the weights stabilize, then apply to actual LLM pre-training.
# During-Training Mixing (Distributionally Robust Op
timization). To search for a robust data mixing strategy (which can be sub-optimal but with low uncertainty), some methods adopt Distributionally Robust Optimization (DRO) for data mixing. DRO achieves robustness against distributional uncertainty by optimizing for the worst-case scenario within a set of distributions (referred to as the uncertainty set or ambiguity set).
$\bullet$ For LLM pre-training, DoReMi [420] defines the worst case as domains where the proxy model underperforms compared to the reference model, which initially sets the domain weights as a uniform distribution and each domains contains several sample sets, and uses it to train Transformer decoder-only LM with 280M parameters and computes loss in each example set, which provides a reference point to measure the improvement potential (the loss difference) of the proxy model in each domain. Next, DoReMi trains a small-scale proxy model (also Transformer decoder-only LM with 280M parameters) by adjusting the domain data weights through DRO, which dynamically adjusts the domain weights and tilt the weights toward domains with larger losses (compared to the reference model). Finally, validate performance of weighted domain data on large models (Transformer decoder-only LM with 8B parameters).
$\bullet$ For LLM fine-tuning, tDRO [278] defines the worst case the same as DoReMi, which computes the relative loss for each domain with a proxy model (e.g. Qwen1.5-0.5B [69]); and they compare the training loss of domain data with the reference model (e.g., Qwen1.5-0.5B), and evaluate each domain’s potential for model improvement, and update the domain weights accordingly, giving more attention to highloss domains. Finally, the updated weights are normalized to form a new sampling distribution and repeat the process to get final data distribution.
# 2.3.6 Data Distillation and Synthesis
Synthetic data, which mimics real-world scenarios, is particularly valuable for resolving problems such as (i) data scarcity (e.g., augmenting data for a small dataset) [426], (ii) privacy concerns (e.g., replacing sensitive data with synthesis data) [419], (iii) the need for diverse and high-quality datasets (e.g., generating examples for underrepresented cases) [260], (iv) lack of reasoning data (e.g., for code, chain of thought), (v) human alignment (e.g., label better LLM’s response by human beings or LLMs).
# Principles
Traditional ML methods use rule-based templates, basic augmentation (lexical substitution, backtranslation), or statistical models to create limited synthetic data, addressing data scarcity/class imbalance. While LLM-driven synthesis employs LLMs to produce diverse, high-quality data, tackling data scarcity, privacy concerns, and diverse training needs. Key paradigms include: (i) sample-driven generation, (ii) domain-aligned synthesis, and (iii) reasoningcentric formatting. Challenges involve ensuring rigorous reasoning chain synthesis and optimizing costquality balance in data production.
Despite the advantages, synthetic data can negatively impact LLM training, such as when characteristics like toxicity are inherited from the source model or even amplified [352]. Thus, it is vital to design data synthesis methods for LLMs [495]. As shown in Figure 4, we discuss methods dealing these problem through the diverse LLM stages, including preTraining, SFT, Reinforcement Learning and RAG.
Knowledge Distillation. Due to LLMs’ massive parameter scale and high resource demands which make practical deployment challenging, so we utilize knowledge distillation (such as designing paradigms to prompt LLM to generate highquality data) to training a student LLM with less parameters to mimic the target model’s generation ability.
Task-Specific Prompt Distillation. To significantly reduce inference costs and latency while maintaining performance, [353] employs task-specific prompts: (1) Chain-of-Density (CoD): Iteratively adds entities to summarize for enhanced density. (2) Chain-of-Thought (CoT): Guides reasoning tasks (e.g., math) through stepwise logic. Using GSM8K [106] data and Llama-3.1-405B-Instruct, synthetic data is generated for fine-tuning smaller models (Llama-3.1-8B/70B-Instruct) paired with simplified prompts, balancing efficiency and task specialization.
$\bullet$ Code Verification and Error Correction Distillation. Existing knowledge distillation methods (e.g., Chain-of-Thought Fine-tuning) rely on synthetic data generated by LLMs, but such data often contains incorrect intermediate reasoning steps which can mislead small models during learning, hindering the improvement of their reasoning capabilities.
Pad [496] proposes Program-aided Distillation (PaD) to address error-prone synthetic data in knowledge distillation with (i) Programmatic Reasoning: LLMs generate executable code (e.g., math problems as Python calculations) instead of natural language CoT, with Python compilers auto-filtering logic errors. (ii) Error-Injection Training: Models learn error correction by fixing synthetically injected AST-based errors (e.g., NameError). (iii) Semantic Validation: Decoding selects steps via semantic alignment scoring (e.g., cosine similarity) to prevent error propagation. PaD replaces flawed CoT steps with verifiable program logic, enhancing small models’ reasoning robustness through code-based distillation and selfcorrection mechanisms.
Multi-stage Collaboration Distillation Between Student models. In domains with high annotation costs (e.g., biomedical parsing) or complex task structures (e.g., syntactic/semantic parsing), labeled data is extremely scarce, making traditional supervised fine-tuning ineffective. MCKD [467] introduces Multi-stage Collaborative KD (MCKD) for lowresource generation as 3 steps. (i) Initialization: GPT-3.5 generates pseudo-labels for unlabeled data. (ii) Collaborative Distillation: Splits data into two subsets for cross-labeling via paired T5-Base models, reducing noise overfitting. Iteratively refines labels over 3 iterations. (iii) Final Training: Trains a single model on refined labels. Achieves near-supervised performance with 50 labeled examples (vs. 500 required traditionally) through multi-stage noise reduction and collaborative pseudo-label optimization.
Pre-training Data Augmentation. The pre-training stage of LLM requires a vast amount of data and it can be costly to synthesize such data with powerful models like GPT
4. Therefore, there are techniques like distillation [481], or simply mixing synthetic data into the whole corpus.
Distilled LLM for Mathematical Data Synthesis. JiuZhang3.0 [481] proposes an LLM-based synthesis method for high-quality math problems: (i) Model Distillation, fine-tunes DeepSeekMath-7B on GPT-4-generated QA pairs (with curated prompts and math texts) to mimic GPT-4’s generation. (ii) Uses gradient similarity to prioritize task-relevant data. (iii) Refines the model with filtered data to produce aligned outputs. The final math synthetic corpus are generated by the refined model based on the multi-source corpus (e.g., Wikipedia) and prompt sets.
Fintuned LLM for Instruction-Response Pair Synthesis. In order to study the effect of supervised pre-training, Instruction PT [99] introduces an Instruction Synthesizer (Mistral-7B finetuned on $^ { 4 0 + }$ task categories) to augment raw text with few-shot multi-task instructions (e.g., ”Summarize school activities” $$ QA/reasoning pairs). Unlike GPT-style pre-training, it integrates structured task execution (QA, classification) alongside language modeling. This hybrid approach boosts data efficiency (500M model $\approx$ 1B baseline) and multitask adaptability from pre-training.
$\bullet$ LLM Prompting for Mathematical Data Synthesis. Current math-specialized LLMs rely on SFT with problem-solving data (e.g., step-by-step solutions). However, since CPT improvements in math are far less significant than SFT gains.
To study the impact of problem-solving data in continual pre-training, [98] proposes enhancing models’ mathematical reasoning capabilities by augmenting problem-solving data (e.g., step-by-step solutions for common math problems) during pre-training, rather than relying solely on traditional math corpora (e.g., theorem texts). First, a student model (Llama2 [386]) is utilized to generate answers from the collected math problems. Then, it uses a teacher model (Llama2 [386] with more parameters) detects errors in a student model’s solutions and generates corrective steps guided by prompts. This teaches the target LLM self-checking and error-correction skills. Experiments indicate continual pretraining excels at learning complex reasoning (e.g., multistep equation solving) than SFT, where MathGPT-8B using only 100B well-generated math-related tokens can exhibit capabilities comparable to Qwen2-Math-72B [434].
LLM Prompting for Rephrasing Synthesis. To introduce more diversity to the data, some methods rephrase the data to different styles of texts like Q&A or concise definition. WRAP [282] leverages instruction-tuned models (e.g., Mistral-7B) to rephrase web text (C4) into four formats: (i) simple vocabulary and sentence structures that are understandable to young children. (ii) Standardized encyclopediastyle expression. (iii) Complex terminology and concise academic sentence structures. (iv) multi-turn dialogue. Mixing rephrased and original data trains LLMs to adapt to diverse formats (e.g., zero-shot QA), achieving $3 \times$ faster training and $5 0 \%$ lower perplexity on Pile benchmark [149] via hybrid realsynthetic data synergy.
LLM Prompting for Cross-language Synthesis. LLMs like Llama-3 exhibit deficiencies in cross-language tasks and multidisciplinary scientific reasoning, while continual pretraining often triggers catastrophic forgetting (e.g., performance degradation in original capabilities like English tasks). [93] proposes to synthesize data so as to enhance Llama
3’s Chinese proficiency and scientific reasoning capabilities while mitigating catastrophic forgetting. They utilize Mistral7B [188] to generate multidisciplinary scientific questionanswer pairs (e.g., Q&A on “explaining the electrostatic repulsion principle of ion double layers in electrolyte solutions”) from seed data collected and classified into multiple disciplines by TinyBERT [195] and BERT-Tiny-Chinese [23] from Dolma’s CC [361] and C4 [120]. And generate coding problems with LeetCode algorithm tasks as seeds by Magicoder-SDS-6.7B [409] .These are mixed with Chinese, English, and synthetic data in a 1:7:2 ratio, significantly boosting scientific reasoning.
Additionally, through substitution experiments (validating data strategies using TinyLlama-1.1B [459] as a proxy model), they find that (1) a $2 0 \%$ synthetic data ratio with an error rate below $3 0 \%$ yields optimal results; and (2) a curriculum progressing from simple to complex topics outperforms random training.
Code Interpreter $^ +$ LLM Prompting for Code Synthesis. Current code generation models rely heavily on large teacher models (e.g., GPT-4) to generate synthetic training data, leading to poor scalability, high costs. And most datasets focus on direct code completion or text-to-code translation, but lack Input-Output (I/O) case-based reasoning tasks (e.g., inferring code from example mappings like “hello” $$ “olleh”). This gap results in weak generalization for inductive programming challenges.
To bridge this gap, Case2Code [344] generates training data through four steps: (i) Extract executable Python functions (with input/output parameters) from open-source repositories; (ii) Use lightweight LLMs (e.g., InternLM2-7B) to analyze function logic and generate diverse input samples; (iii) Execute functions to obtain real outputs and filter invalid results; (iv) Convert I/O pairs into natural language prompts with diversified templates for improved generalization. This method leverages ”code interpreter $^ +$ lightweight LLM” to cost-effectively produce 1.3M training samples, eliminating reliance on expensive teacher models.
$\bullet$ LLM-based Clustering for Synthetic Data Evaluation. In order to study the impact of diversity of large-scale synthetic data, [92] introduces an LLM-based clustering method to quantify synthetic data diversity and analyze its impact on model performance. (i) Builds hierarchical topic trees from web-crawled data via GPT-4 (e.g., Quantum Computing $$ Qubit Types $$ Superposition); (ii) Generates diverse datasets by varying topics, prompts (styles, target audiences, etc.) and LLMs (GPT-4o, Llama-3, etc.). Experiments across different diversity combinations show synthetic data diversity positively correlates with model performance on benchmarks like HellaSwag [449] and ARC-Challenge [142].
$\bullet$ LLM Prompting for Multimodal Image-Text Synthesis. Current approaches for synthesizing multimodal pre-training data typically employ two main approaches: (1) the generation of images conditioned on textual input using text-to-image models, and (2) the augmentation of uncaptioned or simplecaptioned source images via multimodal models. In the domain of text-to-image synthesis, current methods use diffusion models [145] for image generation. Examples include DiffuseMix [185], which enhances datasets by augmenting image samples through the blending of original and diffusiongenerated images, and EDA [387], which applies diffusion models to produce variations of real images that retain semantic consistency while augmenting the dataset. Concerning image captioning, several studies focus on improving the quality of image-text pairs. LaCLIP [133] uses ChatGPT to rewrite existing captions, thereby introducing greater diversity in linguistic expression while maintain the core semantic content. A limitation of this method is the potential for visual semantic loss due to the language model’s lack of direct access to the image. To mitigate this, VeCLIP [222] incorporates a multimodal LLM (LLaVA) to provide a detailed visual description of the image contents (e.g., color and shape attributes, objects, and relations among objects). This description is then fused with the original caption by a LLM to yield a more comprehensive final caption. To simultaneously synthesize both image and text samples, CtrlSynth [83] proposes a system comprising three modules: the Florence-large [418] vision tagging model to extract basic visual elements of an image (e.g., color and shape attributes, objects, and relations among objects), the Qwen2-7B-Instruct [434] language model to generate synthetic text which meets the requirements in the instruction, and the stable-diffusion-x1-base-1.0 [314] text-to-image model to generate novel and diverse image samples based on text prompts.
SFT Data Augmentation. The SFT stage of LLM training mainly focus on improvement of specific domains (math, medicine, etc.), aligning LLM’s knowledge to instructions, enhancing reasoning ability, etc. Current methods take LLMs as the main method to generate data with some designed frameworks. Many works [179], [260], [290] take existed datasets as seeds to synthesize mimic datasets.
LLM-based Knowledge and Q&A Pairs Synthesis. To enrich or enhance the diversity of data for better model performance, there are various prompt frameworks such as building topic taxonomy [233] and iterative synthesis [179].
For example, to cover various domains of human knowledge, GLAN [233] introduces a knowledge-classification framework for synthetic text generation by GPT-4. (i) Organize knowledge domains (natural sciences/humanities) into disciplines (math/programming) by; (ii) Develop course outlines with units (e.g., ”Intro to Calculus”) and core concepts (e.g., ”Limits”); (iii) Use GPT-4 to create diverse questions by combining concepts, then generate answers with faster GPT3.5. This structured approach ensures systematic coverage of knowledge areas while balancing generation quality and efficiency.
Though this could enhance understanding of LLM about many domains, but to get better enhancement still needs to focus on one aspect, like math, KPDDS [179] identifies mathematical problem themes (e.g., algebra, geometry) and core skills (e.g., factoring) using GPT-4, then constructs a matrix mapping theme co-occurrence probabilities to guide logical problem generation. GPT-4 synthesizes new questions based on these themes and solutions, which are evaluated for quality (clarity, coherence) and refined via GPT-4 voting. The method further diversifies questions through variations and applies iterative voting to optimize output. This structured approach ensures contextually coherent, avoiding random combinations.
Instead of combining elements like KPDDS (e.g., combining algebra and geometry to synthesize problems),
TABLE 7: Data Synthesis for LLM.
MMIQC [260] enhances mathematical reasoning by iteratively generating complex, diverse problems from existing ones for fine-tuning. Using a seed dataset, GPT-4 creates problems via added constraints, variables, or extended reasoning. A filtering mechanism ensures logical consistency, problem-solution alignment, and correctness, with validated data expanding the dataset iteratively.
$\bullet$ LLM-based Alignment Data Augmentation. Domain knowledge is one thing, and lead LLM’s knowledge align with instruction is another thing that could be done to get better performance through techniques like few-shot prompting.
AgentInstruct [290] uses LLMs to create scalable, diverse Q&A data. GPT-4 converts raw input (text/code) into structured formats (argument passages, API lists) to enable diverse instruction creation. Multiple GPT-4 agents generate varied task instructions and answers following a detailed taxonomy (e.g., reading comprehension, coding tasks). GPT4 and Claude-3 then refine tasks by adding complexity (e.g., integrating dense context or escalating difficulty), ensuring high-quality, adaptable outputs.
Similarly, SELF-INSTRUCT [401] aligns LLM’s knowledge to prompts by generating task instructions and examples: Starting with a small set of manually written seed tasks, a LLM (e.g., GPT-3) is prompted to generate new task instructions covering various task types, such as classification, question-answering, and generation. Next, different strategies are employed to generate inputs and outputs based on the task type. For instance, for classification tasks, possible class labels (e.g., ”positive” and ”negative”) are generated first, followed by inputs corresponding to each label. For open-ended tasks, a question description is generated first, followed by an answer. The generated data undergoes multiple rounds of filtering, including removing duplicates or invalid data and ensuring input-output alignment.
SFT Reasoning Data Augmentation. Synthesize reasoning data (e.g., code, chain of thought) through techniques like Chain-of-thought(CoT), or utilizing verification tools for more rigorous reasoning.
$\bullet$ Prompting LLM To Math Reasoning With Verify Tool. Also for math, MUSTARD [178] utilizes mathematical proof tools to get reasoning enhancement. First, fundamental concepts from the field of mathematics are selected as seeds, and GPT4 generates corresponding problems through two types of solutions: (1) One is a natural language explanation of the reasoning process, and (2) the other is a formal language solution that can be verified (e.g., code compatible with mathematical proof tools). Next, formal solutions are verified using mathematical proof tools to ensure the correctness of the reasoning and answers. For content that fails verification, the model adjusts based on feedback and re-verifies until a correct result is generated.
$\bullet$ CoT Data Synthesis By LLM Exploring. Works mentioned above highly rely GPT-4 for its advanced ability for math to generate problems and solutions to fine-tune for higher reasoning ability. While more recent research try to enhance LLMs’ reasoning ability by technique like Chain-of-Thought (CoT, which let LLMs use tokens to output their reasoning steps) and synthesis or label finer reasoning data for training.
By generating CoT data that covers a wide range of reasoning paths through a trial-and-error self-verification loop, [173] breaks the traditional limitation of relying solely on correct reasoning paths. Specifically, multiple LLMs (e.g., Qwen-7B, Llama-3-8B) are utilized to generate diverse solutions for the same mathematical problem (20-50 responses per problem) to encourage models to explore incorrect paths (e.g., wrong formulas, logical leaps) while retaining complete error analysis. Then a verifier LLM (e.g., GPT-4) performs critical analysis on each response: (a) For incorrect paths, annotate the error steps and generate correction suggestions (e.g., “Step 3 misapplies the cosine theorem, which should be replaced with the Pythagorean theorem”). (b) For correct paths, extract key reasoning steps to form a concise CoT. Merge corrected incorrect attempts with correct paths to construct multi-branch CoT.
Similarly, Satori [346] introduces Chain-of-ActionThought (COAT), a reasoning framework with meta-action tokens (Continue / Reflect / Explore) enabling dynamic pauses, logic verification, and strategy shifts with a two-stage pipeline: (i) Multiple LLM agents generate COAT-formatted reasoning chains to fine-tune a base model for COATformatted syntax mastery. (ii) Partial rollbacks (≤5 steps) from historical reasoning (correct/incorrect paths) append <reflect> to trigger revised reasoning with reinforcement learning (RL) combined with rewards for answer correctness, error correction, and penalties for failures. The RL-enhanced model is distilled into base models (e.g., Llama8B) for iterative refinement.
These works propose framework by letting LLM reason by themselves, and we also have works that label reasoning data for fine tuning to get reasoning ability.
Reasoning Data Labeling. [253] compares the effects of outcome supervision (provides feedback based solely on the correctness of the final answer) and process supervision (provides feedback for each step in the reasoning process) on mathematical reasoning tasks by comparing manually labeling the reasoning steps generated by GPT-4 with outcome supervision. The results showed that process supervision model achieved significantly higher problem-solving accuracy (78.2%) compared to outcome supervision model (72.4%)
But this would cost too much manual effort, so MATHSHEPHERD [399] proposes a method to automatically generate process-annotated data for training Process Reward Models (PRM, which evaluate the quality of each reasoning step). First, complete the remaining reasoning and answers multiple times for the initially generated reasoning steps with LLM, then each step is scored based on two metrics: (1) Hard Estimation (whether the correct answer is generated, with values of 0 or 1). (2) Soft Estimation (the proportion of correct answers generated through this step). These scores assess the step’s ability to derive the correct answer.
High Quality and Well Format Data Are The Keys To Better Reasoning. Moreover, LIMO [442] and [230] state that high quality and well-formatted reasoning data are keys to high performance. [442] emphasizes stimulating complex reasoning capabilities in LLMs through a small number of high-quality training examples with questions and reasoning chains. Powerful models (such as R1, DeepSeek-R1-Distill-Qwen32B) are used for evaluation and synthesis, retaining problems that remain challenging. Each problem is accompanied by detailed solutions and reasoning chains (from official solutions, expert solutions, and LLMs-generated Cot, etc.) and filtered by rules-based and LLM-assisted methods.
[230] finds that the overall structure of the reasoning steps is more important than the specific content. With problems from Numina-Math [235] etc. and long CoT generated by DeepSeek-R1 [162] and QwQ-32B-Preview [379] as data to fine-tune. With modification of the fine-tune data, reveals that training the model with incorrect answer samples results in an accuracy drop of only $3 . 2 \%$ compared to training with correct samples. However, shuffling 67% of the reasoning steps in the training samples leads to a $1 3 . 3 \%$ drop in accuracy on AIME 2024 problems relative to training with correct samples.
Reinforcement Learning The RL stage of LLMs find the most human-preferential responses within the multiple responses generated by LLM of one instruction. Works like [71], [476] manually label the responses or let LLMs do the job.
Label better LLM’s response by human or LLMs. To align the model’s responses with human expectations, [71] gathers helpful and harmless data through open-ended conversations. Then, a preference model is trained to score the responses in the data, providing a basis for reward optimization in reinforcement learning. The preference scores guide the optimization of the language model’s responses. Next, the latest model generates new data, continuously updating the preference model to improve performance on high-quality data. To improve efficiency, [476] proposes a new chatbot evaluation method using language models as ”judges” to compare and score chatbot responses, with the goal of automating the evaluation process and reducing human involvement. It introduces two benchmarks: one focusing on multi-turn conversation performance and another collecting user preferences via crowdsourcing. The method also addresses potential biases, such as preferences for answer order or length, through strategies like swapping answers, using few-shot examples or Chain-of-Thought. The approach demonstrates that language models can achieve high consistency with human evaluators, providing a scalable and interpretable framework for efficient chatbot assessment.
Retrieval-Augmentation Generation. The RAG stage mainly offers knowledge and documents from outside to avoid additional training cost. Main works in this stage of data synthesis focus on privacy issues.
Replace sensitive data with synthesis data. In order to mitigate the privacy issue, [450] proposes a two-stage synthetic data generation and privacy-enhancing method for the RAG stage of LLM.
In the first stage, key information is extracted from the original data (such as “symptom description” and “treatment plan” in medical dialogues), and LLM is used to generate synthetic data based on key information but does not contain sensitive details.
In the second stage, LLMs are applied to the synthetic data, and rewriting strategies are employed to eliminate potential privacy leaks (such as removing specific names or obfuscating descriptions).
This process of evaluation and rewriting is repeated to ensure that the generated data retains its key utility while completely avoiding privacy concerns.
# 2.3.7 End-to-End Data Processing Pipelines
With above data processing methods, we separately introduce existing frameworks that support common processing operations; practices of integrating some of these methods within pipelines in real-world LLM data preparation; together with some preliminary pipeline orchestration methods.
# Principles
When designing data processing pipelines, several critical factors must be considered: (1) the trade-off between data quality and quantity; (2) dependencies across the processing operations (e.g., text extraction necessarily preceding operations like deduplication and filtering); (3) efficiency optimization (e.g., conducting computationally intensive steps like modelbased filtering after lightweight processing steps like URL filtering).
# 2.2.7.1 Typical data processing frameworks
Data processing frameworks provide built-in libraries, operators, and intuitive interfaces that can benefit the design of data processing pipelines for different LLMs. Here we showcase three typical data processing frameworks.
(1) Data-juicer [90] is an open-source framework designed for customizable, high-quality, and efficient data processing. It offers a diverse range of pre-built data processing operators such as data formatting, mapping, filtering, and deduplication. Additionally, the framework features visualization and automatic evaluation, enabling users to receive immediate feedback on their data pipeline. To manage large-scale datasets effectively, Data-juicer is optimized for distributed computing, ensuring robust performance and scalability.
(2) Dataverse [305] is an open-source framework designed to simplify custom ETL (Extract-Transform-Load) pipeline development through an easy-to-use block-based interface that enables users to easily customize by adding, removing, or rearranging blocks. The platform offers a diverse range of pre-built data processing operators, including deduplication, decontamination, bias mitigation, and toxicity reduction, while also supporting the integration of data from multiple sources. Similar to Data-juicer, Dataverse integrates with Apache Spark for distributed processing and supports AWS integration for cloud scalability.
(3) [368] introduces a data processing framework that allows users to customize data processing pipelines using a comprehensive suite of operators categorized in two main modules: (1) The processing module consisting of data reformatting (read and import strctured data), cleaning (removed undesired data such as HTML tags and translate text), filtering, and deduplication (using MinHashLSH in Section 2.3.2) operators; (2) The analyzing module featuring refined data probing and automatic evaluation.
# 2.2.7.2 Typical data pipelines
Data processing pipelines aim to orchestrate a subset of data processing operations (in a specific order) that transform raw data into high-quality LLM training data (mostly for the pre-training stage). Here we showcase three representative pipelines.
The MacroData Refinement (MDR) pipeline is designed to construct the RefinedWeb Dataset, which has been used for pre-training Falcon LLMs [311]. MDR refines web-scale data from Common Crawl [11] through three main operations. (i) Data acquisition: MDR first applies a lightweight URL filter to exclude irrelevant links before any computationally intensive steps. It then extracts text from WARC files using warcio and Trafilatura [73], followed by language identification (i.e., removing content with limited natural language) using fastText [199] as implemented in CCNet [410]. (ii) Data filtering: To eliminate low-quality content, MDR employs both (1) document-level filtering [328] and (2) linelevel filtering, which removes noisy content such as social media counters or navigation links. (iii) Data deduplication: Despite prior filtering, substantial content duplication remains, which can degrade model performance. MDR performs both fuzzy deduplication using MinHash and exact deduplication with suffix arrays to minimize redundancy. To address computational limits, the Common Crawl corpus is partitioned into 100 segments, with deduplication performed per segment. Additionally, to avoid cross-part redundancy, URL-level deduplication is applied by excluding URLs already retained in earlier segments.
Overall, MDR follows three core design principles: (i) scale first, by maximizing data volume from Common Crawl to support large model training; (ii) strict deduplication, as rigorous redundancy elimination is critical for training efficiency and generalization; and (iii) heuristic filtering, favoring rulebased filters over ML-based ones to reduce bias and maintain transparency.
$\bullet$ The DCLM-Baseline pipeline also processes data from the Common Crawl dataset. Different from MDR, in addition to text extraction and language identification, it applies efficient heuristic filtering [311] to exclude irregular content (e.g., toxic words or webpages from illegal sources). Next, DCLM-Baseline adopts a Bloom filter for data deduplication, ensuring its scalability with large datasets. Finally, over the processed data with much smaller size, it conducts modelbased quality filtering (most computationally intensive) to remove low-quality content. Specifically, a fastText classifier trained on instruction-formatted data, including OH-2.5 (OpenHermes 2.5) and ELI5 (ExplainLikeImFive), is used to retain the top $1 0 \%$ of documents.
The FineWeb pipeline (for preparing a 15T-token pretraining dataset) starts with text extraction from WARC files using Trafilatura [73], which is more custom than directly using WET format data and language filtering with fastText. Different from the above pipelines, it conducts MassiveText filtering, i.e., heuristic quality filters and repetition filters on paragraph, line, and gram level [328]. Besides, it conducts fuzzy deduplication using individual MinHash deduplication for each CommonCrawl snapshot, as this approach matches RefinedWeb’s performance, whereas global deduplication yields little improvement over non-deduplicated data. After deduplication, given the observation that the C4 dataset yields superior performance on some benchmarks despite its smaller size, a selection of C4 [330]’s heuristic filters is applied to drop low-quality content such as unpunctuated lines and policy statements. Finally, to further enhance data quality, additional custom heuristic filters are developed through a systematic process. Moreover, personal identifiable information (PII) such as email addresses is anonymized using regex patterns in the public release of the dataset.
Fig. 6: Typical data processing pipelines for LLMs.
Compared to MDR and DCLM-Baseline, the FineWeb pipeline is considerably more complex due to its integration of multiple layers of filtering, each inspired by empirical evaluations and comparisons with other datasets such as C4 and RefinedWeb. Its design reflects a trade-off that prioritizes performance over simplicity.
# 2.2.7.3 Orchestration of data pipelines
The above data pipelines are mostly designed by experience. Instead, Data-Juicer Sandbox [91] proposes a “ProbeAnalyze-Refine” workflow, which involves systematically exploring the impact of various data processing operations and their orders on model performance, combining effective operations into data recipes, and optimizing data utilization through duplication analysis and diversity analysis. The orchestrated pipelines are validated through applications on state-of-the-art models like Mini-Gemini (for image-to-text generation) and EasyAnimate (for text-to-video generation).
# 2.4 Data Storage for LLM
In this section, we introduce storage techniques for LLMs, which we categorize accroding to the tasks they address, including (1) data formats, (2) data distribution, (3) data organization, (4) data movement, (5) data fault tolerance, and (6) KV cache.
# 2.4.1 Data Formats
Data formats are file formats for training data and models. For LLMs, appropriate file formats for data and models can
enhance storage efficiency, accommodate multimodal data, be suitable for model training, ensure security, and influence compatibility across different frameworks.
# Principles
Compared to traditional machine learning, LLMs place greater demands on data being multi-modal and in a unified format. The main challenge is how to achieve high data reading efficiency in multi-modal scenarios. Current methods address this using techniques like sequential storage.
Training Data Format. For training data, file formats are required to have good storage efficiency (e.g., TFRecord [44]), be adaptable to large amounts of data (e.g., MindRecord [40]), and sometimes be suitable for model training (e.g., tf.data.Dataset [43]).
(1) Pure-Text Formats. Common formats such as CSV, JSON, TSV, and TXT are often used to store pure-text LLM data (though they are not limited to such content). However, for large-scale training datasets (at the PB scale), these formats incur significant storage overhead due to the lack of compression (e.g., not supporting binary encoding), leading to storage waste and slow data loading during LLM training.
To address these issues, TFRecord [44] is based on Protobuf (a highly efficient binary serialization protocol) and stores data in a row-based format. As a binary format, its size is significantly smaller than JSON or CSV. Besides, data can be written and read in a streaming manner, making it especially suitable for scenarios like training where data is consumed sample by sample.
(2) Multimodal Formats. Pure-text formats are not wellsuited for multimodal datasets containing images, videos, and text. To address this, file formats such as TFRecord [44] in TensorFlow and MindRecord [40] in MindSpore have been developed to natively support efficient multimodal data storage. $\bullet$ Unlike traditional formats (e.g., COCO JSON [10], which store image metadata in separate JSON files), TFRecord [44] allows users to encapsulate images, labels, and metadata within a single tf.train.Example, eliminating the need for separate label files. Moreover, as multimodal datasets substantially increase data volume, TFRecord supports data sharding, enabling the creation of distributed files that can be assigned across multiple servers to facilitate parallel training.
$\bullet$ MindRecord organizes data into two types of files: (i) the data file, which contains a file header, scalar data pages (e.g., image labels and filenames), and block data pages (e.g., image and text) to store training data; and $( i i )$ the index file, which maintains indexing information based on scalar data to support efficient retrieval and dataset analysis.
(4) Tensor Data Formats. Compared to the storage formats mentioned above, tensor formats represent data as multi-dimensional arrays. On GPUs or TPUs, such multidimensional structures can be partitioned and processed in parallel, making them highly suitable for large-scale computation. For example, tf.data.Dataset [43] can organize various raw data types (e.g., images, text) into a unified tensor format, ready for direct use by models. However, tensor formats, due to their dense multi-dimensional storage, incur large storage overhead and offer poor readability, and are typically adopted only in model training.
Model Data Format. Model storage formats need to pay attention to security (e.g., Safetensors [85]) and are usually closely tied to their respective model training frameworks [32], [42], [22].
• Pickle (.pkl [13]) is a Python-specific format supported by almost all Python frameworks and can store any Python object, not limited to model parameters, making it convenient for saving model states and other custom information.
$\bullet$ Safetensors [85] was introduced by Huggingface to address the security concerns inherent in Python’s Pickle-based serialization. While Pickle serializes both the data and behavior of Python objects—enabling arbitrary code execution during deserialization—safetensors avoids this risk by focusing exclusively on tensors and their associated metadata. This design ensures safe deserialization without the possibility of executing malicious code. Additionally, safetensors supports memory mapping (mmap), which significantly enhances the efficiency of model loading.
$\bullet$ PyTorch-specific formats (e.g., .pt, .pth [32]) are optimized for model storage. Typically, .pth files are used to save training checkpoints, including model parameters, optimizer states, and epoch information, while .pt files are used to store only the model parameters.
• TensorFlow offers two common saving formats [42]: (1) SavedModel format for saving the entire model, including computation graph, weights, optimizer; (2) .ckpt for storing model weights, optimizer states, and training metadata, and is used to save and restore progress during training.
ONNX [27] is a cross-framework deep learning model format that supports interoperability across frameworks like PyTorch, TensorFlow, and Caffe2. It offers cross-platform and cross-framework advantages, but does not store training state information.
$\bullet$ The Hugging Face Transformers library [22] adopts a modular storage design, i.e., model weights are stored in binary .bin files, model configurations are stored in .json or .txt files.
LLM Training Application Other Processing Data Loading ReRqeuaedst 1 2 3 Data Loading POSIX Client API 3FS FUSE Client Native Client Asynchronous Filesystem in Userspace Start Response FileMRetehaodidng RDMA Network Cluster Manager Storage Service Meta Service Cluster Configuration Data Raw Data Meta Data Key-Value Storage Chain Replication with Key-Value Storage Apportioned Queries 一 二 FoundationDB SSD SSD SSD FoundationDB LLM Large-scale Data Storage
# 2.4.2 Data Distribution
With the development of LLMs, the scale of LLM training datasets and the number of parameters of LLMs themselves are growing rapidly (e.g., 9.5 PB data form Common Crawl [183], DeepSeek-R1 [162] has 617B parameters). A single node cannot store such large-scale data, and the data needs to be distributed across multiple nodes. The key technologies involved mainly include (1) distributed storage systems and (2) heterogeneous storage systems.
# Principles
Compared to traditional machine learning, the data (e.g., training data and model data) used in LLMs including both is growing exponentially. The main challenge lies in how to efficiently store and manage such large-scale data. Current approaches address this through distributed and heterogeneous storage systems.
Distributed Storage Systems. Distributed storage systems refer to storing a large-scale datasets across multiple nodes (e.g., JuiceFS [16], 3FS [15]). Traditional distributed file systems (such as HDFS [79]) often come with high costs. Moreover, most distributed file systems still use the POSIX protocol when loading the training data for LLMs, which bring about significant software overhead.
JuiceFS [16], a typical distributed file system based on object storage, uses object storage (e.g., S3 [4]) as the backend to store data. Compared to traditional distributed file systems (file or block storage), distributed file systems based on object storage enables simpler horizontal scaling. It does not need complex directory hierarchy (File Storage) and does not involve complex management logic (Block Storage), thereby significantly reducing storage costs (approximately $2 0 \%$ of the cost of traditional file systems).
As shown in Figure 7, 3FS [15] employs a large number of SSDs for distributed data storage and uses the CRAQ algorithm to ensure data consistency. Specifically, a piece of data is saved as multiple same chunks, which together form a Chain. For read requests, they can be sent to any chunk in the Chain, and the chunk will return the data. For write requests, the writing operation is carried out sequentially on each chunk. When a certain chunk malfunctions, instead of using the incremental data generated during the abnormal period to overwrite the data as in traditional methods, it first moves the chunk to the end of the chain. Only when the chunk returns to normal will the entire content of other samples be copied to the abnormal chunk. These operations, while ensuring data consistency, will cause a certain delay in write operations. However, they have almost no impact on read operations, which are more important for LLM training.
Meanwhile, 3FS [15] discovers that in the context of LLM training, the File Cache significantly consumes system memory, thereby degrading overall I/O performance. To address this, 3FS adopts an asynchronous data loading approach, disables file caching and exclusively utilizes Direct I/O for data access, significantly reducing memory pressure. Moreover, it performs system-level alignment of buffer pointers, offsets, and lengths to satisfy Direct I/O requirements, thereby avoiding additional memory copies caused by user-side alignment operations.
Heterogeneous Storage Systems. Heterogeneous storage systems refers to deploying the model state across diverse storage media (e.g., GPUs, CPUs, NVMes Memory). When deploying the model, The Zero Redundancy Optimizer (ZeRO) [333] deploys model states across multiple GPUs. However, simply distributing the model across multiple GPUs often significantly increases computational costs.
Some methods [334], [337], [336], [435] alleviate GPU memory pressure by storing data in host memory or NVMe SSD. vDNN [337] utilizes a per-layer memory management approach based on a sliding window that dynamically allocates memory at runtime based on the computational demands of the current layer. Its memory transfer mechanism includes both static and dynamic policies: the static policy offloads feature maps of all layers or only convolutional layers, while the dynamic policy determines which layers and convolutional algorithms to offload at runtime, balancing trainability and performance based on network characteristics. vDNN fully utilizes CPU memory by offloading intermediate feature maps that are not immediately needed and prefetching them prior to backpropagation. ZeRO-Infinity [334] offloads model states to CPU (e.g. activations) and NVMe memory, effectively alleviating the GPU memory bottleneck. To further reduce memory pressure, it introduces a memory-centric tiling technique that lowers the working memory requirements for LLM training, enabling the execution of large operators without relying on model parallelism.
However, both vDNN and ZeRO-Infinity only utilize CPU’s memory without leveraging its computational capabilities. In contrast, ZeRO-Offload [336] retains the parameters and forward/backward computations on the GPU while offloading the remaining computations (such as optimizer calculations) to the CPU, thereby harnessing the CPU’s computational power.
Unlike the aforementioned methods that often rely on manual parameter tuning (e.g., specifying offloading targets like CPU or NVMe), ProTrain [435] introduces a modeland hardware-aware automated framework. It incorporates a Memory-Aware Runtime Profiler for monitoring real-time memory and compute loads, partitions parameters into persistent (resident on GPU) and non-persistent (offloaded/loaded on demand) chunks based on their usage patterns, and reduces redundant data copying via pre-allocated chunk buffers.
# 2.4.3 Data Organization
Data organization refers to data operations (e.g., content organization in vector-based organization) during the storage stage that are designed to optimize retrieval accuracy and efficiency in RAG systems. When LLM answers questions, issues like hallucination [187] and lack of timeliness often arise. To address these limitations, RAG [228] (e.g., vectorbased retrieval and graph-based retrieval) have been introduced. They provide models with real-time, reliable context during inference. And both retrieval methods are based on the relevant data organization operations (e.g., vector-based organization and graph-based organization).
# Principles
Compared to traditional machine learning, LLMs require RAG knowledge to access real-time information. The main challenge is how to ensure both the efficiency and accuracy of retrieval. Current methods address this through vector-based and graph-based data organization techniques. However, existing RAG systems still fall short of meeting the high-quality retrieval demands at the enterprise level, where the document scale can reach millions of pages.
Vector-Based Organization Vector-based organization refers to converting data into vector form for efficient retrieval. It processes the original data through multiple stages (e.g., Content Organization, Chunking, Embedding, Compression and Storage).
(1) Content Organization. For the source data, organizing the content can enhance its logical structure, thereby facilitating improved efficiency and accuracy in retrieval. Works like Dense x retrieval [97], APS [172] refine text into independent semantic units, which could be described as the minimal sentence that include all the necessary context information from the original text to express its meanings, and Thread [57] reorganizes documents into logical units, with each unit containing prerequisites, headers, body content, linkers (describing possible paths for next step), and metadata, enabling a logical and structured representation of the document’s content, which significantly enhances the system’s logical coherence and processing efficiency especially in complex tasks (e.g., troubleshooting and dynamic operational workflows).
Similarly, [89] organizes the content of scientific papers into a hierarchical tree structure, where the root node of the tree is the paper’s title and child nodes are different sections, such as the introduction and methods. The relationship between parent and child nodes represents the globallocal content relationships, such as the connection between the abstract and introduction. Then it traverses the paths from the root node to the leaf nodes to extract important contextual information.
(2) Chunking. In vector-based retrieval, embedding long texts may reduce retrieval efficiency. Thus, an effective chunking strategy is required to divide the text into appropriately sized segments for encoding. The optimal chunk length needs to balance retaining fine-grained semantics and maintaining sufficient context, since a too long text might suffer from significant semantic compression during embedding, while too short a text would increase processing costs.
Allowing overlap between consecutive chunks ensures that important information at the boundaries is not lost and the continuity of context is maintained. Different from traditional chunking, MoG [480] adopts a dynamic chunking strategy, which chunks data when building the knowledge base, where MoG dynamically determines the optimal granularity (e.g., sentence-level, paragraph-level, or section-level) of the knowledge source based on the input query through a trained router. The router, implemented as an MLP, assigns weights to different granularities to guide snippet selection. MoGG [480] extends MPG by converting reference documents into graphs and redefining granularity as hopping ranges, enabling effective retrieval of dispersed information for complex queries.
(3) Embedding. In vector-based retrieval, the original input (text, images, audio, or other domains) is transformed into dense vector representations using models specifically adjusted for each data type. These representations encapsulate the underlying semantic meaning of the original content, and are then stored in a vector database for storage and retrieval. Various embedding models are used to correctly encode semantic information:
$\bullet$ $B G E$ uses a bilingual joint training framework that combines language-specific subword tokenization and specialized adaptation layers. This design aligns semantic representations across languages, improving cross-lingual retrieval accuracy [94].
$S T E L L A$ features a cross-instance attention aggregation mechanism that explicitly captures inter-sentence dependencies during pretraining. Besides the general embedding model, STELLA offers an extra dialogue model in incomplete query situations where the user input has problems such as semantic omission and reference digestion. This reduces the embedding dimensions and inference latency, making it especially effective for large-scale tasks [24].
$\bullet$ $G T E$ introduces a dual-negative sampling strategy within its contrastive learning paradigm. Though introducing negative samples usually works in series of embedding models, this strategy incorporates more reverse contrastive terms within a fixed batch, strengthening the model’s ability to distinguish subtle semantic differences. [249].
(4) Compression. Vector retrieval in LLMs differs from regular vector retrieval in that semantically similar vectors are often high-dimensional, so dimensionality reduction techniques are needed to reduce storage pressure.
Linear Dimensionality Reduction. Locally-adaptive Vector Quantization (LVQ) [50] centralizes the data and scales each vector individually, calculating the quantization bounds adaptively in a localized manner, fully utilizing the quantization range to compress the vectors. This method is typically suitable for compressing vectors with around 100 dimensions, but it performs poorly when the vector dimension is very large, such as tens of thousands.
LeanVec [380] combines linear dimensionality reduction with LVQ for vector compression. In ID(In-distribution) scenarios, LeanVec uses PCA, while in OOD(Out-of-distribution) scenarios, it introduces the LeanVec-OOD optimization method, which minimizes the square of the inner product between the query vector and the representation error to find the optimal projection subspace for both the dataset and the query set, thereby reducing the vector dimension. However, LeanVec is a simple linear dimensionality reduction method, and its performance may be affected in terms of accuracy when reducing the dimensionality drastically.
LeanVec-Sphering [381] modifies the loss function, transforming the problem of finding the projection matrix into an optimization problem under the Mahalanobis distance, which allows for more effective discovery of the optimal projection matrix, thereby better preserving the similarity structure between vectors when processing high-dimensional vectors.
Non-linear Dimensionality Reduction. GleanVec [381] uses spherical k-means clustering in the data partitioning stage to group vectors based on direction, capturing the data’s structural features. By associating cluster labels with vectors, it narrows the search range and reduces unnecessary calculations during inner product computation. In the local linear dimensionality reduction stage, GleanVec applies the LeanVec-Sphering method to reduce dimensionality within each cluster, preserving the inner-product relationship, which simplifies calculations while maintaining accuracy.
(5) Storage. After the above steps, the data will be stored in vector form in a vector database. During LLM inference, the model vectorizes the input and uses similarity metrics such as cosine similarity or dot product to retrieve the most relevant data from the database.
Faiss [125], when storing vectors, relies on the chosen index type. The Flat Index stores all vectors directly, such as IndexFlatCodes, which stores vectors in a flat array and supports sequential IDs. It is ideal for small datasets with high-precision requirements. The IVF Index clusters vectors with a coarse quantizer and stores them in inverted lists, supporting user ID operations and optionally using a DirectMap for efficient access. This reduces the search range and speeds up retrieval, making it suitable for large datasets. The PQ Index compresses vectors by splitting them into sub-vectors and quantizing them with a k-means quantizer (e.g., PQ6x10), trading accuracy for reduced storage space, making it suitable for high storage demands and lower precision needs.
In the Milvus [26], vector storage differs based on the number of vectors per entity. For single-vector entities, vectors are stored continuously without row IDs. Since vectors are sorted by row ID and have the same length, a vector can be directly accessed using its row ID, reducing storage overhead and improving query access efficiency. For multi-vector entities, vectors are stored in a columnar format. For example, for entities A and B, each with two vectors, the storage format is (A.v1, B.v1, A.v2, B.v2). This columnar storage enables more efficient data processing by vector dimension, facilitating batch operations and improving processing performance.
Weaviate [34] utilizes a graph data model to manage data entities, storing vectors as node attributes linked to these entities. For example, in the case of text data, vectors generated by a text embedding model are associated with their corresponding text entity nodes, enabling efficient graph traversal and multi-hop queries based on vector similarity. Additionally, Weaviate can store vectors alongside structured attributes. For instance, the vectors of e-commerce products, along with structured attributes such as price and category, are stored in the corresponding entity nodes. This allows for hybrid queries that combine vector similarity and structured attribute conditions, enhancing query flexibility and practicality.
LanceDB [25] uses a columnar storage format called Lance to store data. Compared to traditional Parquet formats, Lance introduces the concept of a table schema. A single row in LanceDB can store images, text, audio, video, and any number of vectors corresponding to different parts of the original data, and it can be dynamically updated. This makes LanceDB particularly suitable for storing multi-modal data. Currently, LanceDB is used for handling various RAG tasks.
Graph-Based Organization. Unlike vector-based organization, which helps LLM find knowledge related to a user’s query through fuzzy searching, graph-based data explicitly represents entities and their relationships, enabling the identification of precise matching information in the database. We will introduce graph-based organization from two aspects: indexing and storage.
(1) Indexing. In the indexing phase, it is necessary to establish an efficient indexing architecture to address the issue that directly retrieving raw triples is inefficient for complex queries such as multi-hop reasoning or path search, because the inherent sparsity in the graph structure often leads to significant query latency.
GraphRAG [127] adopts community clustering and hierarchical summarization strategies. It uses the Leiden algorithm to detect tightly connected subgraphs, called communities, in the knowledge graph. Then, it generates hierarchical summaries for each community. Once a certain element in a triple is retrieved, the index collects relevant community summaries and sends them for inference. For example, it can condense hundreds of triples related to ”quantum mechanics” into a semantic summary: ”Quantum mechanics is the fundamental theory describing the behavior of matter and energy at microscopic scales”.
Furthermore, LightRAG [164] integrates deduplication functionality to identify and merge identical entities and relations from different paragraphs. In real-time update scenarios, LightRAG introduces the Delta Index mechanism, which builds local indexes only for newly inserted edges and entities, using background merging threads without the need for community reconstruction, significantly reducing overhead related to community detection compared to GraphRAG.
MiniRAG [136] proposes a semantic-aware heterogeneous graph indexing mechanism, integrating text chunks and named entities into a unified structure, reducing the reliance on large language models for complex semantic understanding. The low semantic calculating requirement while deploying grants MiniRAG a more excellent performance on resourceconstrained devices compared to other methods.
(2) Storage. Graph data is usually stored in graph databases in three models: property graph models [292], RDF (Resource Description Framework) models [65], and multi-model [1].
Neo4j, JanusGraph, and TigerGraph use property graph models [292] to store graph-based data. A property graph model consists of ”nodes” and ”edges,” where both can contain attributes (key-value pairs). This model uses query languages like Cypher and GSQL, designed for relationship modeling and querying, making them highly suitable for complex relationship queries during RAG in LLMs.
Amazon Neptune [65] supports both property graph models and RDF models for graph-based data storage. The RDF model, based on triples (subject, predicate, object), represents entities, attributes, and relationships in a way that enhances knowledge reasoning. By combining these two models, Neptune can meet diverse knowledge storage needs, such as rapid queries and deep reasoning.
ArangoDB [1] uses a multi-model approach to store graphbased data. It supports multiple data models (e.g., document, key-value pair, graph), allowing the selection of appropriate storage and query methods depending on the requirements. This allows ArangoDB to store graph data (relationship information), document data (context or factual information), and key-value pairs (configuration or metadata) in the same database, facilitating LLMs to extract relationships from knowledge graphs while also retrieving document-type data (e.g., specific context information).
# 2.4.4 Data Movement
Data movement refers to the process of moving data from storage nodes to computing nodes. This process can achieve high data movement performance by caching data. Meanwhile, offloading data and operators to multiple nodes for computation can improve the speed of data preprocessing. Additionally, the highest overall performance can be achieved by overlapping data storage and computation operations to jointly schedule storage and computing resources.
# Principles
Compared to traditional machine learning, LLMs involve massive data transfers from storage nodes to compute nodes. The main challenge is how to accelerate the data moving rate. Current methods address this through data caching, compute-storage overlap, and data/operator offloading.
Caching Data in advance can increase the data moving rate. However, if a fixed cache policy is used, in order to meet the IO requirements of training, the configured storage capacity often far exceeds that required for storing the dataset [469]. Therefore, a dynamically adjustable cache policy is needed. Some methods [219], [161], [469] dynamically adjust the cache mechanism by analyzing the characteristics and requirements of LLM jobs in real time.
Quiver [219] optimizes cache sharing strategies based on the following IO characters during model training: (1) data shareability (due to significant overlap in data access within and across jobs), (2) substitutability (the I/O order does not affect job correctness, enabling small caches to improve performance by substituting data and reducing thrashing), and (3) predictability (using mini-batch processing times to estimate job sensitivity to I/O performance for informed cache allocation).
Fluid [161] dynamically adjusts cache capacity according to I/O conditions, optimizing the online training speed for each individual LLM job. Specifically, Fluid uses a coordinator to monitor the processes of LLM jobs. It calculates the number of samples within a specific time window based on the batch sizes fed back by the jobs, and thus obtains the realtime training speed. Subsequently, based on the concept of the TCP congestion control algorithm [315], it adopts a trialand-error approach to dynamically adjust the cache capacity. When the training speed increases, the cache capacity is increased according to the preset scaling-up factor and scaling step. Conversely, when the training speed decreases, the cache capacity is decreased according to the preset scaling-down factor and scaling step.
Meta proposes Tectonic-Shift [469], a hybrid storage architecture that integrates flash memory with the traditional HDD-based distributed file system Tectonic. Tectonic-Shift organizes data segments into buckets for storage in flash memory and determines segment admission and reinsertion by comparing bucket priorities (computed from both historical and predicted future access patterns) against dynamically adjusted thresholds. It also optimizes the segment size (e.g., 256 KB) of CacheLib [9] to improve flash memory utilization.
Data/Operator Offloading refers to offloading data preprocessing operations such as shuffling, sampling, and augmentation, to multiple devices in order to improve processing speed. Currently, data preprocessing pipelines (e.g., tf.data) are typically performed on the CPU, whose efficiency is often lower than the training speed achieved by Machine Learning (ML) accelerators like GPUs and TPUs. So enhancing the efficiency of data preprocessing to match the high-speed processing capabilities of ML accelerators has become a challenge [159].
Some research [158], [67] offload data preprocessing tasks to remote CPU servers. Cachew [158] divides the input dataset of each job into independent subsets for processing by remote CPU nodes. Additionally, users can specify locations for caching and reusing data in the input pipeline. The scheduler makes decisions during runtime based on specific metrics and algorithms through automatic scaling and caching strategies. The automatic scaling strategy adjusts the number of worker nodes according to client-reported metrics. The automatic caching strategy compares the processing times of different cache locations and selects the optimal caching scheme. The tf.data service [67] addresses input data bottlenecks by horizontally scaling CPU nodes and leveraging a coordinated read mechanism to mitigate straggler issues caused by input size variability in distributed training. Specifically, it is comprised of four key components: a dispatcher, a pool of workers, clients, and an orchestrator. The dispatcher manages dataset assignment to workers using various sharding strategies, for example, the OFF strategy performs no sharding, the DYNAMIC strategy applies disjoint first-come-first-served sharding, and several static sharding strategies are also supported. Workers are responsible for actual data processing. Clients issue data processing requests to the workers. Orchestrator deploys the aforementioned three components as containers within the same Borg [384] unit.
Although the above method of offloading to remote CPU servers can alleviate data stalls, the cost of remote CPUs is high, and the resources of ML accelerator nodes are not fully utilized. Pecan [159] introduces two strategies, AutoPlacement and AutoOrder, to alleviate input data preprocessing bottlenecks and reduce training costs. The AutoPlacement strategy dynamically schedules data preprocessing workers across ML accelerator hosts and remote CPU servers. It first establishes a baseline batch processing time for model training, incrementally adds local workers, and then prunes redundant remote workers to determine the optimal combination of local and remote resources. The AutoOrder strategy analyzes the transformation operations within the input data pipeline, reordering them to place data-reducing transformations (such as sampling, filtering, or image cropping) earlier and dataexpanding ones (such as image padding and one-hot encoding) later. While adhering to user-specified ordering constraints, this reorganization improves the preprocessing throughput of individual workers.
Different from the works that are only compatible with a single training framework as mentioned above (e.g., Cachew and tf.data service can only work with TensorFlow). Powered by native composable operators (e.g., data loading, transformation, and filtering functions), Cedar [468] can flexibly support different ML frameworks and libraries, enabling users to effortlessly build data pipelines.
Overlapping of storage and computing means that the data loading and computation processes in LLM training alternate. In LLM training, which proceeds in data batches, ideally the data loading unit can prepare the next batch while the computing unit processes the current one, reducing overall training time. However, if a data isn’t cached locally, its need to load the data through remote I/O bandwidth. When this bandwidth is insufficient, computation pauses to wait for data loading, creating an IO bottleneck. Some researches optimize the pipeline at different training stages (e.g., the pre-training and SFT stage [466], the RL stage [479]).
SiloD [466] leverages the characteristics of the pipelined execution of data loading and computation at the pre-training and SFT stage to build an enhanced performance evaluator. When data loading becomes the bottleneck, it uses a learned model (IOPerf) to quantify the cache and remote I/O demands of different training jobs,providing support for resource allocation in the pipelined of data loading and computation.
Compared with the pre-training and SFT stages, the RL stage requires an additional training of the reward model to evaluate the output of the original model. This leads to a greater amount of computational resources remaining idle (pipeline bubbles) during the RL stage. RLHFuse [479] takes advantage of the independence between the original and reward models during the training stage to break the training task into sub-tasks of micro-batches. In the case of differences in the sizes and parallel strategies of the two models, it first transforms the problem to ensure that each stage of the two models uses the same number of GPU resources, and then uses the simulated annealing algorithm [213] to generate a fused pipeline schedule.
# 2.4.5 Data Fault Tolerance
Data fault tolerance refers to the ability to quickly resume from the point of interruption during model training by storing checkpoints or performing redundant computations in the event of training interruptions.
# Principles
Compared to traditional machine learning, LLMs place greater emphasis on fault tolerance during training due to their large model sizes and the high cost of retraining. The main challenge is how to quickly resume normal training in the event of an interruption. Current methods address this by saving checkpoints or using redundant computation.
Checkpoints. Some methods store the model state as checkpoints to handle training interruptions. However, restoring model states across multiple platforms or frameworks may encounter compatibility issues. At the same time, frequently saving model checkpoints can consume a large amount of storage space, especially during large-scale model training.
For compatibility issues, PaddleNLP [29] has developed a unified model storage technology. It stores model weights, optimizer weights, and other data in a unified safetensors format, eliminating the need to differentiate distributed strategies during checkpoint storage. Specifically, when the distributed training strategy changes (e.g., switching between data parallelism and model parallelism) or the number of machines is adjusted, Unified Checkpoint enables training to resume using only a single complete checkpoint, without requiring separate checkpoints for each configuration.
(1) Asynchronous Storage. Apart from standardized checkpoint storage, for frequently saving model, some researches [291], [194] aim to accelerate checkpoint saving through asynchronous storage without affecting the model’s training speed.
CheckFreq [291] employs a two-stage checkpointing technique designed to capture model state copies in memory for asynchronous storage while ensuring model parameter consistency through pipelining with subsequent iteration computations. Specifically, when idle GPU memory is available, it prioritizes snapshotting on the GPU to reduce costs; otherwise, it stores checkpoints in CPU memory and adjusts the checkpoint frequency accordingly.
In the training of LLMs on the MegaScale system [194], HDFS is used to store the model state. When storing model states, there are problems of balancing the checkpoint frequency and dealing with the HDFS bandwidth bottleneck during model recovery in the training process. To address this, MegaScale adopts a two-phase storage approach: (1) GPU worker nodes quickly write the on-chip state to the host memory and continue training; (2) a background process asynchronously transfers the state to HDFS to reduce interference with training. When resuming training, a worker node in the specified data parallel group reads the shared state partition and broadcasts it to other nodes, reducing the HDFS load and alleviating bandwidth pressure.
(2) Hierarchical Management refers to storing model checkpoints across a multi-level storage system, storing the checkpoints that may be needed in the closer storage nodes, aiming to improve recovery speed. Gemini [403] stores checkpoints in a hierarchical storage system composed of local CPU memory, remote CPU memory, and remote persistent storage. It introduces a near-optimal checkpoint placement strategy for CPU memory. By analyzing the relationship between the number of machines and checkpoint replicas, it flexibly adopts group placement or ring placement to maximize the likelihood of recovery from CPU memory in the event of failures. ByteCheckpoint [389] manages checkpoint files using an architecture combining SSD and HDD storage servers. New checkpoint files are stored as ”hot” data on SSDs for quick access due to evaluation task downloads after creation. Once the evaluation is completed and there are no training anomalies, their access frequency drops, and they become ”cold” data, being migrated to HDDs to free up SSD space and ensure the hot storage can efficiently store currently frequently accessed checkpoint files.
Redundant Computations Unlike checkpoint, some methods [382], [186], [147] are based on parallel computing and redundantly compute the state data of the model, enabling quick recovery of the training state from non-failed nodes in case of failures.
Inspired by the RAID disk redundancy technology [307], Bamboo [382] enables each computing node to perform computations not only on the neural network layers it is responsible for, but also on some layers of its neighboring nodes as redundant computations. When a node is preempted, its predecessor node has all the information required for training, allowing the training to continue without wasting previous computational results.
Unlike Bamboo’s node-based redundant computation, Oobleck [186] uses pipeline templates to define training pipeline execution, specifying node allocation, stage numbers, and model layer-GPU mappings. During training, at least $f + 1$ logically-equivalent yet physically-heterogeneous pipelines are instantiated from these templates, considering the fault tolerance threshold $f$ and batch size. When a pipeline node fails, Oobleck leverages other pipelines’ model state redundancy and reinstantiates the pipeline to resume training.
Unlike Bamboo and Oobleck, which use pre-set redundant computations in standby, ReCycle [147] leverages the computational redundancy inherent in parallel training to reassign the tasks of failed nodes to nodes with the same processing in other data-parallel groups. This unique approach enables quick resumption of training without the need for spare servers.
# 2.4.6 KV Cache
LLMs use auto-regressive generation, where each token depends on prior ones. KV Cache avoids redundant computation by reusing stored key-value pairs, improving efficiency. However, its memory grows with sequence length, making efficient cache management crucial.
# Principles
Compared to traditional machine learning, LLMs require KV cache to accelerate inference. The main challenge lies in efficiently managing the cache as the KV size grows rapidly. Current methods address this by indexing KV, shrinking KV, and managing KV placement or cache space.
Cache Space Management refers to separating the logical structure of the KV cache from its physical storage implementation, which facilitates memory allocation and improves memory utilization. vLLM [220] and vTensor [428] divide the KV cache into fixed-size blocks and store them in a non-contiguous manner. vLLM manages these blocks through a mapping mechanism, while vTensor stores the fixed-size KV cache blocks non-contiguously in physical memory. This decouples the logical and physical KV blocks, utilizing a block table to manage dynamic memory allocation by tracking the mapping relationships and fill states.
KV Placement refers to using a perception strategy to store frequently used KV in faster storage media (such as GPU memory), while storing less frequently used KV in slower storage media (such as SSD), or releasing them directly. RAGCache [197] provides a prefix-aware PGDSF replacement policy that prioritizes cache nodes based on access frequency, size, and recomputation cost. And stores frequently accessed data in fast GPU memory and less frequent data in slower host memory, maximizing cache efficiency. CachedAttention [148] leverages the inference job scheduler to observe the jobs waiting for execution. To improve cache efficiency, the KV cache of a pending job is prefetched into the host memory from disk before execution. Meanwhile, KV caches that are no longer required are evicted, based on the jobs waiting to be executed.
KV Shrinking KV Cache Shrinking refers to trimming or reducing the KV Cache in order to lower memory usage and improve inference efficiency. CacheGen [265] uses a customized tensor encoder to encode the KV cache into a more efficient bitstream, thereby reducing bandwidth usage. It also compresses the KV cache using techniques such as block-based encoding, hierarchical quantization, and arithmetic encoding, while dynamically adjusting the compression level and transmission method based on network conditions to ensure low latency and high generation quality.
Unlike CacheGen, which only considers intra-layer redundancy, MiniCache [255] is based on the similarity of KV cache states in adjacent layers. It decomposes the state vectors into magnitude and direction components, calculates the direction vectors using SLERP [354], and merges the KV caches of adjacent layers to form a merged cache that contains information such as direction vectors, magnitudes, and angles.
Compared with the traditional method of storing the complete KV data, HCache [150] only stores the hidden states (the size of the hidden states is only half that of the KV cache, and recomputing the KV cache from the hidden states can reduce the computational load). When restoring the state, a bubblefree restoration scheduler is used to concurrently execute the transmission of hidden states and the recomputation from hidden states, maximizing the overall resource utilization.
KV Indexing refers to the process of constructing an indexing architecture for the KV Cache to accelerate the query process of the KV Cache. ChunkAttention [440] organizes the KV cache into a prefix tree using a prefix-aware KV cache (PAKV), sharing key-value tensors of common prefixes to accelerate the corresponding KV query process. [478] proposes Prefix Sharing Maximization (PSM): By dynamically reordering data columns and rows, it maximizes prefix sharing among requests to improve cache hit rates. Column Reordering sorts columns based on value frequency and size, prioritizing those with more shared prefixes. Row Sorting groups requests with identical prefixes together, further enhancing cache reuse.
# 2.5 Data Serving for LLM
Data service encompasses data preprocessing operations carried out after data is transferred from storage to computing nodes and before its actual utilization by the LLM, aiming to facilitate more effective data consumption by the LLM. These data preprocessing operations include: data shuffling, data compression, data packing, and data provenance.
# 2.5.1 Data Shuffling
Data shuffling in data serving means that different data needs to be selected and provided to LLMs at various stages (e.g., in different epochs for pretraining). For example, corresponding training data needs to be supplied according to the training requirements during the training stage; during the RAG stage, corresponding knowledge needs to be supplied based on the degree of relevance to the questions.
# Principles
Compared to traditional machine learning, LLM applications are divided into multiple stages, each requiring different types of data to be fed into the model. The main challenge is how to select data that meets the specific requirements of LLMs. In the training stage, current methods provide training data by scoring based on data samples or model states, or by using empirical training strategies. In the RAG stage, data is selected through metrics, rules, or models to supply relevant knowledge to the LLM.
Data Shuffling for Training. As LLMs continuously trained over new tasks, it may begin to lose its ability to retain early task knowledge, a phenomenon known as catastrophic forgetting [287], [286]. To address this, some data supply methods are employed to manage datasets during the training process and provide high-quality data. Meanwhile, some methods, instead of altering the dataset, propose reasonable learning strategies.
(1) Data Pruning. Data pruning refers that during the training process, partial shuffling is carried out on the training dataset, and high-quality data is retained, so that the model is trained on the data that has not been fully learned and is of high quality.
Sample Scoring. Some methods [137], [66] prune datasets by scoring samples, selecting high-scoring samples for subsequent training. [137] applies the EL2N metric to identify important examples in a dataset, written as $\chi ( x _ { i } , y _ { i } ) = \mathbb { E } \| f ( x _ { i } ) - y _ { i } \| _ { 2 }$ , where $f ( x _ { i } )$ is the model’s prediction and $y _ { i }$ is the true sample. Based on the computed EL2N values, it periodically prunes irrelevant data during training. [66] extends the EL2N metric to evaluate sample importance, written as $\hat { \chi } _ { e m a } ( x , y ) \gets \alpha \cdot \hat { \chi } _ { n l u } ( x , y ) + ( 1 - \alpha ) \cdot \hat { \chi } _ { e m a } ( x , y )$ , where $\alpha$ is a smoothing parameter. Based on extended EL2N values, it periodically selects data subsets for training.
Model State Scoring. Unlike the aforementioned approach of scoring samples and prune the dataset, some methods [372], [56], [416], [276] prune the distribution of dataset by scoring the model’s state (such as training loss and learning status).
Moving-one-Sample-out (MoSo) [372] identifies and selects the most informative LLM pre-training samples by assessing the influence of a specific sample on the training loss. The MoSo score measures how the training loss over the dataset $\boldsymbol { S }$ excluding $z$ (i.e., $S \setminus z$ ) would change when the sample $z$ is removed. This approximation measures the agreement between $z$ and $S \backslash z$ , where the sample is considered important and receives a higher score if the gradient of $z$ is consistently aligned with the average gradient.
Similarly, Velocitune [276] is a dynamic domain weight adjustment method based on learning velocity, which is de$\begin{array} { r } { V _ { t } [ i ] = \frac { \ell _ { t } \left. i \right. - \ell _ { \mathrm { t a r g e t } \left[ i \right] } } { \ell _ { \mathrm { i n i t } \left[ i \right] } - \ell _ { \mathrm { t a r g e t } \left[ i \right] } } } \end{array}$ $V _ { t } [ i ]$ velocity for domain $i$ at step $t$ , $\ell _ { t } [ i ]$ is the current loss for domain $i$ , $\ell _ { \mathrm { t a r g e t } [ i ] }$ is the target loss for domain $i$ , predicted by the scaling law [201], $\ell _ { \mathrm { i n i t } [ i ] }$ is the initial loss for domain $i$ , calculated before training starts. The method calculates the learning velocity of each domain and dynamically adjusts the sampling weights, giving more attention to domains with slower learning progress, thereby achieving a balanced learning effect.
Some methods [56], [416] combine reinforcement learning based on scoring the model to adjust the dataset. ODM [56] is based on the multi-armed bandit algorithm. It regards each data domain as an arm and uses classical reinforcement learning methods. By taking the training loss as the reward function, it optimizes the data mixing ratio online to adapt to training dynamics. That is, it dynamically adjusts the sampling weights of each data domain and preferentially selects data with high information gain and large losses.
MOS [416] proposes a scoring network that dynamically adjusts the sampling probabilities of different datasets based on the model’s current learning state, combined with reinforcement learning, to alter the distribution of training data. This adjustment is guided by three reward functions: (i) Transferability for measuring the similarity (e.g, cosine distance) between datasets as the reward. (ii) Learning difficulty for measuring the perplexity changes. (iii) Learning trajectory for smoothing the reward values using Exponential Moving Average (EMA) to more stably optimize the sampling distribution.
(2) Training Strategy. In addition to directly prune the dataset during training, appropriate learning strategies can also alleviate catastrophic forgetting. [123] found that different abilities vary with data volume, with mixed data improving abilities at low resources and causing conflicts at high resources. Thus, DMT [210] is proposed, which first finetunes on a specific dataset and then fine-tunes on mixed data to effectively balance general and specialized abilities and mitigate conflicts and forgetting. It proposes a strategy where training data are sorted based on criteria like input length, attention weights and training loss, allowing the model to gradually learn from simple tasks to more complex ones.
Data Selection for RAG. In the RAG stage, it is necessary to retrieve the stored knowledge (see details in 2.4.3) and provided the retrieved results to the LLM. During this process, it needs to ensure the effectiveness of the retrieved results in order to obtain better answers from the LLM [280]. Currently, the retrieval quality is mainly guaranteed through RAG knowledge filtering and RAG knowledge re-ranking.
(1) RAG Knowledge Filtering. RAG knowledge filtering refers to filtering out documents with poor relevance after retrieval. Some methods [280], [114], [87] use a model as a judge to filter documents. [280] uses small language models (SLMs) as filters, performing preliminary predictions and evaluating difficulty. For easy samples, the SLM’s predictions are used as the final decision; for difficult samples, the top N most likely labels are selected from the SLM’s predictions for subsequent re-ranking. In Chatlaw [114], after retrieving relevant information, the LLM evaluates the retrieved content. Only content that is deemed highly relevant after evaluation is used to generate the final response, effectively reducing interference from irrelevant or incorrect information. MAIN-RAG [87] collaboratively filters and scores retrieved documents by leveraging multiple LLM agents to enhance relevance and reduce noise. The framework adopts a dynamic filtering mechanism that uses score distributions to adjust relevance thresholds, ensuring high recall of relevant documents while minimizing computational overhead.
(2) RAG Knowledge Re-ranking. After filtering, multiple documents may remain, requiring re-ranking of the retrieval results to place the most relevant ones at the top for more accurate model output. Research on [128] shows that using a large model for re-ranking performs better than methods like Maximum Marginal Relevance (MMR) and Cohere reranking. For large model re-ranking, general-purpose large language models (e.g., GPT) can be used directly, or specialized zero-shot re-ranking models such as Cohere rerank [12] or RankVicuna [318] can be employed. The latest ASRank [47] leverages pre-trained LLM to compute the matching probability between document answers and answer cues, scoring and re-ranking the retrieved documents.
# 2.5.2 Data Compression
Data compression refers to compressing the input data for the model. Previous studies have shown that prompts are crucial for triggering LLM domain-specific knowledge, and prompts are typically designed based on specific tasks (including chainof-thought, context learning, and historical dialogues). As the complexity of chain-of-thought, context learning, and RAG increase, longer prompts are required [189]. However, overly long prompts may lead to higher response latency, increased costs, and even exceeding the maximum token limit. Existing methods mainly compress the model inputs in two aspects. Some methods [427], [101], [348], [200], [335] compress the retrieved results in the RAG stage and then put them into the prompt, while other methods compress the entire prompt [189], [190], [303], [293], [102].
# Principles
Compared to traditional machine learning, LLMs often require longer inputs, and in some cases, the input must be compressed to fit into the model. The main challenge is how to compress the input without losing important information. Current methods mainly achieve this through compression based on information entropy, rule-based templates, or model-driven approaches.
RAG Knowledge Compression The retrieved RAG knowledge can be compressed by a model to make small texts carry more information. Techniques like RECOMP [427], CompAct [348], and FAVICOMP [200] adopt rule-based RAG context compression schemes, where predefined rules or templates explicitly guide the model to extract key information and remove redundant content. Alternatively, methods like xRAG [101] and COCOM [335] use soft prompt-based RAG context compression schemes, where learnable parameters (such as the modality projector W in xRAG or the overall model training in COCOM) enable implicit vector learning. These implicit vectors dynamically adjust attention weights when the model processes input, allowing the model to adaptively optimize context representations under context compression.
Prompt Compression. Prompt compression means that after the retrieved knowledge is put into the Prompt, the entire Prompt will be compressed.
(1) Metric-Based Compression. Some studies [189], [190], based on the hypothesis that a vast amount of knowledge is stored in the model parameters, have proposed methods to compress prompts while minimizing information loss. LLMLingua [189] uses a perplexity criterion to remove redundant tokens from the original prompt. By quantifying the negative logarithmic probability (perplexity) of each token through a small model, LLMLingua identifies and removes tokens that can be predicted from the model’s inherent knowledge, thereby shortening the prompt while retaining essential context.
LLMLingua’s extended version, LongLLMLingua [190], uses a dual-granularity compression strategy: (i) Coarsegrained compression initially filters key information at the document level to provide more focused content for finegrained compression; $( i i )$ Fine-grained compression further optimizes at the token level to precisely retain key information. These two strategies work together to improve the quality of the prompt and model performance. LongLLMLingua also assigns different “compression budgets” to documents based on their importance, aiming to achieve the best global compression effect.
(2) Finetuned-Model-Based Compression. Unlike the aforementioned methods that use a small model’s perplexity for compression, some methods [303], [293], [102] directly perform the compression task end-to-end by fine-tuning a model. LLMLingua-2 [303] defines prompt compression as a problem of classifying tokens and trains a dedicated model for compression. It uses a Transformer encoder to capture bidirectional contextual information, ensuring that the compressed prompt is faithful to the original. [293] proposes a technique called ’gisting’, where a language model is trained to condense the prompt into a compact ’gist token’. These tokens encapsulate the core semantic content of the prompt and can be cached for later use. This method achieves a compression rate of up to 26 times. [102] suggests a method to transform pre-trained language models into AutoCompressors. The AutoCompressor compresses long contexts into summary vectors, and training is performed on the model parameters using these summary vectors.
# 2.5.3 Data Packing
Data Packing aims to address the requirement for uniform sequence lengths in LLMs’ training inputs, which combines short texts in an appropriate way to enhance text coherence and reduce the number of padding tokens. In this way, we can avoid the excessive truncation caused by the drawbacks of simple concatenation and splitting methods [116].
Short Sequence Insertion. Some methods [116], [259] involve inserting short sequences into long sequences to minimize padding. The Best-fit Packing [116] first splits long documents according to the model’s context length, then sorts all document blocks in descending order of length. For each document block, it selects the training sequence set with the smallest remaining capacity that can accommodate it. [259] prioritizes long documents and uses a greedy algorithm to fill remaining space with short document segments (sequences), reducing padding and minimizing document concatenation to lower contextual noise.
# Principles
Compared to traditional machine learning, LLMs place higher demands on the semantic quality of training data. Additionally, due to the requirement for uniform input lengths, a key challenge is maintaining semantic integrity without excessive truncation. Existing techniques tackle this through short-sequence insertion, sequence concatenation, and semantic-aware composition. However, it remains crucial to account for the impact of these data packaging operations on overall training efficiency.
Sequence Combination Optimization. Some methods [218], [316] optimize sequence combinations for efficient packing. [218] proposes two efficient sequence packing algorithms: (1) The Shortest Pack First Histogram Packing (SPFHP) uses a sequence length histogram, sorts sequences from long to short, and applies a worst-fit algorithm to prioritize placing the histogram intervals into the remaining largest “packs”, while limiting packing depth to avoid creating excessive small packs, thus improving space utilization. (2) The Non-Negative Least Squares Histogram Packing (NNLSHP) converts the packing problem into a non-negative least squares problem, using dynamic programming to enumerate reasonable sequence combination strategies, constructing a packing matrix to determine the strategy’s repetition count. It also assigns small weights to short sequences’ residuals to reduce long sequence leftovers, achieving efficient packing. [316] splits documents into multiple fixed-length “buckets” based on their length, ensuring that each sequence comes from the same document to avoid cross-document attention issues. Additionally, by combining Variable Sequence Length Curriculum (VSL), different lengths of sequences are dynamically sampled during training to maintain a consistent total token count.
Semantic-Based Packing. Some methods [364], [349] improve data coherence through semantic-based data packing. [349] reorders pretraining data by combining semantically related documents into coherent input contexts, allowing the
LLM to read and reason across document boundaries. Similarly, SPLICE [364] randomly selects a document as the root document, and in a breadth-first manner, uses retrieval methods like BM25 and Contriever (trained from a mix of Wiki and CCNet data) to retrieve $k$ similar documents, adding them to the training sample until the maximum length is reached. Finally, the tree structure is flattened using a specific tree traversal strategy to generate the training example.
# 2.5.4 Data Provenance
Data Provenance is the process of tracking the sources, transformations, and lineage of data, which is increasingly recognized critical in ensuring the reliability, transparency, and accountability of LLM data [54].
# Principles
Compared with traditional machine-learning models, LLMs demand heightened safeguards for output security owing to their powerful generative capabilities. The central challenge is to preserve output integrity without degrading quality. Current solutions embed watermarks or deploy statistical-detection techniques to reveal any tampering.
Embedding Markers. Current data provenance methods [482], [105], [256], [212] generally modify the generation logic to embed covert markers into the text. This is done in a way that does not disrupt the text itself, thereby providing a medium for tracing the origin of the data.
Bileve [482] enhances the traceability and integrity of text by embedding two distinct levels of signals: (1) Statistical signal embedded globally to detect whether the text originates from a specific model. (2) Content-related signature embedded within each generation unit to verify if the text has been tampered with. During detection, the validity of the signature is first verified; if the signature is invalid, a statistical test is then used to determine whether the text comes from the target model.
Unlike Bileve that emphasizes strict traceability after text tampering, [105] focuses on embedding watermarks in a way that preserves the quality of the generated output. It embeds hidden markers that can only be detected by individuals possessing a specific key, while remaining imperceptible to others that the text has been altered. Specifically, the method employs a pseudo-random function (PRF, used to generate seemingly random numbers) to determine the shuffling of each output word, ensuring that the generated text is statistically indistinguishable from the original model’s output. During detection, the presence of hidden markers is ascertained by calculating a score for each word in the text (based on the numbers generated by the pseudo-random function).
Unlike previous approaches, UPV [256] introduces a watermarking method that enables detection without requiring access to the key used during generation, thereby eliminating the risk of key leakage. It employs two independent neural networks for watermarking. During text generation, the watermark generation network utilizes an embedding module and a fully connected classifier to predict watermark signals based on token information within a sliding window, and accordingly adjusts the language model’s output distribution. For detection, an LSTM-based network takes the text sequence as input and identifies the watermark, leveraging shared token embedding parameters with the generation network.
Compared to methods that require specific keys for detection, [131] embeds a special type of watermark into text generated by language models, which can be detected by anyone without the need for any secret information. It selects specific lexical combinations (rejection sampling, ensuring that the embedding of the marker does not affect the naturalness of the text) during text generation, in conjunction with an error correction mechanism (error-correcting codes, allowing the marker to be recovered even after partial modification of the text), to embed an encrypted signature (public key signature, ensuring the non-forgeability of the marker) into the text. During detection, one only needs to extract these specific lexical combinations from the text and verify the validity of the signature to determine whether the text contains the marker.
Statistical Provenance. Unlike the aforementioned methods that rely on detecting special markers for tracing the origin, [212] achieve data provenance through the statistical information of the vocabulary. Specifically, before generating each word, the model randomly divides the vocabulary into two parts (green-listed and red-listed tokens) and tends to favor the shuffling of green-listed tokens during the generation process (green-listed tokens are a randomly selected subset of the vocabulary). By employing statistical tests (a mathematical method used to determine whether text adheres to specific rules), it is possible to detect whether the proportion of greenlisted tokens in the text is abnormal, thereby ascertaining if the text is machine-generated.
# 3 LLM for Data Management
After preparing the LLMs with carefully processed / stored / served data, we next introduce the LLM techniques that can be utilized to enhance data management tasks, including data manipulation, data analysis, and data system optimization.
# 3.1 LLM for Data Manipulation
LLM can be employed to explore and prepare appropriate data for non-LLM-oriented tasks, such as data cleaning for classification tasks, data integration for extracting wellstructured tables from unstructured sources, and data discovery for identifying relevant datasets. Unlike data preparation pipelines designed specifically for LLM applications, these methods focus on enhancing the quality and utility of data for downstream analytical or machine learning tasks.
# 3.1.1 LLM for Data Cleaning
Data cleaning focuses on transforming corrupted or lowquality data into a reliable form suitable for downstream applications (e.g., statistical analysis or training machine learning models). It encompasses a range of tasks such as handling missing values, correcting typos, resolving formatting inconsistencies, and addressing dependency violations. These tasks are typically categorized into data standardization, error detection and correction, and data imputation.
Data Manipulation 1. Data Cleaning 2. Data Integration Data Standardization Entity Matching Prompt-based LLM-GDO Evaporate Prompt-based MatchGPT BATCHER Agent-based CleanAgent AutoDCWorkflow Pipeline Generation Multi-Model COMEM Data Error Processing Collaboration Localized Multi-Task Prompt-based Cocoon Multi-News+ Fine-tuning Jellyfish LLM-based LLMClean LLMError Schema Matching Context Enrichment Bench Fine-tuning-based GIDCL Prompt-based LLMSchemaBench Data Imputation ContexRt-AEGnriched Magneto KG-RAG4SM Prompt-based RetClean Multi-News+ Agent-based Agent-OM Harmonia RAG Assisted RetClean LLMErrorBench Orchestration 3. Data Discovery Data Profiling Data Annotation Prompt-based AutoDDG LEDD Prompt-based CHORUS Goby RAG-Assisted Pneuma RAG-Assisted RACOON Birdie Data Analysis Data System Optimization 1. Structured Data Analysis 1. Configuration Tuning Relational Data Analysis 的 Prompt-based LATuner λ-Tune DB-GPT Multi-Step ExtPrAacCtoHrINCTOAPERA DaRteaCAocdTearble ERnAriGc-hbmasendt GPTuner Andromeda End-to-end TableGPT CABINET TabPedia Alignment DB-GPT E2ETune QA Graph Data Analysis 2. Query Optimization NL2GQL NAT - NL2GQL -NL2GQL Prompt-based GenRewrite LITHE LLM-based Semantic UniKGQA FlexKBQA GraphGPT RAG-based Enrichment R-Bot : Training-Enhanced LLM-QO LLMSteer 2. Semi-structured Data Analysis Improvement Semi-structured SPREADSHEET 3. Anomaly Diagnosis Tables BENCH MiMoTable Prompt-based DB-GPT : 3. Unstructured Data Analysis RAG-based Enrichment D-Bot ByteHTAP Documents UDOP Pix2Struct DocPedia Multi-Agent Collaboration Panda D-Bot PrLoagnragumamgieng RepoFusion CoCoMIC LocalFiizneed-tSupneicnigalized D-Bot
Traditional data cleaning methods depend on rigid rules and constraints (e.g., zip code validation), demanding substantial manual effort and domain expertise (e.g., schema knowledge in financial data) [237], [432]. Additionally, they often require domain-specific training, which restricts their generalizability [63]. Recent studies show that large language models (LLMs) can address these limitations by offering natural language interfaces that reduce manual and programming effort, eliminate the need for complex runtime environments, and support seamless integration of domain knowledge. These methods primarily target the following tasks.
Data Standardization. Data standardization involves converting diverse, inconsistent, or non-conforming values into a consistent format to ensure reliable analysis and effective downstream processing. Existing methods use either structured LLM prompting for specific cleaning operations or LLM agents for automated pipeline generation.
(1) Prompt Based End-to-End Standardization. The first approach constructs well-structured prompts with explicit standardization instructions and employs advanced prompting techniques (e.g., Chain-of-Thought) to improve the effectiveness of LLM-based standardization methods. For example, LLM-GDO [279] utilizes user-defined prompts (UDPs), including in-context learning examples, to implement LLM-based operators that replace traditional user-defined functions (UDFs) across various standardization tasks (e.g., normalizing numerical values). This method simplifies logic implementation and facilitates the seamless integration of domain knowledge. Evaporate [63] employs LLMs to transform semi-structured documents into structured views through two main strategies: (i) Evaporate-Direct, which prompts the LLM to extract values directly, and (ii) Evaporate-Code, which guides the LLM to synthesize extraction code and ensembles multiple candidate functions using weak supervision to improve output quality while maintaining low cost.
(2) Agent Based Operation and Pipeline Generation. To address the inefficiencies of LLM-based solutions, such as the reliance on multi-turn prompts and expert-level prompt engineering, the second method employs LLM agents to automatically generate cleaning operations and orchestrate end-to-end pipelines. For instance, CleanAgent [319] integrates domain-specific APIs with autonomous agents to execute a standardization pipeline that includes API call generation (e.g., clean date(df, ‘‘Admission Date’’, ‘‘MM/DD/YYYY’’)) and iterative code execution. Similarly, AutoDCWorkflow [237] adopts LLM agents to construct pipelines for resolving duplicates and inconsistent formats. The agent performs step-by-step reasoning to identify relevant columns, evaluate data quality, and generate appropriate operations (e.g., upper() and trim()), while leveraging tools such as OpenRefine for execution and feedback.
Data Error Processing. Given a data entry, error processing typically involves two steps: detecting erroneous values and correcting these values. Typical errors include typos, invalid formats, type mismatches, numeric outliers, and dependency violations. Existing methods generally fall into two categories: employing LLMs for direct end-to-end error processing, or enhancing context models to better guide the detection and correction process.
(1) Prompt Based End-to-End Error Processing. To support end-to-end data error processing, the first approach employs prompting techniques to either directly handle data errors or generate the corresponding processing functions. For instance, Multi-News $^ +$ [103] employs Chain-of-Thought (CoT) prompting, majority voting inspired by human annotation practices, and self-consistency checks to enhance classification accuracy and transparency when processing noisy documents. Similarly, Cocoon [461] constructs semantic detection prompts and divides datasets into batches, allowing the LLM to analyze sampled values (e.g., 1,000 entries per column) and identify typos or inconsistencies (e.g., “mapping English” $$ “eng”), thereby supporting batch-wise data cleaning. GIDCL [432] adopts a creator-critic framework in which the LLM iteratively refines lightweight error detection models and generates pseudo-labeled data using handcrafted prompts and in-context examples to produce both detection and correction functions, further enhanced by structural correlation learning with Graph Neural Networks (GNNs).
(2) LLM Based Cleaning Context Enrichment. To address the inefficiencies and limited scalability of manual cleaning context model construction in dynamic environments, the second approach leverages LLMs to enrich data cleaning context models and more effectively capture semantic relationships within the data. For example, LLMClean [78] proposes an automated LLM-based method for generating context models by extracting ontological functional dependencies (OFDs) using both prompt ensembling and fine-tuned LLMs (e.g., Llama-2). The extracted OFDs are then used to identify data errors (e.g., value inconsistencies) and guide LLM-based repairs through iterative feedback from integrated correction tools such as Baran. LLMErrorBench [74] employs LLM agents equipped with Python (via IPython) and prompted with task-specific instructions and contextual hints (e.g., error locations) to explore, modify, and repair datasets iteratively. Corrections (e.g., value replacement, missing data handling) are guided by performance feedback from pre-defined code execution and evaluation pipelines.
(3) Fine-tuning Based End-to-End Error Processing. To improve error correction accuracy while preserving computational efficiency and model adaptability, the third approach fine-tunes LLMs to capture dataset-specific patterns and dependencies that are typically difficult to model through prompting alone. For example, GIDCL [432] fine-tunes a local LLM (e.g., Mistral-7B) using Low-Rank Adaptation (LoRA) to optimize error correction, constructing training data from labeled tuples and pseudo-labeled tuples generated via LLMbased augmentation, with each training instance formatted as a context-enriched prompt comprising: (i) an instruction (e.g., “Correct the ProviderID to a valid numeric format”), (ii) a serialized erroneous cell with row and column context (e.g., “<COL>ProviderID<VAL>1x1303...”), (iii) in-context learning demonstrations (e.g., “bxrmxngham $$ birmingham”), and (iv) retrieval-augmented examples from the same cluster (e.g., clean tuples via k-means).
Data Imputation. Given a data entry with missing attribute values (e.g., NULL), data imputation aims to infer the missing values using available contextual information accurately. Existing methods either (i) use structured prompts to convey contextual hints to LLM, or (ii) apply retrieval-augmented generation (RAG) to integrate relevant external data.
(1) Prompt Based End-to-End Imputation. To incorporate contextual information for imputing missing values, the first approach constructs structured prompts. For example, RetClean [129] enhances LLM effectiveness by serializing each tuple into a formatted representation (e.g., “[Name: John; Age: 25; Gender: NULL]”) and pairing it with a targeted question such as “What is the correct value for Gender?”. This prompt design enables the LLM to generate accurate, context-aware missing values.
(2) RAG Assisted Localized Imputation. To enable online LLMs in handling unseen, domain-specific, or private datasets, the second approach adopts the retrieval-augmented generation (RAG) paradigm. For example, RetClean [129] introduces a retrieval-based data cleaning framework that indexes a data lake using both syntactic (Elasticsearch) and semantic (Faiss/Qdrant) methods. It retrieves the top- $k$ relevant tuples, reranks them (e.g., using ColBERT), and then leverages an LLM to infer missing values, while maintaining lineage tracking for transparency and traceability.
# 3.1.2 LLM for Data Integration
Data integration aims to align elements across heterogeneous datasets to enable unified access, analysis, and knowledge extraction. For instance, it includes identifying tables or records that correspond to the same real-world entity. Moreover, it facilitates downstream tasks such as data augmentation by establishing semantic relationships across sources.
Traditional integration methods often struggle with semantic ambiguities and conflicts, particularly in complex integration scenarios without domain-specific knowledge [277].
Furthermore, classical models (e.g., pretrained models) generally require large amounts of task-specific training data and tend to degrade in performance when encountering out-ofdistribution entities [308]. In contrast, recent studies have shown that LLMs possess strong semantic understanding, enabling them to uncover correlations across datasets and incorporate domain-specific knowledge, thereby offering robust generalization across diverse integration tasks.
Entity Matching. The goal of entity matching is to determine whether two entries refer to the same real-world entity. Existing methods leverage LLMs through well-structured prompts and advanced reasoning mechanisms, incorporate multiple models for collaborative matching, and apply multitask fine-tuning to further enhance performance.
(1) Prompt Based End-to-End Matching. To improve LLM’s effectiveness on matching tasks, the first approach crafts well-structured prompts and integrates auxiliary mechanisms to strengthen the robustness of the reasoning process. Manually-Crafted Prompt. This method incorporates detailed instructions and illustrative examples into the prompts to guide LLM in performing entity matching more effectively. For example, MatchGPT [308] evaluates the performance of both open-source and closed-source LLMs (e.g., Llama 3.1 and GPT-4o mini) with (i) different prompt designs, (ii) the selection of in-context demonstrations, (iii) automatic generation of matching rules, and (iv) fine-tuning LLMs using a shared pool of training data. To reduce inference costs, BATCHER [134] introduces a batch prompting method that allows multiple entity pairs to be processed simultaneously. It optimizes in-context learning by (i) grouping entity pairs into a single prompt and (ii) applying a greedy cover-based strategy to select demonstrations such that each query in the batch is semantically close to at least one example.
Pseudo-Code Guided Reasoning. To mitigate hallucinations arising from over-reliance on an LLM’s internal knowledge, this method integrates external formalized representations to enhance the robustness and reliability of the reasoning process. For example, KcMF [430] guides LLMs using expertdesigned pseudo-code instructions structured as a sequence of if-then-else logical conditions, combined with external domain knowledge (e.g., datasets and examples). It further adopts an ensemble strategy by generating outputs from different knowledge sources (e.g., Wikidata and domain-specific datasets) and applies a voting mechanism to aggregate results, improving consistency and accuracy.
(2) End-to-End Matching with Multi-Model Collaboration. To leverage the strengths of different models across tasks, the second approach employs collaborative entity matching using models of varying sizes. For example, COMEM [400] introduces a compound entity matching framework that combines multiple strategies with LLM collaboration to address global consistency, which is often ignored in binary matching. It employs (i) a local strategy using a medium-sized LLM (3B-11B) as a matcher or comparator to rank top- $k$ candidates via bubble sort, reducing position bias and context length dependency; and (ii) a global selection strategy using a stronger LLM (e.g., GPT-4o) to refine top- $k$ candidates by modeling inter-record interactions.
(3) Localized LLM Fine-tuning of Multi-Task Learning. To enhance the generalization capability of local LLMs, the last approach integrates multiple task-specific datasets within a unified multi-task instruction tuning framework. For example, Jellyfish [454] applies parameter-efficient instruction tuning to locally deployed LLMs (7B-13B) across diverse data processing tasks. It employs techniques such as chainof-thought prompting over task-specific serialized data and reasoning data distillation, using explanation traces generated by a larger mixture-of-experts model (Mixtral-8x7B-Instruct) to guide the learning process.
Schema Matching. The objective of schema matching is to identify correspondences between elements of different database schemas (e.g., matching attribute names “employee ID” and “staff number”). Existing approaches directly apply prompting techniques to enable LLMs to perform end-to-end matching, utilize retrieval-augmented generation (RAG) to enhance contextual understanding, and employ LLM agents to orchestrate the overall matching workflow.
(1) Prompt Based End-to-End Matching. To facilitate schema matching without requiring rigid code implementations, the first method employs various prompting techniques to guide LLM in identifying the desired mappings. For example, LLMSchemaBench [304] applies prompt engineering techniques to interact with LLMs, defining four task scopes that differ in the level of contextual information included in the prompts. The prompts are constructed using established design patterns: the persona pattern (e.g., instructing the LLM to act as a schema matcher), meta language creation (e.g., explicitly defining valid match criteria), Chain-of-Thought reasoning, and the output automater (e.g., generating structured JSON outputs for downstream automation).
(2) End-to-End Matching via Context-Enriched RAG. To enrich the matching context and improve accuracy, the second method integrates retrieval-augmented generation (RAG) with various strategies. For example, Magneto [267] employs a retrieve-rerank framework that combines small pre-trained language models (SLMs) with LLMs to deliver cost-effective and generalizable schema matching. SLMs serve as candidate retrievers, generating an initial ranked list of potential matches from the target table for each input column, which is then refined by LLMs acting as rerankers to improve accuracy. KG-RAG4SM [277] incorporates multiple retrieval strategies, including vector-based, graph traversal-based, and query-based, to extract relevant subgraphs from knowledge graphs (KGs). These subgraphs are further refined through ranking mechanisms and used to augment LLM prompts, thereby improving schema matching performance through enriched contextual input.
(3) Agent-Based Matching Workflow Orchestration. To address complex matching patterns, the final approach leverages LLM-based agents to orchestrate the end-to-end matching workflow. For example, Agent-OM [320] employs two LLM agents (i.e., Retrieval Agent and Matching Agent) to control the workflow by decomposing tasks via Chain-ofThought (CoT) prompting, invoking specialized tools (e.g., syntactic/lexical/semantic retrievers and matchers), and accessing a hybrid database (relational $^ +$ vector) for memory storage and retrieval. Harmonia [340] leverages LLM-based agents to orchestrate data harmonization tasks, combining predefined data integration primitives (e.g., schema matching, value matching) with on-demand code generation when the primitives are insufficient. In addition, it employs techniques like ReAct for reasoning and action planning, interactive user feedback for error correction, and declarative pipeline specifications for reproducibility.
# 3.1.3 LLM for Data Discovery
Data discovery focuses on identifying relationships within datasets through tasks like data annotation (e.g., column type classification) and profiling (e.g., metadata generation). Unlike data analysis, which emphasizes statistical computations or factual answer generation, data discovery enables deeper semantic understanding critical for downstream applications such as integration, search, and recommendation.
Existing data discovery methods face two limitations. First, they typically consider limited interaction between queries and tables [163]. Second, many of these approaches rely heavily on large training datasets, struggle with distribution shifts, and fail to generalize to rare or domain-specific data [143], [217]. Recent studies have shown that LLMs can effectively address these challenges by generating high-quality metadata, enriching dataset context, and supporting natural language interfaces for data discovery tasks.
Data Profiling. Data profiling typically involves characterizing a given dataset by generating additional information (e.g., dataset descriptions). Recent methods often employ prompting techniques to guide LLM in generating such metadata by leveraging their pretrained knowledge and contextual understanding.
(1) Manually Crafted Profiling Prompt Engineering. To profile different aspects of a dataset without extensive manual effort or code implementation, the first approach relies on a set of manually crafted profiling prompts. For example, AutoDDG [456] utilizes LLM with carefully designed prompts to generate two types of descriptions (i.e., User-Focused Descriptions (UFDs) for readability and Search-Focused Descriptions (SFDs) for search optimization) tailored to the dataset’s content and intended usage. LEDD [58] employs carefully crafted prompts to support core data discovery tasks in data lakes. For hierarchical cataloging, prompts instruct LLM to summarize data clusters into semantically meaningful categories. For semantic search, prompts refine natural language queries before embedding and retrieval. For real-time relation analysis, prompts guide LLM in comparing expanded graph nodes and describing inter-table relationships.
(2) RAG Assisted Context Enrichment. To enhance retrieval effectiveness across diverse query types, the second method adopts a hybrid approach that integrates diverse retrieval techniques. For example, Pneuma [72] adopts a RAG framework to retrieve relevant tables from databases, data lakes, or repositories based on natural language queries. It combines LLMs with traditional retrieval techniques, such as full-text and vector search, using LLMs for both schema narration (i.e., generating meaningful column descriptions) and as judges to refine and rerank retrieved results.
Data Annotation. Data annotation involves assigning semantic or structural labels to data elements, such as identifying column types (e.g., Manufacturer or birthDate from the DBPedia ontology). Recent methods leveraging LLM typically design prompts with task-specific annotation instructions. Additionally, some approaches employ retrievalaugmented generation (RAG) techniques and the contextual reasoning capabilities of LLMs to further enrich the annotation context and improve performance.
(1) Task-Specific Annotation Prompt Engineering. To flexibly support diverse annotation tasks, the first approach encodes task-specific instructions and requirements within carefully crafted prompt templates. For example, CHORUS [203] integrates LLMs into the annotation pipeline using task-specific prompts that incorporate instructions, demonstrations, data samples, metadata, domain knowledge, and output formatting guidance. Goby [204] explores the use of LLMs for semantic column type annotation in a domain-specific enterprise setting by crafting a set of tailored prompts. It proposes several techniques to improve performance, including tree serialization (providing the full ontology as prompt context), grammar-constrained decoding (enforcing hierarchical structure during generation), and step-bystep prompting (Chain-of-Thought strategy to guide ontology navigation). LLMCTA [217] evaluates diverse LLMs for generating and refining label definitions by employing methods like knowledge generation prompting (e.g., producing initial demonstrations), self-refinement (error-based definition improvement), and self-correction (two-step pipeline featuring a reviewer model).
(2) RAG Assisted Annotation Context Enrichment. To supply LLM with relevant annotation context, the second approach utilizes diverse retrieval strategies within retrievalaugmented generation (RAG) frameworks to enrich the input. Classical Retrieval Technique. To mitigate the shortcomings of vanilla LLM-based annotation, such as outdated knowledge, this method augments the context with retrieved external knowledge. For example, RACOON [408] performs semantic type annotation by leveraging a Knowledge Graph (KG) to retrieve entity-related information (e.g., labels and triples) associated with column cells. This information is then processed into concise contextual representations and incorporated into LLM prompts to improve annotation accuracy.
$\bullet$ LLM Based Generation. To fully leverage LLM’s internal knowledge, this method relies on the model itself to generate relevant contextual information. For example, Birdie [163] leverages LLMs to automatically generate natural language queries for training a differentiable search index (DSI), which facilitates linking relational tables to queryable knowledge by enriching them with contextual semantics. It supports scalable structured data annotation, using prompts composed of structured markdown tables comprising captions, headers, and sample rows alongside explicit task instructions.
# 3.2 LLM for Data Analysis
Apart from data manipulation, LLMs hold the potential to revolutionize traditional data analysis paradigms by supporting natural language interfaces and enabling advanced, semantic-aware analysis tasks that typically require human involvement. In this section, we discuss the challenges and techniques of LLM-based data analysis, including structured data analysis, semi-structured data analysis, and unstructured data analysis.
# 3.2.1 LLM for Structured Data Analysis
Structured data refers to data with well-defined schemas like relational (tabular) data [107] and graph data [60].
NL2SQL [452], [247], [370], [234], LLM as [229], [317], [234] Relational Data NL-Interface NL2Code [443], [104], [176], [171] Multi-Step QA [494], [226], [475], [464], [404] SemanticStructured Aware End-to-End QA [240], [365], [306], [82], [477], [471] LLM as NL2GQL [252], [493] NL-Interface Graph Data Retrieval-ThenReasoning [458], [193] + Execution-ThenReasoning [424], [246] SemanticAware Fine-Tuning Based [441], [397], [375] Data Analysis Agent Based [192], [100] Semi-Structured Markup Language ? Semi-Structured Tables [165], [281], [245] Document DOeCpRe-ndent [376], [62] Text Masked Learning [225], [49] OCR-Free Unstructured Visual Embedded Learning [174], [138] Program Analysis Based [271], [457] Program Language Vulnerability Detection Case-driven Prompt Engineering [270], [492] Code Summarization [154], [51], [284] SemanticAware Code Completion [357], [118], [413]
# 3.2.1.1 Relational Data Analysis
LLM for Natural Language Interfaces. Basic analysis jobs for relational data are typically characterized by welldefined operations. These include basic calculations (e.g., summation, averaging, counting, ranking), statistical analysis (e.g., regression, K-means clustering), and data quality assurance processes (e.g., constraint validation, outlier detection). Such tasks can generally be supported by tools like SQL or Python libraries (e.g., Pandas).
(1) NL2SQL. With the help of LLM, users can directly perform operations using natural language. NL2SQL focuses on translating natural language queries into SQL commands by leveraging techniques such as (i) schema linking, which aligns user intents with database schema to resolve ambiguities [452], [247], (ii) content retrieval, which dynamically extracts relevant information from the database to refine query generation [370], [234], and (iii) SQL generation strategies such as multi-step generation, intermediate SQL representation, and different decoding strategies [229], [317], [234], [483], [484].
(2) NL2Code. Different from NL2SQL, NL2Code approaches
Iterative Question Processor
(a) PSytQhLon TLoLoMls OIuntppuutt 用 ITnatbelremediate 屈 Table Answer LLM End-to-End (1) Pre-Train Image Caption Question
(b) LLM / (For MLLM) Table Recognition 国 Table MLLM (2) Fine-Tuning Fact Verification Answer (For LLM / MLLM) Table QA
emphasize enhancing relational data analysis through generating Python code (Eenc.ogd.e, Pandas, NumPy), which includes a vast number of library APIs characVtiseiornized by high variability and complexQiuetsyt,i nand Toefxten requirinDge tdhere handling of complex chain operations. Recent advancements address these issues to some extent.
Model Finetuning: PACHINCO [443] fine-tunes a 62B parameter PALM [104] model in two stages (i.e., separately using a Python source code corpus with 64B tokens and a Jupyter notebook corpus with 9.6B tokens) so as to improve model performance on analysis-related tasks (e.g., calculate the amount of games added in each year for each month). DataCoder [176] utilizes different types of contexts (e.g., code, text, and data) by employing dual encoders (e.g., data encoder and code $^ +$ text encoder) and one general decoder to generate code in notebooks.
$\bullet$ LLM Based Analysis Agent: Data Interpreter [171], on the other hand, leverages LLMs through APIs to generate task and action graphs. Specifically, they utilize LLM’s semantic reasoning ability to accurately decompose complex user queries into subproblems (e.g., correlation analysis, data exploration, and anomaly detection), and refine and verify each subproblem to improve code generation results for data science tasks.
LLM for Semantic Analysis. Moreover, some jobs require LLM-based analysis, such as those that involve semantic understanding or demand outputs in natural language format (e.g., table summarization). These challenges call for methodologies like (1) multi-step question answering (QA) with diverse decomposition strategies and (2) end-to-end QA leveraging specifically optimized LLMs.
$\bullet$ Multi-Step QA. Multi-step question answering (QA) refers to decomposing complex queries into a sequence of subquestions to facilitate step-by-step reasoning. According to the question decomposition mechanisms, existing methods can be categorized into two types: (1) static decomposition, which follows predefined and fixed processing steps (e.g., retrieve-select-reason), and (2) LLM-driven iterative decomposition, in which the LLM dynamically determines the next operation based on the contextual history of the reasoning process.
(1) Static Decomposition. The static decomposition includes Retriever-Selector-Reasoner frameworks and the variants, which partition tasks into modular components for better multi-step inference and enhanced interpretability. The Extractor-Reasoner-Executor paradigm [494] extracts the relevant segments from the context, generates the logic rules or equations, and performs the rules or executes the equations to get the final answer through LLM prompting. S3HQA [226] trains a retriever which aims to perform initial filtering of heterogeneous resources, utilizes a selector to select the most relevant factual knowledge, and a generation-based reasoner to obtain final answers.
(2) Iterative Decomposition. However, static decomposition paradigm performs poorly on multi-hop queries, while LLMdriven iterative decomposition, which dynamically refines subtasks through recursive reasoning, could effectively address the issue.
TAPERA [475] introduces the query decomposition step into the question answering process by adopting the LLMdriven approach. The Planner decomposes the query into subqueries, forming an initial plan. The Reasoner then generates executable programs for each sub-query, while the Answer Generator derives answers based on the program outputs to fulfill the plan. Finally, the Planner updates or finalizes the plan as needed.
Similarly, ReAcTable [464] and CHAIN-OF-TABLE [404] iteratively generate operations and update the table to present a reasoning chain as a proxy for intermediate thoughts through prompting LLMs and in-context learning.
End-to-End QA. End-to-End Question Answering (QA) refers to approaches in which the answer-generating LLM directly produces the final response without intermediate steps or iterative refinement. Based on the data representation and processing mechanisms, the relevant methods can be classified into table-specific LLM fine-tuning, table content retrieval, and table-as-image analysis.
(1) Table-Specific LLM Fine-Tuning. Fine-tuning LLMs on task-specific table datasets enables them to internalize analytical knowledge directly within their parameters. TableGPT [240] fine-tunes LLMs like GPT-3.5 using a diverse set of table tasks synthesized from real-world tables. Building on Qwen2.5 [324], TableGPT2 [365] introduces a table encoder to generate a hybrid table representation, an adapter to generate query representations, and a LLM decoder generates an agent workflow (i.e., the tool execution pipeline) to derive the final answer. The TableGPT2 model is pre-trained on 593.8K tables and fine-tuned 2.36M question-answer pairs.
(2) Table Content Retrieval. Instead of embedding the whole table, table content retrieval enhances model performance by eliminating noisy parts of the table while retaining information relevant to question answering. CABINET [306] employs a weakly supervised component to produce a parsing statement that defines the criteria for selecting relevant rows and columns, emphasizing the corresponding table cell content. TableMaster [82] constructs a refined subtable through row and column lookup. By leveraging carefully designed LLM prompts (e.g., provide objective, table definition, table information, question, instructions, and response format), it ranks all candidate columns, selects a relevant subset based on the query, and then instructs the LLM to generate an SQL query for extracting the most relevant rows.
(3) Table-As-Image Analysis. Due to the limitations of (textonly) LLMs in understanding table structures, the Table-asImage approach has been proposed, converting tables into images for analysis using multimodal LLMs. Table-LLaVA [477] applies incremental pretraining to LLaVA-7B [258] on 150K table recognition samples (e.g., input a table image and output table representations in HTML, Markdown, or LaTeX), enabling the model to align table structures and elements with textual modality. It is further fine-tuned on 232K samples on question answering, text generation, fact verification, and structure understanding tasks to enhance its instructionfollowing ability. To enable a single model to perform various analytical tasks, TabPedia [471] introduces the concept synergy mechanism, abstracting all table analysis tasks into concepts. Built on Vicuna-7B [476], it appends meditative tokens to the input of the LLM decoder, which adaptively activates different regions of visual tokens and helps the model interpret the intent behind specific task questions. However, such methods face limitations when processing twisted or distorted tables, and their performance degrades significantly when directly handling document images.
# 3.2.1.2 Graph Data Analysis
Different from relational data, graph data represents entities (vertices) and their inter-dependencies (relationships) to explicit model of complex network semantics (e.g., social networks and knowledge graphs) beyond rigid tabular schema, which presents unique challenges due to the vast search space and complex path reasoning in multi-hop queries [59]. Compared with relational data analysis, graph data analysis involves more complex jobs like summarization based on the multi-hop relations across the graph vertices and reasoning over text-attributed graphs whose nodes and edges are associated with text [252], [493]. Graph data can not only be stored in relational databases, but also be stored and queried in knowledge graphs and accessed through SPARQL in RDF databases (e.g., Blazegraph [8] and GraphDB [21]) or Cypher in Neo4j [17].
Traditional graph analysis (e.g., statistical methods, graph neural network (GNN) based methods) encompasses a spectrum of tasks, including node classification (e.g., categorizing academic papers into research domains), graph classification (e.g., predicting node properties over molecular graphs), link prediction (i.e., inferring latent relationships between graph nodes), community detection (i.e., identifying densely connected subgraphs), anomaly detection (i.e., identifying deviations from expected patterns), graph clustering, and etc. However, these methods have their own limitations. Statisticsbased methods fail to handle complex semantic information (e.g., query can be extremely complex and requires human expertise), while graph neural networks (GNNs) exhibit limited generalization capabilities, necessitating task-specific retraining on different tasks.
In contrast, the advent of LLMs offers transformative potential by leveraging their advanced reasoning capacities and cross-domain generalization abilities, which can (1) simplify the query writing costs (e.g., NL interfaces) and (2) achieve semantic-aware analysis unsupported in traditional ones.
Natural Language To Graph Analysis Query. Different from NL2SQL, the syntax of graph query language generation is more complex (i.e., MATCH, LOOKUP, GET and other operations unique to graph data manipulation) and there exist two operation objects (i.e., vertex and edge) [493]. By integrating natural language interfaces with graph data, LLMs facilitate flexible and efficient query generation without the need for specialized model architectures.
To enhance LLMs’ comprehension of the complex syntax of Graph Query Language (GQL), $R ^ { 3 }$ -NL2GQL [493] proposes a hybrid approach leveraging relatively small LLM (e.g., LLaMA3-7B) as a selector and GQL rewriter, while employing a larger LLM (e.g., GPT-4) as a reasoner. The selector identifies the necessary CRUD functions, clauses, and schema, while the rewriter refines the query by aligning it with the relevant graph data retrieved by minimum edit distance and semantic similarity calculation. The LLM then synthesizes the aligned question, selected operations, and schema to generate the final GQL query.
To address the limitations of LLMs in planning and collaborating with other LLMs, NAT-NL2GQL [252] introduces a three-agent framework. The Preprocessor agent constructs context information, including query rewriting, path linking, and the extraction of query-relevant schemas. The Generator agent, an LLM fine-tuned with NL-GQL data, generates GQL statements based on the rewritten queries and extracted schemas. The Refiner agent iteratively enhances the GQL or contextual information by leveraging error feedback from GQL execution results.
Note that, within the context of AI for Science (AI4Science), the integration of LLMs with graph data analysis has also shown significant potential and wide-ranging applications (e.g., treat polymers as graphs and predict their properties [242], [309]), which is not the primary focus of this survey.
LLM-based Semantic Analysis. Furthermore, certain jobs necessitate semantic-aware analysis, such as summarizing textual paragraphs embedded within graph nodes. Based on the adopted LLM strategies, we classify the relevant methods into retrieval-then-reasoning methods, execution-thenreasoning methods, graph task based fine-tuning methods, and agent based methods.
Retrieval-Then-Reasoning. Retrieval-then-reasoning first extracts a question-specific subgraph from the graph to identify the most relevant entities and then generates answers using LLMs. To address the challenge of a vast search space, [458] introduces a two-stage approach. First, a trainable and decoupled subgraph retriever selects a relevant subgraph based on the query. Then, reasoning is performed over the retrieved subgraph to derive the final answer. UniKGQA [193] integrates retrieval and reasoning within a unified model architecture. It comprises a semantic matching module, leveraging a pre-trained RoBERTa [266] for the semantic alignment between questions and relations in graphs, and a matching information propagation module that propagates matching signals along directed edges in graphs.
• Execution-Then-Reasoning. Execution-then-reasoning refers to the process of parsing natural language queries into executable logical forms (e.g., SPARQL) that align with the graph data, followed by reasoning based on the output of the executed program. Interactive-KBQA [424] introduces an interactive LLM QA framework with a unified SPARQL-based toolset (e.g., entity search, graph pattern search, SPARQL execution, etc.) designed to address complex queries. FlexKBQA [246] addresses the challenge of lacking high-quality annotated data in real-world scenarios. By prompting LLMs as program translators, it samples program-answer pairs from the knowledge base and generates corresponding natural language questions. The synthetic question-program-answer dataset is used to train lightweight models through execution-guided self-training, which are subsequently employed to annotate real user queries. This approach addresses the distribution shifts between synthetic and actual data, leading to significant improvements in few-shot learning scenarios.
Graph Task Based Fine-tuning Methods. InstructGLM [441] enables generative graph learning by fine-tuning an LLM and leveraging natural language descriptions of graph structures (e.g., offer the first node and the 1-/2-/3- hop neighbors’ information). InstructGraph [397] introduces a stricter code-like graph representation format which constructs entities and triples in the form of list, whose backbone LLM (LLaMA2-7B) is fine-tuned on a graph-centric corpus comprising 1.6 million instances. To mitigate the issue of hallucination, it incorporates Direct Preference Optimization (DPO) algorithm [329] for preference alignment. GraphGPT [375] enhances model performance in zero-shot scenarios by incorporating a structural information encoding module based on Graph-SAGE [166] and GCN [211]. It finetunes the projector bridging the graph encoder and the LLM decoder to align the language capabilities of the foundation LLM (Vicuna-7B) with the graph learning tasks.
Agent Based Methods. Agent-based methods involve leveraging LLM-based agents with predefined tools (e.g., human-written interfaces or graph processing library APIs) that iteratively interact with the graph data to retrieve, refine, and operate information. StructGPT [192] introduces an iterative reading-then-reasoning framework, leveraging specialized interfaces to operate on graph data. It repeatedly applies an invoke-linearize-generate procedure to derive query results. Another approach is to generate an entire reasoning path based on the query and refine it only when necessary. Readi [100] initially constructs a reasoning path and instantiates it on the graph. When execution errors occur, it collects error messages and invokes an LLM to revise the path. The final answer is inferred from the instantiated graphs.
# 3.2.2 LLM for Semi-Structured Data Analysis
Semi-structured data refers to data that are neither with strictly predefined schema like relational models nor raw data (e.g., plain text or images) [48]. Meanwhile, they still maintain part of organizational properties (e.g., tags, headers) and have hierarchical or nested representation (e.g., County - Province - City in a nested JSON).
# 3.2.2.1 Markup Language
Markup languages (e.g., XML, JSON, and HTML) are widely used for structuring and exchanging data across systems. Traditional approaches for processing these formats typically involve transforming them into structured tables or representing them as hierarchical tree structures. Leveraging the reasoning capabilities of LLMs, it becomes possible to directly extract and interpret hierarchical relationships, attributes, and nested structures from data without the need for intermediate transformations.
# 3.2.2.2 Semi-Structured Tables
Compared to structured relational data, semi-structured tables exhibit a more complex structural organization characterized by merged cells. This inherent complexity presents a significant challenge in aligning queries with the table content and structure in query answering tasks. The lack of efficient tools (usually using the openpyxl library) and representation methods (usually stored in Excel or HTML files) for handling semi-structured tables makes it more difficult to process such data.
Although research on semi-structured table analysis is limited, several studies have compiled various semi-structured table reasoning datasets, providing valuable data support. TEMPTABQA [165] consists of 11,454 question-answer pairs focused on temporal queries, while SPREADSHEETBENCH [281] presents a challenging benchmark for spreadsheet manipulation, with 912 questions derived from realworld scenarios. MiMoTable [245] incorporates reasoning across multiple sheets and files, containing 1,719 queries within 428 spreadsheets. Evaluation results on these benchmarks highlight a significant performance gap (ranging from $2 0 \%$ to $5 0 \%$ ) between state-of-the-art models and human performance, calling for further exploration in this area.
# 3.2.3 LLM for Unstructured Data Analysis
Unstructured data refers to data that lacks explicit structure, as it does not adhere to a predefined schema. Additionally, it exhibits high variability in format, length, and modality, which further complicates its processing and analysis.
# 3.2.3.1 Documents
Documents exhibit complex layouts and styles with diverse elements, including a hybrid of images, tables, charts, plain text, and formulas.
OCR-Dependent Methods. OCR-based methods refer to approaches that involve performing Optical Character Recognition on document images, followed by the integration of textual, layout, and visual features for reasoning. UDOP [376] integrates text and layout modalities within a unified encoder, dynamically fusing image patch tokens and text tokens based on their spatial information. Specifically, when the center of a text token’s bounding box falls within an image patch, the corresponding image patch embedding is added to the text token embedding, enabling a more cohesive representation of document structure. DocFormerV2 [62] preserves the integrity of layout information by employing a visual encoder. Image patches and text bounding box positions are embedded through a linear layer and added to the corresponding token embeddings as input to the T5 [331] encoder. To achieve local feature semantic alignment, the model undergoes pretraining on token-to-line (i.e., predict whether a key-value pair is on the same line or adjacent lines) and token-to-grid (i.e., predict each token located in which image grid) tasks. The T5 decoder is then incorporated to fine-tune the whole model on downstream tasks.
OCR Free Methods. However, the OCR step often introduces semantic errors, resulting in suboptimal performance. To fill this gap, OCR-free methods have emerged, directly generating the target token sequences with end-to-end multimodal LLMs [257], [407]. Based on different approaches to enhancing model understanding of textual semantics, related works can be categorized into text masked learning and visual embedded learning.
(1) Text Masked Learning. Text Masked Learning involves masking textual content within a document and training the model to predict the missing text. Pix2Struct [225] is a typical vision-encoder-text-decoder pre-trained image-to-text model designed for visual language understanding based on ViT [124]. It is pretrained to parse masked web pages into simplified HTML. The model introduces a variable-resolution input representation, rescaling input images to maximize the number of patches that can fit within the given sequence length, to prevent aspect ratio distortion. DUBLIN [49] designed multiple fine-tuning tasks (i.e., bounding box prediction based on given text, text prediction based on given bounding box, masked text generation, and query answering) to improve the generalization ability.
(2) Visual Embedded Learning. In Visual Embedded Learning, there are no specially designed training objectives. Instead, the model is directly fine-tuned on downstream tasks to enhance its understanding of textual content within images. mPLUG-DocOwl1.5 [174] introduces a spatial-aware visionto-text module designed for representing high-resolution, text-rich images. This module preserves structural information while reducing the length of visual features. It consists of a convolution layer to shorten the sequence length and a fully connected layer that projects visual features into the language embedding space. Unlike most methods that crop or resize the initial image before feeding it into a vision encoder, DocPedia [138] directly processes visual input in the frequency domain. It utilizes JPEG DCT [388] extraction to obtain DCT coefficients, which are then processed using a frequency adapter before being input into the vision encoder. This approach allows the model to capture more visual and textual information while using a limited number of tokens. The performance improvement observed in the experiment suggests that this method offers a novel approach for processing high-resolution images.
# 3.2.3.2 Program Language Analysis
Programming language analysis involves multiple levels of abstraction, including lexical analysis, parsing, and semantic analysis, each requiring distinct techniques to process source code effectively. Additionally, it must handle both local and global information, such as variable scopes, function call chains, and complex dependencies, which pose significant challenges for accurate program understanding.
LLM as Program Vulnerability Detection Tools. Recent advancements in LLMs have opened new avenues for improving vulnerability detection tools. Training LLMs based on program analysis techniques enhances their ability to understand programs at both the lexical and syntactic levels. Leveraging in-context learning through case-driven prompt engineering enhances the model’s accuracy by providing relevant examples.
Program Analysis based Training. Static and dynamic program analysis are commonly used methods for detecting vulnerabilities in programs. By assisting these processes, LLMs improve the accuracy of vulnerability detection. PDBER [271] is a model fine-tuned on CodeBERT [141] through three tasks (i.e., Predicting Masked Tokens, Predicting Statement-Level Control Dependencies, and Predicting Token-Level Data Dependencies). This enables more finegrained vulnerability analysis at the statement level. To reduce the impact of irrelevant information, [457] decomposes the control flow graph (CFG) into multiple execution paths from the entry node to the exit node. CodeBERT and a CNN are employed to capture intra-path and inter-path representations, respectively. The extracted feature vectors are then combined as a unified program representation, which serves as input to a MLP classifier for vulnerability detection.
Case-driven Prompt Engineering. Leveraging the incontext learning and few-shot learning capabilities of LLMs can significantly improve their accuracy in vulnerability detection. VUL-GPT [270] uses GPT-3.5 to generate analysis content (i.e., the program interpretation) for the input code and retrieves similar code snippets and corresponding vulnerability information through BM25 [338] or TF-IDF. The retrieved information, along with the original code and analysis, is then input into GPT to detect vulnerabilities. [492] designs various prompts, such as random code samples and retrieve-based code samples, and demonstrates that GPT-4 outperforms state-of-the-art models in vulnerability detection.
LLM-based Semantic-aware Analysis. Traditional semantic-aware tasks convert programs into ASTs [362] or graph structures [151] and train Seq2Seq models to learn program syntax, dependencies, and semantics. However, these approaches lack general knowledge, leading to limited generalization ability. By leveraging the world knowledge and few-shot learning capabilities of LLMs, the performance of tasks such as code summarization and code completion has been significantly improved.
LLM as Code Summarizer. Recent advancements in LLM-powered code summarization focus on retrieving similar code snippets and leverage LLMs’ few-shot learning capability to enhance performance. [154] retrieves similar code examples by measuring token overlap and the cosine distance between embedding vectors of code snippets. In contrast, [51] employs the BM25 algorithm and incorporates repository information, data flow information, and variable information to construct three-shot prompts. SCLA [284] further enhances code semantics in LLM prompts by preprocessing the code sample pool to extract semantic information. By simultaneously leveraging few-shot learning, it achieves state-of-the-art performance based on Gemini-1.5-Pro.
$\bullet$ LLM as Repository-Level Code Completer. Repository context (e.g., imports, related classes, etc.) plays a crucial role in code completion. Given the strong semantic understanding and generative capabilities of LLMs, how to integrate contextual information into code completion has become a key research focus. RepoFusion [357] appends the surrounding text of the target code to the repository context retrieved based on BM25, encoding and concatenating them as input to the decoder for code generation. This approach enables the model to produce context-aware code completions by leveraging both local and repository-level information. CoCoMIC [118] proposes a more robust retrieval method based on program dependency graphs. Given an incomplete program, it retrieves the most relevant context by analyzing file imports within the constructed graph. By defining the relevant context as files within a two-hop neighborhood, this approach mitigates the risk of excluding vital dependencies while avoiding the inclusion of irrelevant information. However, some researchers have found that simple retrieval methods fail to improve performance in up to 80% of cases and may even degrade performance due to the inclusion of irrelevant information [413].
As a result, Repoformer introduces a self-supervised learning approach to enable the model to accurately judge whether retrieval can improve its output quality. A new $< e o f >$ token is introduced to guide the model in determining whether context retrieval is necessary. Based on the output after $< e o f >$ token, it decides whether to generate the output directly or to perform retrieval first.
# 3.3 LLM for Data System Optimization
This section presents the application of LLM to optimize the performance of different data systems across three key tasks: (1) Configuration Tuning: selecting effective system configurations, such as database knobs and indexes; (2) Query Optimization: accelerating input SQL queries through logical rewrites and physical plan selection; (3) Anomaly Diagnosis: addressing system anomalies, such as spikes in the usage of specific system resources.
# 3.3.1 LLM for Configuration Tuning
Configuration tuning aims to identify effective configurations, such as database knobs [231], [474] and indexes [485], [487], [486], to optimize the system performance. Traditional tuning approaches, including rule-based methods and learning-based techniques with classical machine learning models, often require extensive explorations without a promising starting point [231]. Furthermore, they might result in sub-optimal configurations, despite using advanced techniques such as transfer learning [463], [402].
A key limitation of these methods is the failure to incorporate extensive domain knowledge (e.g., information from system manuals and public forum discussions) into the tuning process, relying solely on runtime feedback from benchmark evaluations to guide optimization. To address this issue, recent approaches utilize LLM with large-scale domain knowledge to enhance the tuning process via the following methods.
Tuning Task-Aware Prompt Engineering. The first method manually designs prompts with informative details (e.g., system status) to assist LLM in configuration tuning (e.g., database knobs and indexes). Some approaches further enhance this by introducing automatic prompt generation techniques or by formulating it as an optimization problem.
(1) Manually-Crafted Tuning Prompt. Existing methods design prompts that incorporate essential details (e.g., system status) tailored to the characteristics of specific tasks. In particular, the constructed prompts typically consist of the following components.
Configuration Task Instruction. To convey the overall tuning objective, existing methods specify task instructions in the prompts using chain-of-thought (CoT) and role-playbased guidance. For instance, LLMBench [243] explicitly defines the goals of three key subtasks in knob tuning: (i) knob pruning to retain the most influential knobs, (ii) model initialization to select promising knobs for warm-starting bayesian optimization, and (iii) knob recommendation to return optimal configurations for specific workloads. Similarly, LATuner [132] instructs LLM to identify critical knobs for warm-starting the tuning process and select promising knobs as training samples for boosting the sampling procedure.
Input Tuning Context. To enable LLM to effectively support the tuning process for specific workloads, existing methods enrich the tuning context with detailed information. Specifically, prompts are carefully structured to include: (i) Configuration Specifications: list of tunable knobs (e.g., names and allowable value ranges) and usage descriptions, including fixed-task demonstrations (e.g., LLMBench [243], LATuner [132]); (ii) Environment Information: covering workload and database characteristics (e.g., compressed SQL snippets with join conditions in $\lambda$ -T une [156]), as well as hardware settings (e.g., memory size and CPU core count).
• Output Tuning Requirement. To ensure accurate parsing and interpretation of configurations generated by LLM, output formats are explicitly specified in the prompt. For instance, LLMBench [243] requires that recommended knob values be returned in JSON format, while LATuner [132] enforces constraints such as excluding the use of the “None” value in the configuration output.
(2) Automatic Tuning Prompt Generation. To improve the efficiency of prompt generation for different workloads, existing methods propose the following techniques to automate the process of identifying effective prompts.
Input Specific Prompt Generation. To identify the most suitable prompts for varying tasks, existing methods automatically tailor prompt generation based on specific inputs. For example, DB-GPT [491] introduces an automatic prompt generation framework that leverages LLM to produce multiple instruction candidates, selecting the optimal ones using scoring functions associated with the performance improvement. Additionally, DB-GPT [491] and LLMIdxAdvis [473] select demonstration examples in the prompts based on semantic similarity between candidate examples and input queries, as computed by a model-based encoder.
$\bullet$ Optimization Problem Formulation. To reduce token usage and convey the most relevant context to the LLM, some methods formulate prompt generation as a cost-based optimization problem. For instance, $\lambda$ -T une [156] compresses workload representations by modeling the selection of join conditions as an integer linear programming problem, introducing binary decision variables to capture the positional relationships of different columns.
RAG Based Tuning Experience Enrichment. The second method builds an offline knowledge base from diverse external sources and performs online retrieval to provide LLM with context-specific knowledge (e.g., similar historical tuning cases). This approach addresses the limitations of direct prompting, which often yields overly generic responses lacking concrete commands and effective configurations [96].
(1) LLM Based Tuning Experience Preparation. Given that existing tuning knowledge is distributed across heterogeneous formats, LLMs are employed to construct a knowledge base by processing and integrating multi-source external experience in an offline manner. For example, GPTuner [223] prompts LLM to extract implicit knowledge, remove noisy content, and summarize relevant information from multiple sources. Additionally, it introduces a prompt ensemble algorithm that generates multiple prompts by varying the demonstration examples, aiming to mitigate hallucination issues.
(2) Semantic Based Tuning Experience Retrieval. To improve the accuracy of relevant experience retrieval, existing methods employ model-based encoders to capture semantic relationships (e.g., documents conveying similar meanings with different expressions). For instance, Andromeda [96] utilizes a Sentence-BERT encoder trained with contrastive learning to generate embeddings, which are then used to perform similarity searches across various sources, including historical queries and troubleshooting manuals.
Training Enhanced Tuning Goal Alignment. The third method introduces additional training to further refine LLMs, improving their alignment with tuning objectives. For example, DB-GPT [491] proposes techniques to facilitate effective fine-tuning, including: (i) heuristic statistical data embedding, (ii) LLM-assisted annotation of high-quality samples, (iii) contrastive learning of supplementary training data generation, and (iv) delta tuning to minimize trainable parameters while maintaining performance. Similarly, E2ETune [177] fine-tunes LLMs (e.g., Mistral-7B) using training data comprising ${ } ^ { \cdot } ( w o r k l o a d ) \to ( c o n f i g u r a t i o n ) ^ { \prime }$ pairs, where diverse workloads are generated via GPT-4 prompting and optimal configurations are identified using the HEBO algorithm [112].
# 3.3.2 LLM for Query Optimization
Query optimization aims to accelerate SQL execution through logical (e.g., query rewriting) and physical (e.g., join order and plan selection) enhancements. Traditional logical optimization relies on predefined rewrite rules or learning-based approaches to determine rule application order, while physical optimization employs heuristic algorithms using statistical data or learning-based techniques leveraging query plan features. However, these approaches often overlook external SQL optimization knowledge, limiting their effectiveness and generalizability across diverse SQL patterns.
To address these limitations, recent studies investigate the use of LLM to directly rewrite input SQL queries or determine optimal rule application sequences for logical optimization. They also explore leveraging LLM to select optimal query execution plans for physical optimization, drawing on the extensive SQL optimization knowledge encoded within the model. These methods can be broadly categorized as follows.
Optimization-Aware Prompt Engineering. The first method directly employs LLMs to perform query optimization using well-structured prompts composed of two key components: (i) manually crafted templates enriched with taskspecific details (e.g., explicit task instructions), and (ii) relevant optimization examples automatically selected to more effectively guide the optimization process.
(1) Manually-Crafted Optimization Prompt. Existing methods construct prompts with the following components to facilitate the query optimization task.
Optimization Task Instruction. To clarify the optimization objective and guide LLMs to produce specific optimization actions, detailed task instructions are included in the prompts. For logical query optimization, some methods instruct LLMs to directly generate equivalent rewritten queries with improved performance (e.g., DB-GPT [491], GenRewrite [261], and LITHE [363]), while others ask them to determine the optimal sequence of rewrite rule applications for a given query (e.g., LLM- $R ^ { 2 }$ [248] and R-Bot[369]). For physical query optimization, some approaches prompt LLMs to generate complete query plans with specified operators and join orders (e.g., LLM-QO [196]), while others instruct LLMs to generate optimization hints or select the most effective plan from a set of candidates (e.g., LLMOpt [438]).
• Input Optimization Context. To enable effective query optimization for specific workloads, existing methods augment prompts with additional contextual information to better inform LLMs. This includes: (i) Database Statistics: column selectivity [363], histograms, distinct value counts, and estimated cardinalities [196]; (ii) Rule Specifications: a list of applicable rewrite rules accompanied by usage descriptions (e.g., GenRewrite [261] presents natural language hints as the rules) and illustrative examples [248].
Output Optimization Requirement. To ensure that the optimizations produced by LLMs are valid and easily processed for downstream use, some methods explicitly define output formatting requirements within the prompts. For example, LLM- $R ^ { 2 }$ enforces that selected rewrite rules be returned in the format “rules selected: [rule names]” [248], while LLM-QO specifies that the generated query plan should follow the “join operator(table1, table2)” format [196].
(2) In-Context Learning with Optimization Example. Rather than relying on fixed examples to illustrate how LLM should perform optimization, some methods automatically retrieve examples that are semantically similar to the input query to provide more effective guidance. For instance, LLM- $R ^ { 2 }$ [248] introduces a contrastive representation model to encode query plans based on features such as operators, cardinalities, and costs, and retrieves a set of high-quality demonstrations, i.e., successfully optimized rewritten queries. RAG Based Optimization Experience Enrichment. The second method adopts the retrieval-augmented generation (RAG) paradigm to equip LLM with relevant contextual information for targeted optimization of specific queries. It constructs and retrieves optimization knowledge from multiple sources that are semantically related to the input query.
(1) LLM Based Optimization Experience Preparation. To consolidate optimization experience from multiple sources, existing methods introduce an offline preparation pipeline that leverages LLM to process and integrate data into a unified format. For example, R-Bot [369] employs LLM to generate rewrite rule specifications by (i) summarizing rule code within a hierarchical structure and (ii) extracting information from structured documentation blocks. It further uses LLM to standardize the resulting specifications, explicitly outlining application conditions and detailed rewrite transformations.
(2) Hybrid Optimization Experience Retrieval. To more accurately identify relevant optimization experiences, both structural and semantic characteristics of the input queries are considered during similarity search. For instance, R-Bot [369] introduces a hybrid retrieval approach that computes similarity using concatenated embeddings capturing structural features (e.g., rewrite rule explanations) and semantic representations (e.g., query template structures). Based on the retrieved experience, R-Bot employs a step-by-step LLM-driven rewrite process, further enhanced through a self-reflection mechanism to improve rewrite quality.
Training Enhanced Optimization Improvement. The third method either uses LLM outputs to train smaller models or fine-tunes LLMs on task-specific data to support various query optimization tasks (e.g., query plan generation). For instance, LLMSteer [53] uses LLM-generated embeddings to train a classifier for selecting optimal hints of the input SQL. LLM-QO [196] fine-tunes LLMs to generate execution plans directly through a two-stage pipeline: (i) Query Instruction
Tuning (QIT) for producing valid plans; (ii) Query Direct Preference Optimization (QDPO) for distinguishing highquality plans. The fine-tuning data is structured as “(query, task instruction, auxiliary information such as schema and statistics, demonstration)” paired with the corresponding efficient execution plan. LLMOpt [438] fine-tunes two models: (i) LLMOpt(G), which generates candidate hints, and (ii) LLMOpt(S), which selects the optimal hint as a list-wise cost model. The fine-tuning data is structured as “(query, statistics such as histograms) $$ (optimal hint)” for LLMOpt(G) and “(query, statistics such as histograms, candidate hints) (index of optimal hint)” for LLMOpt(S).
# 3.3.3 LLM for Anomaly Diagnosis
Anomaly diagnosis focuses on analyzing root causes and identifying recovery solutions for anomalies (e.g., spikes in system resource usage) during the system runtime, such as databases. Traditional rule-based methods often fail to accurately identify root causes across diverse scenarios, while classical machine learning models (e.g., random forests) cannot generate comprehensive reports with detailed recovery solutions.
Recent studies demonstrate that LLMs, with their advanced textual understanding and reasoning capabilities, can effectively pinpoint root causes and generate detailed diagnosis reports with recovery solutions in various formats. These LLM-based approaches can be categorized as follows.
Manually Crafted Prompts for Anomaly Diagnosis. The first method emulates the reasoning process of a human DBA, which involves referencing essential statistical information and conducting an in-depth analysis during diagnosis. The information is incorporated into well-structured prompts to enhance diagnosis accuracy. For example, DBG-PT [155] utilizes LLM to detect query execution slowdowns caused by changes in query plans, using prompts that include: (i) a summary of plan differences, (ii) a request for feasible configuration recommendations, and (iii) a specification of the reasoning process with output formatted in JSON format.
RAG Based Diagnosis Experience Enrichment. The second method adopts retrieval-augmented generation (RAG) paradigm to provide LLM with relevant diagnosis knowledge, leveraging two key components: a knowledge base and a retriever. For instance, D-Bot [490], [489] enhances database anomaly diagnosis by preparing a corpus of documents and tools considering the hierarchical document structure, then using a fine-tuned Sentence-BERT encoder to retrieve relevant materials and guide LLM via prompts enriched with the retrieved content. ByteHTAP [425] supports LLM-based diagnosis of query performance regressions in HTAP systems by first constructing a knowledge base of historical queries and their associated performance explanations. It then employs an enhanced tree-CNN classifier to encode and retrieve relevant plan pairs. The retrieved information is incorporated into prompts that include: (i) background information (e.g., key differences among HTAP system engines), (ii) a task description (e.g., retrieved diagnosis knowledge with explicit inputoutput specifications), and (iii) additional user-provided context (e.g., recent index changes).
Multi-Agent Mechanism for Collaborative Diagnosis. The third method adopts an agent-based diagnosis framework, where specialized agents with distinct responsibilities collaborate to improve diagnosis accuracy and efficiency. For example, D-Bot [490], [489] orchestrates multiple domainspecific LLM agents, each aligned with a cluster of preprocessed diagnosis knowledge, to support precise anomaly diagnosis in databases. These agents, coordinated by a chief agent, conduct multi-step root cause analysis via a treesearch algorithm. Similarly, Panda [359] emulates experienced database engineers by leveraging LLM agents across five functional components: (i) question verification to eliminate irrelevant queries, (ii) grounding to provide necessary input query context, (iii) verification to ensure diagnosis accuracy and source attribution, (iv) feedback integration to incorporate user input, and (v) affordance assessment to estimate the performance impact of generated solutions.
Localized LLM Enhancement via Specialized FineTuning. The last method employs specialized fine-tuning strategies for localized LLMs of modest scale (e.g., 6B-14B), leveraging distilled knowledge to approximate the outputs of larger models while achieving comparable performance. For instance, D-Bot [490] applies multi-task fine-tuning to improve the diagnosis capabilities of localized LLMs. Specifically, three models (i.e., Llama2-13B, CodeLlama-13B, and Baichuan2-13B) are fine-tuned to replicate the diagnosis results generated by the GPT-4-powered D-Bot. The fine-tuning dataset consists of samples covering D-Bot diagnosis workflows across five sub-tasks (e.g., tool invocation), along with associated prompts and historical dialogue messages.
# Practices of LLMs for Data Management
Alibaba Cloud [5] has integrated Text-to-SQL features into its BI platform, facilitating NL queries over structured datasets. Amazon Nova [3] employs automated document processing to extract structured information from diverse unstructured sources. In terms of data systems, PawSQL [41], an advanced query optimization platform, offers both SQL rewriting and index recommendation capabilities, adopted by over 10,000 professionals. Database diagnosis also thrives on a robust ecosystem. For instance, DBDoctor [35], compatible with mainstream databases, delivers kernel-level performance diagnostics for comprehensive system analysis and optimization.
# 4 Challenges and Future Directions 4.1 Data Management for LLM
# 4.1.1 Task-Specific Data Selection for Efficient Pretraining
In LLM pre-training, vast amounts of general data are typically used, but much of this data may not be relevant to the target task. The inclusion of irrelevant data not only increases training time but also impedes the model’s adaptability to specific tasks. For instance, when training a model for the medical domain, unrelated data sources such as news articles and social media posts may hinder the learning of domain-specific knowledge. Consequently, the challenge lies in automatically selecting task-relevant data while discarding irrelevant information during pretraining. Currently, most approaches rely on hand-crafted filtering rules or fixed labeled datasets for data selection, lacking dynamic strategies that adapt to the model’s evolving task-specific needs. Exploring methods to automatically select relevant data and discard irrelevant data during pre-training represents a promising avenue for improving task adaptability and training efficiency.
# 4.1.2 Optimizing Data Processing Pipelines
Currently, the construction of data processing pipelines for LLMs relies heavily on experience and experimentation. For instance, in building the FineWeb dataset, decisions such as whether to use the WET or WARC format for text extraction from CommonCrawl, or whether to apply a global MinHash approach for deduplication or perform it separately for each snapshot, are made only after training models and benchmarking their performance. However, this experimental methodology is resource-intensive. In the case of FineWeb, over 70 models with 1 billion parameters were trained, consuming a total of 80,000 H100 GPU hours. To improve the efficiency of these pipelines, future research should focus on developing data-driven methods that can predict optimal preprocessing configurations. in advance, reducing the reliance on costly trial-and-error approaches. This would not only minimize computational costs but also accelerate the development of high-quality datasets for LLMs.
# 4.1.3 LLM Knowledge Update and Version Control
In fast-evolving domains (e.g., healthcare, finance, law), knowledge is constantly updated. To ensure the reliability of LLMs, the data used for training and fine-tuning must be up-to-date. Delays in incorporating the latest knowledge can result in outdated or harmful outputs, particularly in fields like medicine where guidelines frequently change. While there have been various approaches to data synthesis and augmentation, little attention has been given to efficiently managing rapid knowledge updates or resolving contradictions when new information conflicts with older data. Existing systems often rely on static datasets, which are problematic in dynamic sectors. Although platforms like ChatGPT and Deepseek allow LLMs to search the web, this approach may not always guarantee accuracy or relevance, leading to suboptimal results. A more effective solution would involve a platform that facilitates the creation, sharing, and version control of datasets with real-time knowledge updates. By leveraging community-driven contributions, this platform could enable users to synthesize and share datasets using customizable methods, such as LLM-generated prompts from documents or websites, offering continuous, high-quality updates and improving the overall accuracy and reliability of LLMs.
# 4.1.4 Comprehensive Dataset Evaluation
The performance enhancement of models is closely tied to the use of ’high-quality’ datasets. However, determining what constitutes a high-quality dataset remains a challenge. Typically, the quality of a dataset can only be inferred after training and evaluating a model, which makes the process indirect and resource-intensive. When a dataset’s quality is subpar, it can lead to significant computational overhead and inefficiencies. While existing research [393] has proposed a modelagnostic method for evaluating datasets across three aspects: reliability, difficulty, and validity. These dimensions alone do not fully capture a dataset’s quality. The current framework falls short of providing a comprehensive evaluation that aligns with the model’s capabilities and performance improvements. Therefore, a promising direction for future research is the development of a robust dataset evaluation system that does not rely on model training. This system should provide consistent quality scores that directly correlate with model performance enhancements, enabling more efficient dataset selection and use without the need for exhaustive training cycles.
# 4.1.5 Hybrid RAG Indexing and Retrieval
Currently, there lacks a single database that integrates fulltext, vector, knowledge graph, and structured search interfaces into a cohesive indexing and retrieval engine for Retrieval-Augmented Generation (RAG) training. While systems like Elasticsearch [36] excel in full-text and vector search, and LightRAG [164] has introduced advanced vector and graph processing, these solutions remain siloed. They lack a unified platform designed specifically for hybrid RAG, where multiple indexing and search mechanisms coexist to support efficient downstream applications. Although emerging platforms like AutoRAG [209] provide frameworks for constructing RAG pipelines, they focus on workflow management, model integration, and automation rather than offering a fully integrated database with indexing and retrieval engines. A promising direction for future RAG data serving is the development of an integrated platform that provides seamless indexing and retrieval for diverse data types, while also integrating data serving features such as knowledge filtering and re-ranking [47], thereby improving the efficiency and flexibility of RAG applications.
# 4.2 LLM for Data Management
# 4.2.1 Unified Data Analysis System
One of the major challenges in LLM for Data Analysis is the absence of a unified system capable of handling diverse data types. Currently, analyzing different data formats often requires designing task-specific models separately. The most straightforward approach to enabling a system to process all types of data is to integrate these models into a single framework. However, this leads to prohibitively high deployment and maintenance costs due to the need to manage multiple models simultaneously. A more promising direction is to develop a model that can flexibly accommodate various data inputs and user requirements while supporting the analysis of structured, semi-structured, and unstructured data. Such a system would establish a paradigm for LLM for Data Analysis at the system level and offer a generalized capability for analyzing data across different structural types, thereby facilitating data automation.
# 4.2.2 Data Analysis with Private Domain Knowledge
Another challenge in leveraging LLMs for data analysis is the effective utilization of private domain knowledge. Current approaches primarily rely on RAG to retrieve relevant knowledge or fine-tune models on domain-specific datasets. However, these methods struggle when dealing with novel or highly complex domain knowledge. For example, in Text-to-SQL tasks involving large-scale databases with 10,000 columns and
1,000,000 rows, where each column is associated with specific domain knowledge, existing techniques often fail to generalize effectively. The lack of datasets that explicitly incorporate domain knowledge further exacerbates this issue, making it difficult to meet the demands of real-world industrial applications. Consequently, developing more advanced mechanisms for integrating domain knowledge into LLMs remains a critical open research problem.
# 4.2.3 Representing Non-Sequential and Non-Textual Data
Current LLM-based approaches typically transform nonsequential and non-textual data into serialized textual formats to align with the input requirements of LLMs [129], [196], [438]. While this enables basic compatibility, it overlooks the original structural semantics of the data and can lead to significant information loss in downstream tasks. For instance, in data manipulation and analysis, relational tables (originally structured as two-dimensional matrices) are typically flattened into multiple serialized sequences, obscuring inherent row-column relationships [78], [74], [319]. Similarly, in system optimization tasks, crucial statistical signals such as column selectivities and histograms are either omitted or naively encoded as plain texts, limiting their utility in guiding optimization decisions [156], [132]. Consequently, a promising future direction is to develop more expressive and task-aware representations that preserve the structural and statistical integrity of such data. This includes leveraging multi-modal LLMs or designing tailored encoding strategies that maintain the uniqueness of these data types, thereby enabling more effective and semantically informed LLM applications.
# 4.2.4 Efficient LLM Utilization Under Budget Constraints
While LLMs have shown strong potential across data manipulation, analysis, and system optimization tasks, their high computational cost and latency pose challenges for real-time or large-scale applications [196], [53]. For example, relying solely on LLMs is impractical for processing tens of millions of rows in relational table analysis due to prohibitive resource demands [432], [304]. Similarly, current LLM-based query optimizers often require minutes per query, far exceeding the millisecond-level efficiency of traditional statistical methods [369], [248]. Therefore, a promising direction is to develop hybrid strategies that integrate LLMs with traditional techniques or to devise scheduling mechanisms that allocate tasks across multiple LLMs based on cost-performance trade-offs. Such approaches can enhance the practicality and scalability of LLM-based systems under real-world budget constraints. | The integration of large language model (LLM) and data management (DATA) is
rapidly redefining both domains. In this survey, we comprehensively review the
bidirectional relationships. On the one hand, DATA4LLM, spanning large-scale
data processing, storage, and serving, feeds LLMs with high quality, diversity,
and timeliness of data required for stages like pre-training, post-training,
retrieval-augmented generation, and agentic workflows: (i) Data processing for
LLMs includes scalable acquisition, deduplication, filtering, selection, domain
mixing, and synthetic augmentation; (ii) Data Storage for LLMs focuses on
efficient data and model formats, distributed and heterogeneous storage
hierarchies, KV-cache management, and fault-tolerant checkpointing; (iii) Data
serving for LLMs tackles challenges in RAG (e.g., knowledge post-processing),
LLM inference (e.g., prompt compression, data provenance), and training
strategies (e.g., data packing and shuffling). On the other hand, in LLM4DATA,
LLMs are emerging as general-purpose engines for data management. We review
recent advances in (i) data manipulation, including automatic data cleaning,
integration, discovery; (ii) data analysis, covering reasoning over structured,
semi-structured, and unstructured data, and (iii) system optimization (e.g.,
configuration tuning, query rewriting, anomaly diagnosis), powered by LLM
techniques like retrieval-augmented prompting, task-specialized fine-tuning,
and multi-agent collaboration. | [
"cs.DB",
"cs.AI",
"cs.CL",
"cs.IR",
"cs.LG"
] |
1 Introduction 3
2 Related Work 4
# 3 Method 5
# 3.1 Supervised Fine-Tuning 5
3.1.1 Prompt collection and filtering 5
3.1.2 Scaling of SFT Data . 5
3.2 Reinforcement Learning 6
3.2.1 Overview 6
3.2.2 Data curation 6
3.2.3 Training process 7
# 4 Evaluation 8
# 4.1 Benchmark 8
4.2 Baselines 8
4.3 Main Results 9
4.4 SFT Analyses 9
4.4.1 Scaling of SFT data consistently improves performance 9
4.4.2 Which data scaling factor has larger impact 10
4.4.3 Performance improves progressively over epochs 10
# 4.5 RL Analyses 11
4.5.1 RL starting from different SFT models 11
4.5.2 How training temperature affects the progress of RL 11
4.5.3 At which stage should we apply overlong filtering? 12
4.5.4 Importance of Stage-1 (8K) 13
4.5.5 How long should we train Stage-1 14
4.5.6 Math-only RL significantly improves code reasoning 15
4.5.7 RL improves upon the SFT model in terms of pass $@ \mathrm { K }$ even when K is large 15
4.5.8 RL improves over strong SFT model by solving hard problems 15 | In this work, we investigate the synergy between supervised fine-tuning (SFT)
and reinforcement learning (RL) in developing strong reasoning models. We begin
by curating the SFT training data through two scaling strategies: increasing
the number of collected prompts and the number of generated responses per
prompt. Both approaches yield notable improvements in reasoning performance,
with scaling the number of prompts resulting in more substantial gains. We then
explore the following questions regarding the synergy between SFT and RL: (i)
Does a stronger SFT model consistently lead to better final performance after
large-scale RL training? (ii) How can we determine an appropriate sampling
temperature during RL training to effectively balance exploration and
exploitation for a given SFT initialization? Our findings suggest that (i)
holds true, provided effective RL training is conducted, particularly when the
sampling temperature is carefully chosen to maintain the temperature-adjusted
entropy around 0.3, a setting that strikes a good balance between exploration
and exploitation. Notably, the performance gap between initial SFT models
narrows significantly throughout the RL process. Leveraging a strong SFT
foundation and insights into the synergistic interplay between SFT and RL, our
AceReason-Nemotron-1.1 7B model significantly outperforms
AceReason-Nemotron-1.0 and achieves new state-of-the-art performance among
Qwen2.5-7B-based reasoning models on challenging math and code benchmarks,
thereby demonstrating the effectiveness of our post-training recipe. We release
the model and data at: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
# 1. Introduction
Earth observation satellites capture our planet through different lenses, each offering distinct spectral, spatial, and temporal configurations. Many satellites are being launched with diverse spectral, spatial and temporal resolutions. While the first Earth observation satellite has been launched in 1972, more than half of them have been launched since 2020 (with 105 planned launches in 2025)[1]. Some satellites prioritize rich spectral information across dozens of bands but sacrifice spatial detail (e.g., MODIS), while others offer fine-grained spatial resolution with fewer spectral bands (e.g., Sentinel-2). These particularities are double-edged: they provide complementary insights for applications such as drought monitoring [2], where MODIS’s frequent coverage is combined with Sentinel-2’s spatial detail to detect field-scale crop stress, as well as plant species classification [3] and crop mapping [4]. Yet traditional vision models, designed for uniform input formats, struggle to accommodate this heterogeneity. Recent work has begun to incorporate prior knowledge such as resolution or wavelengths into learning pipelines [5, 6]. These methods often embed spatial resolution and spectral information through specialized modules within their model architectures. While these methods are suitable for some aspects of heterogeneity, most of them still rely on modality-specific encoders and patch representations, limiting their flexibility to unseen sensors configurations, often requiring costly pre-training steps [7].
We propose a different solution: instead of proposing a specific module for each type of input modality, we adapt the representation itself. Atomizer introduces a unified token-based framework that decomposes each observation into its most elementary unit, the reflectance of the band of a given pixel. Each token is enriched with acquisition-specific metadata, including spatial resolution, central wavelength, and spectral bandwidth. This atomic representation preserves all contextual information while enabling a simple, consistent framework which can be applied to a new optical modality that has a different combination of resolution, image size and spectral bands than those seen at training time. Atomizer builds on Perceiver [8], taking the core idea, the breaking down of images to a set of pixels, to the extreme, by mapping the set of scalars that compose an image into a compact latent space via cross-attention. This enables the model to handle inputs of arbitrary spatial size and channel depth. Metadata is encoded using structured strategies: Fourier features [9] for positional encodings, and radial basis functions for spectral properties. This flexible framework allows each type of prior knowledge to be explicitly represented, making it straightforward to incorporate expert domain insights into the encoding process. As a result, Atomizer can reason across heterogeneous satellite observations without requiring architectural changes or retraining for new modalities. To test Atomizer’s generalization capabilities, we design experiments that simulate real-world scenarios where models must handle data from entirely different satellites than those used during training. Our modality-disjoint protocol ensures that training and test sets contain observations from distinct satellite configurations with different spatial dimensions, resolutions, and spectral band combinations. Under these conditions, conventional architectures show significant performance degradation, while Atomizer maintains consistent accuracy even when processing previously unseen modality combinations. This resilience to varying input characteristics demonstrates Atomizer’s potential for operational Earth observation systems where sensor configurations continuously evolve. Our main contributions are:
• We introduce Atomizer, a token-based architecture that represents remote sensing observations at their native scale and structure, eliminating the need for retraining as new satellite missions emerge.
• We design a comprehensive metadata encoding scheme using Fourier features, positional encodings, and non-uniform RBFs to capture acquisition-specific context.
• We establish a rigorous modality-disjoint evaluation protocol, demonstrating Atomizer’s ability to generalize across unseen sensor types and configurations.
# 2. Related Work
Addressing Resolution Variability. A limitation of most computer vision approaches for Earth observation, such as those based on convolutional neural networks (CNN) or Vision Transformers (ViT) [10], is their inability to effectively handle varying Ground Sample Distances (GSDs) at inference time. The fixed patch size selected during pre-training forces a trade-off. In highresolution images, small patches lead to significant computational overhead, while in low-resolution, larger patches risk discarding critical fine-grained details. Several recent methods have attempted to address this resolution variability by incorporating GSD as prior knowledge. SenPa-MAE [11] incorporates resolution as an explicit feature vector generated through multi-layer perceptrons, which is then added to the patch embeddings. Other approaches encode resolution information directly within the positional embeddings associated with each patch; this approach is taken by ScaleMAE [6, 12, 5], allowing these models to adapt feature extraction based on input resolution. FlexiMo [7] takes a different approach by applying bilinear interpolation to both raw image patches and patch embedding weights, enabling dynamic adjustment to varying spatial resolutions. While these methods improve performance across different GSD configurations, they remain constrained by their reliance on rigid patch-based tokenization.
Incorporating Spectral Heterogeneity. Unlike natural images, satellite data contains spectral information that varies significantly across sensors [13], with different bands capturing distinct portions of the electromagnetic spectrum defined by their central wavelengths and bandwidths. Therefore, incorporating spectral configuration as prior knowledge is essential for effectively processing heterogeneous remote sensing data. Several approaches have emerged to address this challenge. FlexiMo [7] incorporates spectral information by using central wavelength to generate convolution kernels dynamically, enabling adaptation to varying channel configurations. Galileo [12] splits input bands into semantically cohesive groups (e.g., RGB bands in Sentinel-2), enabling the network to model relationships between spectrally related channels more effectively. However, this approach introduces additional design complexity in determining optimal channel groupings and lacks flexibility when confronted with novel spectral configurations not encountered during training. SenPa-MAE [11] embeds spectral responses as feature vectors added to patch embeddings, akin to positional encodings. While these methods improve handling of spectral variability, they typically require complex architectural modifications and still struggle to process truly heterogeneous observations with arbitrary band combinations without retraining.
Handling Variable Spatial Dimensions. Beyond resolution and spectral variations, remote sensing models must also handle input images of varying spatial dimensions. Traditional vision transformers require fixed-size inputs, necessitating resizing or cropping that can distort spatial relationships or lose information. Several approaches have been proposed to address this limitation. FlexiMo and Galileo [7, 12] simply resize the patch embedding weights to fit with the dimensions of the input. AnySat [5] takes a different approach by using fixed patch sizes that represent consistent ground distances in meters rather than pixel counts, allowing the model to maintain physical scale awareness across different resolution images. As a more significant departure from patch-based paradigms, Presto [14] eliminates patches entirely by encoding each pixel individually. While these approaches show strong results for handling varying input shapes, they typically require complex architectural modifications or introduce computational inefficiencies when processing high-resolution inputs.
Towards Modality-Agnostic Architectures. While the approaches described above make progress in handling specific aspects of satellite data, most of them rely on complex modules added within the ViT framework, gradually increasing architectural complexity with each new modality characteristic. This approach contradicts the principle of Occam’s razor that simpler solutions are preferable. Instead of further complicating existing architectures, we draw inspiration from Perceiver [8], a model explicitly designed to process any modality: audio, video, images, through a unified framework. Perceiver addresses the computational scaling challenges of attention mechanisms by mapping arbitrary-sized inputs to a compact latent representation through cross-attention, enabling it to handle diverse input types without specialized encoders. By enriching inputs with modality-specific metadata through Fourier features [9] rather than architectural modifications, Perceiver maintains a consistent architecture across modalities. Our approach leverages this insight while introducing a tokenization scheme specifically designed for the unique challenges of remote sensing data.
# 3. Methodology
Rather than developing increasingly complex architectural adaptations to accommodate heterogeneous satellite data, we propose a different approach that rethinks the input representation itself. Inspired by the Perceiver model [8], we bypass the need for fixed input formats by representing each observation as a fine-grained token. Specifically, we construct a token for each band of each pixel. These tokens include not only the raw band value, but also metadata such as spatial resolution, wavelength, and bandwidth, capturing both the content and context of each observation. To encode these attributes, we employ two different basis functions: (i) Fourier features for the band of value (e.g. reflectance) and the positional encoding; (ii) Radial basis functions for spectral properties such as wavelength and bandwidth. Our token-based design decouples the model from constraints such as fixed input shapes or specific acquisition schedules. By feeding the model a flat list of tokens, we can process input images of arbitrary resolution, spatial extent and spectral composition. This enables training a single encoder across diverse remote sensing datasets without requiring resampling, interpolation, or modality-specific adaptations.
Figure 1. Top: Atomizer’s token-based representation. Each spectral band of each pixel is decomposed into a token that encodes multiple attributes: reflectance (band value), wavelength, bandwidth, resolution, and position. Bottom: Diagram of the used architecture.
# 3.1 Token Construction
To represent the highly heterogeneous nature of remote sensing data, we build a token for every spectral band of every pixel. Each token is designed to capture not only the observed reflectance or radiance value, but also metadata that contextualizes the observation.
Formally, for a given image $I$ with spatial coordinates $( x , \gamma )$ at resolution $r$ and spectral band $b$ , we define the token ${ \pmb z } _ { x \gamma b }$ as:
$$
\begin{array} { r } { \mathbf { z } _ { x \gamma b } = \mathrm { C o n c a t } \left( \Phi _ { I } ( I _ { x \gamma b } ) , \Phi _ { \mathrm { r e s } } ( r ) , \Phi _ { \lambda } ( \lambda _ { b } , \Delta \lambda _ { b } ) \right) } \end{array}
$$
Where $I _ { x \gamma b }$ is the raw band value, $\Phi _ { \mathrm { r e s } } ( \boldsymbol { r } )$ encodes spatial resolution and the position of the band value within its original image, and $\Phi _ { \lambda } ( \lambda _ { b } , \Delta \lambda _ { b } )$ encodes the spectral properties: central wavelength $\lambda _ { b }$ and bandwidth $\Delta \lambda _ { b }$ . This formulation allows us to make use of the metadata of each observation, transforming raw image bands into a structured, interpretable, and learnable representation.
Fourier Features. To encode continuous scalar metadata (such as time and resolution), we use Fourier features, also known as sinusoidal position encodings [9]. These project a scalar input into a higher-dimensional space using sine and cosine functions, allowing the model to capture both lowand high-frequency variations.
Let $\tilde { x } , \in [ 0 , 1 ]$ be a normalized scalar, $L$ the number of frequency components and $f _ { \mathrm { m a x } }$ the maximum frequency component. The Fourier embedding function $\gamma ( \cdot )$ is defined as:
$$
\gamma ( \tilde { x } ; L , f _ { \mathrm { m a x } } ) = \left[ \sin ( \pi f _ { 1 } \tilde { x } ) , \cos ( \pi f _ { 1 } \tilde { x } ) , \dots , \sin ( \pi f _ { L } \tilde { x } ) , \cos ( \pi f _ { L } \tilde { x } ) \right]
$$
Where the $L$ frequencies $f _ { i }$ are linearly spaced between 1 and $f _ { \mathrm { m a x } }$ .
Reflectance encoding $\Phi _ { I }$ : the reflectance of the band $b$ at position $x , \gamma$ within the image is encoded with using Fourier features, i.e. $\Phi _ { I } ( I _ { x \gamma b } ) = \gamma ( I _ { x \gamma b } ; L , f _ { \mathrm { m a x } } )$ .
Spatial resolution encoding $\Phi _ { \mathrm { r e s } }$ : To encode spatial resolution and position information, we implement a resolution-aware positional encoding scheme. This approach builds upon Fourier features while incorporating ground sampling distance (GSD) as an explicit factor in the encoding.
For each pixel at coordinates $( x , \gamma )$ with resolution $g$ , we first normalize the coordinates relative to the image center:
$$
x _ { d } = \frac { x - x _ { \mathrm { c e n t e r } } } { w / 2 } , \quad \gamma _ { d } = \frac { \gamma - \gamma _ { \mathrm { c e n t e r } } } { h / 2 }
$$
Where $\boldsymbol { \nu }$ and $h$ are the width and height of the image, resulting in normalized coordinates $x _ { d } , \gamma _ { d } \in \left[ - 1 , 1 \right]$ . We then compute the resolution-modulated Fourier features for each coordinate as $\begin{array} { r } { \Phi _ { \mathrm { r e s } } ( x _ { d } , \gamma _ { d } , g ) = \mathrm { C o n c a t } ( \dot { \gamma ( } x _ { d } \frac { g } { G } ; L , f _ { \operatorname* { m a x } } ) , \gamma ( \gamma _ { d } \frac { g } { G } ; L , f _ { \operatorname* { m a x } } ) ) } \end{array}$ , where $G$ is a reference GSD value chosen as a normalization constant, $g$ is the ground sampling distance (resolution) of the current band in meters per pixel, and $f _ { k }$ is a set of frequencies linearly spaced between 1 and $f _ { \mathrm { m a x } }$ . The ratio $\frac { g } { G }$ modulates the frequency based on resolution, allowing the model to differentiate between fine and coarse resolution inputs. We concatenate the original position of the pixel to the resulting positional encoding. This encoding yields a vector of size $2 K + 1$ , where $K$ is the number of frequency components. The modulation of frequencies by the resolution ratio $\textstyle { \frac { g } { G } }$ enables the model to represent the same spatial location differently based on resolution, providing an implicit understanding of scale. At finer resolutions $( \operatorname { s m a l l } g )$ , the encoding varies more gradually across the image, while at coarser resolutions $\left( { \mathrm { l a r g e } } g \right)$ , the encoding changes more rapidly, reflecting the different levels of detail available at each resolution.
Spectral encoding $\Phi _ { \lambda }$ : To encode the spectral characteristics of each band, defined by its central wavelength $\lambda$ and bandwidth $\mu$ . We use a radial basis function (RBF) encoding scheme based on Gaussian kernels.
We define a set of $k$ Gaussian functions over the spectral domain:
$$
G = \left\{ { \mathcal N } ( \mu _ { i } , \sigma _ { i } ) ; i = 0 , \ldots , k \right\}
$$
Unlike typical uniform RBF placements, our Gaussians are strategically distributed: we allocate more narrow basis functions in spectral regions where many sensors and modalities operate (e.g., between $4 0 0 \mathrm { n m }$ and $8 0 0 \mathrm { n m }$ in the visible range). Fewer and wider Gaussians are used in sparsely populated spectral zones. This non-uniform distribution ensures higher resolution where it’s most needed, enabling the model to distinguish between closely spaced bands. Given a band defined by a central wavelength $\lambda$ and bandwidth $\mu$ , we compute its spectral support as:
$$
P _ { \operatorname* { m i n } } = \lambda - \frac { \mu } { 2 } , \quad P _ { \operatorname* { m a x } } = \lambda + \frac { \mu } { 2 }
$$
We uniformly sample wavelengths within this interval to form the set $P$ . For each Gaussian $i$ , we compute its maximum activation over $P$ :
$$
\mathrm { f e a t u r e s } _ { i } = \operatorname* { m a x } _ { \lambda ^ { \prime } \in { \cal P } } \exp \left( - \frac { ( \lambda ^ { \prime } - \mu _ { i } ) ^ { 2 } } { 2 \sigma _ { i } ^ { 2 } } \right)
$$
This yields a $k$ -dimensional feature vector where each component reflects how much the band overlaps with a particular Gaussian. The final spectral encoding is then normalized:
$$
\Phi _ { \lambda } ( \lambda , \mathsf { \mu } ) = \frac { \mathsf { f e a t u r e s } ( \lambda , \mathsf { \mu } ) } { | \mathrm { 0 f e a t u r e s } ( \lambda , \mathsf { \mu } ) | \mathrm { 0 } _ { 2 } }
$$
By jointly encoding the bandwidth alongside the central wavelength, this representation allows the model to differentiate between bands with similar center wavelengths but different spectral widths. This is especially important in the visible spectrum, where sensors like Sentinel-2, MODIS, and Landsat 8 may use overlapping wavelengths but with significantly different bandwidths. This enables the model to better align and reason across sensors, even when they operate in overlapping spectral regions.
# 3.2 Architecture
The encoded tokens are processed using a Perceiver-style architecture [8]. Our model ingests the unordered set of tokens and maps them into a compact latent representation through cross-attention mechanisms. Specifically, a set of $L$ learnable latent tokens attends to the input tokens, capturing information from the entire token set regardless of its size or structure.
To generate prediction logits from this latent space, we employ attention pooling as described in [15], where the latent vectors are aggregated via attention mechanisms to produce classification outputs. A challenge in our approach is the potentially large number of tokens generated for high-resolution or spectrally-rich images. To address this computational limitation, we implement a token pruning strategy during training. Before each cross-attention operation, we randomly remove a proportion $p$ of the input tokens. This masking is applied independently at each layer, forcing the model to learn robust representations despite seeing only a subset of tokens at each step. In our experiments, we set $p = 0 . 5$ , which substantially reduces memory requirements while maintaining performance. This pruning approach, combined with the Perceiver’s latent bottleneck design and our weight sharing mechanism, enables efficient processing of inputs with arbitrary numbers of tokens without compromising the model’s representational capacity.
# 4. Experimental Framework
To evaluate Atomizer’s ability to generalize to unseen sensor configurations, we design a modalitydisjoint evaluation protocol using the BigEarthNet dataset [16], a large-scale benchmark for remote sensing image classification containing Sentinel-2 multispectral imagery across 19 land cover classes. Our experimental setup comprises 69,373 images for training, 65,618 for validation, and 63,972 for testing, with the complete dataset and modality configurations to be made publicly available on GitHub. We define distinct modalities by systematically varying three attributes: (1) spatial dimensions (pixel size), (2) ground sampling distance (GSD), and (3) spectral band composition (subset of Sentinel-2 bands). Each modality represents a unique combination of these attributes, simulating different satellite sensor configurations. This approach allows us to evaluate how models perform when encountering new sensor characteristics not seen during training. Our experimental design ensures that modalities used for training and testing are completely disjoint. We define separate training and evaluation modalities, with each training image associated exclusively with one training modality. By ensuring no image appears under multiple modalities during training, we prevent the model from explicitly learning relationships between different modality configurations. Test images are associated with entirely unseen modalities, creating a true cross-modality generalization challenge. Figure 2 illustrates the distinct modalities used for training and testing, highlighting their varying characteristics in terms of resolution, spatial dimensions, and spectral composition.
We evaluate performance on the multilabel classification task of BigEarthNet, reporting mean Average Precision (mAP) and overall accuracy metrics. For comparison, we implement three baseline architectures: (1) a small ResNet, representing CNN-based approaches; (2) a standard Vision Transformer (ViT), representing patch-based attention methods; and (3) ScaleMAE, representing state-of-the-art approaches that incorporate resolution information via positional encoding. Implementation details for all baseline models are provided in the appendix.
Implementation Details Atomizer was trained for 40 epochs using a learning rate schedule consisting of a 5-epoch linear warmup followed by cosine annealing to zero. We used a batch size of 1024 distributed across two NVIDIA H100 GPUs. The model architecture consists of 4 cross-attention blocks, each containing 4 self-attention layers. To improve parameter efficiency, we employed weight sharing across all cross-attention blocks except the first. For classification output, we implemented latent attention pooling following [15]. To reduce the computational overhead induced by the cross attention layer, we randomly apply pruning on $50 \%$ of the model input tokens before each cross attention block.
Figure 2. Modalities used for training and testing. Each row represents a distinct modality configuration with its spatial dimensions (in pixels), ground sampling distance (meters per pixels), and spectral bands. The shade of blue represents the total number of bands (2-12).
# 5. Results
This section presents experimental results comparing Atomizer with baseline models (Perceiver [8], ViT [10], ResNet [17], and ScaleMAE [6]) across different settings, resolutions, and input sizes.
Modalities used in the experiment
Table 1. Performance comparison of models across different test settings (mean Average Precision $\%$ ).
Table 1 shows model performance across the two evaluation settings. On the standard BigEarthNet benchmark (with access to every band at $1 0 \mathrm { m }$ resolution), Atomizer achieves $4 8 . 6 6 \%$ AP, outperforming all baseline models. The results on modality-disjoint tests (Test 1-6) provide further insights on generalization capability. ViT and ScaleMAE perform similarly on BigEarthNet $( 3 3 . 4 7 \%$ vs. $3 3 . 0 4 \%$ AP), but exhibit different behavior on modality-disjoint tests. In general, ScaleMAE maintains higher performance on the test sets compared to ViT, indicating that its resolution-aware positional encoding benefits cross-modality generalization. This confirms the importance of incorporating resolution information, though ScaleMAE still underperforms compared to a fully modality-agnostic solution. The Perceiver model’s performance is particularly informative. Despite architectural similarities with Atomizer, both representing inputs as tokens mapped to a latent space via cross-attention, Perceiver achieves lower results across all settings ( $1 5 . 1 6 \%$ AP on BigEarthNet). This difference demonstrates that the encoding scheme for tokens is a critical factor determining model performance on remote sensing data. Atomizer’s approach to encoding spatial, spectral, and resolution information within tokens contributes substantially to its performance advantage.
In the following, we discuss model performance across different spatial resolutions (Table 2) and input sizes (Table 3) on the original BigEarthNet. Table 2 shows model performance across spatial resolutions from 20 to $8 0 ~ \mathrm { m / p x }$ . Atomizer achieves $4 8 . 6 6 \%$ AP at $2 0 ~ \mathrm { m / p x }$ and $4 4 . 0 9 \%$ AP at 80 $\mathrm { { m } / \mathrm { { p x } } }$ , representing a larger relative decrease than the rest of methods, but staying comfortably above them in all settings. The nearly constant performance of Perceiver and ScaleMAE across resolutions aligns with its design goal of resolution invariance, but this comes at the cost of lower absolute performance compared to Atomizer. These results indicate that Atomizer’s token-based encoding of resolution as metadata enables effective processing across different spatial scales while maintaining higher performance than resolution-invariant approaches.
Table 2. mean Average Precision (mAP) at different spatial resolutions (in meters per pixel).
Table 3 presents model performance across input sizes from $3 0 \times 3 0$ to $1 2 0 \times 1 2 0$ pixels. Atomizer’s performance increases with input size, from $3 7 . 9 8 \%$ AP at $3 0 \times 3 0$ pixels to $4 8 . 6 6 \%$ AP at $1 2 0 \times 1 2 0$ pixels (a $2 8 . 1 \%$ relative improvement). At all input sizes, Atomizer outperforms all models. ResNet shows non-monotonic behavior, with performance peaking at $6 0 \times 6 0$ pixels ( $3 1 . 3 0 \%$ AP) before declining at $1 2 0 \times 1 2 0$ pixels $( 2 9 . 5 0 \%$ AP). ViT and ScaleMAE exhibit more consistent scaling patterns but with lower overall performance than Atomizer. The Perceiver model maintains constant performance ( $1 5 . 1 6 \%$ AP) regardless of input size. This suggests that despite its token-based design, Perceiver does not effectively utilize the additional spatial context in larger images. These results demonstrate the advantage of Atomizer’s approach to encoding spatial positions within each token.
Table 3. mean Average Precision (mAP) for varying input sizes (in pixels). | The growing number of Earth observation satellites has led to increasingly
diverse remote sensing data, with varying spatial, spectral, and temporal
configurations. Most existing models rely on fixed input formats and
modality-specific encoders, which require retraining when new configurations
are introduced, limiting their ability to generalize across modalities. We
introduce Atomizer, a flexible architecture that represents remote sensing
images as sets of scalars, each corresponding to a spectral band value of a
pixel. Each scalar is enriched with contextual metadata (acquisition time,
spatial resolution, wavelength, and bandwidth), producing an atomic
representation that allows a single encoder to process arbitrary modalities
without interpolation or resampling. Atomizer uses structured tokenization with
Fourier features and non-uniform radial basis functions to encode content and
context, and maps tokens into a latent space via cross-attention. Under
modality-disjoint evaluations, Atomizer outperforms standard models and
demonstrates robust performance across varying resolutions and spatial sizes. | [
"cs.CV"
] |
introduction,” in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017.
[26] A. K. Tanwani, N. Mor, J. Kubiatowicz, J. E. Gonzalez, and K. Goldberg, “A fog robotics approach to deep robot learning: Application to object recognition and grasp planning in surface decluttering,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), IEEE, 2019, pp. 4559–4566.
[27] J. Ichnowski, W. Lee, V. Murta, S. Paradis, R. Alterovitz, J. E. Gonzalez, I. Stoica, and K. Goldberg, “Fog robotics algorithms for distributed motion planning using lambda serverless computing,” in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2020, pp. 4232–4238.
[28] N. Tian, A. K. Tanwani, J. Chen, M. Ma, R. Zhang, B. Huang, K. Goldberg, and S. Sojoudi, “A fog robotic system for dynamic visual servoing,” in 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 1982–1988.
[29] S. L. K. C. Gudi, S. Ojha, B. Johnston, J. Clark, and M.-A. Williams, “Fog robotics for efficient, fluent and robust human-robot interaction,” in 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA), IEEE, 2018, pp. 1–5.
[30] K. E. Chen, Y. Liang, N. Jha, J. Ichnowski, M. Danielczuk, Gonzalez, J. Kubiatowicz, and K. Goldberg, “FogROS: An adaptive framework for automating fog robotics deployment,” in 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), IEEE, 2021, pp. 2035–2042.
[31] K. Chen, R. Hoque, K. Dharmarajan, E. Llontop, S. O. Adebola, J. Ichnowski, J. D. Kubiatowicz, and K. Goldberg, “FogROS2-SGC: A ROS2 cloud robotics platform for secure global connectivity,” 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1–8, 2023.
[32] K. Chen, M. Wang, M. Gualtieri, N. Tian, C. Juette, L. Ren, J. Kubiatowicz, and K. Goldberg, “FogROS2-LS: A location-independent fog robotics framework for latency sensitive ROS2 applications,” Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2024.
[33] K. Chen, K. Hari, R. Khare, C. Le, T. Chung, J. Drake, S. Adebloa, J. Ichnowski, J. Kubiatowicz, and K. Goldberg, “FogROS2-Config: A toolkit for choosing server configuration for cloud robotics,” Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2024.
[34] H. R. Kam, S.-H. Lee, T. Park, and C.-H. Kim, “Rviz: A toolkit for real domain data visualization,” Telecommunication Systems, vol. 60, no. 2, pp. 337–345, 2015.
[35] Foxglove Technologies Inc, Foxglove, https://foxglove.dev.
[36] S. Lhomme, D. Rice, and M. Bunkus, “Extensible binary meta language,” RFC Editor, RFC 8794, Jul. 2020.
[37] Matroska Video Container, https://www.matroska.org/ index.html, Accessed: 2024-09-14.
[38] ITU-T, “Advanced video coding for generic audiovisual services,” International Telecommunication Union, Geneva, Switzerland, Recommendation H.264, 2003.
[39] ITU-T, “High efficiency video coding,” International Telecommunication Union, Geneva, Switzerland, Recommendation H.265, 2023, Version 9.
[40] Alliance for Open Media, Av1 bitstream & decoding process specification, https://aomediacodec.github.io/av1- spec/, Accessed: [Insert Date], 2019.
[41] Library of Congress, Ff video codec 1, version 0, 1 and 3, https: / / www . loc . gov / preservation / digital / formats / fdd/fdd000341.shtml, Accessed: [Insert Date], 2024.
[42] M. Niedermayer, D. Rice, and J. Martinez, “Ffv1 video coding format versions 0, 1, and 3,” RFC Editor, RFC 9043, Aug. 2021.
[43] Linux mmap(2) Manual, https://man7.org/linux/manpages/man2/mmap.2.html, Accessed: 2024-09-14.
[44] LeRobot Video Benchmark, https : / / github . com / huggingface / lerobot / tree / main / benchmarks / video, Accessed: 2024-09-13.
[45] J. Luo, C. Xu, X. Geng, G. Feng, K. Fang, L. Tan, S. Schaal, and S. Levine, “Multi-stage cable routing through hierarchical imitation learning,” IEEE Transactions on Robotics, 2024.
[46] J. Pari, N. M. Shafiullah, S. P. Arunachalam, and L. Pinto, “The surprising effectiveness of representation learning for visual imitation,” Robotics: Science and Systems, 2018.
[47] L. Y. Chen, S. Adebola, and K. Goldberg, Berkeley UR5 demonstration dataset, https://sites.google.com/view/berkeley-ur5/home.
[48] Pyav: Pythonic bindings for FFmpeg’s libraries, https : / / github.com/PyAV-Org/PyAV, Accessed: 2024-09-14.
[49] Decord: An efficient video loader for deep learning with smart shuffling that’s super easy to digest, https://github.com/ dmlc/decord, Accessed: 2024-09-14.
[50] NVIDIA Video Codec SDK, https://developer.nvidia. com/video-codec-sdk, Accessed: 2024-09-14. | Recent results suggest that very large datasets of teleoperated robot
demonstrations can be used to train transformer-based models that have the
potential to generalize to new scenes, robots, and tasks. However, curating,
distributing, and loading large datasets of robot trajectories, which typically
consist of video, textual, and numerical modalities - including streams from
multiple cameras - remains challenging. We propose Robo-DM, an efficient
open-source cloud-based data management toolkit for collecting, sharing, and
learning with robot data. With Robo-DM, robot datasets are stored in a
self-contained format with Extensible Binary Meta Language (EBML). Robo-DM can
significantly reduce the size of robot trajectory data, transfer costs, and
data load time during training. Compared to the RLDS format used in OXE
datasets, Robo-DM's compression saves space by up to 70x (lossy) and 3.5x
(lossless). Robo-DM also accelerates data retrieval by load-balancing video
decoding with memory-mapped decoding caches. Compared to LeRobot, a framework
that also uses lossy video compression, Robo-DM is up to 50x faster when
decoding sequentially. We physically evaluate a model trained by Robo-DM with
lossy compression, a pick-and-place task, and In-Context Robot Transformer.
Robo-DM uses 75x compression of the original dataset and does not suffer
reduction in downstream task accuracy. | [
"cs.RO",
"cs.AI",
"cs.DB",
"cs.LG"
] |
# 1 INTRODUCTION
In the current era of data-intensive applications, the demand for high-performance, cost-effective storage solutions is paramount. Key-value store, with their promise of scalability, flexibility, and rapid data access, have emerged as a pivotal component in this landscape. Renowned key-value stores such as Redis [50], Memcached [31], Cassandra [35], and HBase [54] have set industry benchmarks, offering solutions designed for diverse use cases ranging from inmemory caching to persistent storage.
At Ant Group, we face numerous challenges in managing our online data serving systems. These challenges stem from the intrinsic nature of our services and the evolving demands of our vast user base. We handle the immense volume of data generated by billions of users, necessitating robust, low-latency, and cost-effective storage solutions. The diverse scenarios and applications lead to $\mathbf { \delta } _ { \mathbf { \alpha } \mathbf { \beta } \mathbf { \vec { a } } }$ wide spectrum of workloads with varying needs for reliability, durability, and latency. Additionally, significant skewness in data access patterns, with dynamically changing hot spots, complicates data access and caching strategies. Furthermore, in order to handle these workloads, we have to maintain an extremely large number of machines, which leads to a non-trivial configuration decision challenge. This challenge demands a cost model to accurately quantify the cost-performance trade-offs.
Figure 1: Cost comparison in TierBase
To address these challenges, we introduce TierBase, a distributed key-value store developed by Ant Group since 2017. Initially Rediscompatible, TierBase has evolved to support advanced functions such as CAS operations, wide-columns, and vector searching. Extensively utilized within Ant Group, TierBase maintains sub-millisecond access latency even under peak loads of hundreds of millions of queries per second (QPS), crucial for delivering seamless user experiences during events like the Double 11 shopping festival.
To optimize costs further, we developed several cost-saving strategies within TierBase. Figure 1 illustrates the cost-performance trade-offs and the impact of these optimization techniques in a realworld use case. The figure shows how enabling our cost-saving strategies leads to reductions in both space cost (𝑆𝐶) and performance cost $( P C )$ , which represent storage and query processing expenses, respectively. In our cost model, the overall cost is defined as the maximum of $S C$ and $P C$ , represented by the green line in the figure. Despite the increase in $P C$ , the significant reduction in $_ { S C }$ results in an overall cost decrease. Our pre-trained compression technique (TierBase-PBC) achieves up to a $6 2 \%$ cost reduction over the baseline (TierBase-Raw), demonstrating substantial savings in one of our primary online serving scenarios.
In developing TierBase, we focus on two key questions: Q1: How can we develop a comprehensive cost model for large-scale online data serving systems that adapts to varying workloads? Developing a comprehensive cost model for complex online data serving systems requires identifying key metrics that accurately capture real-world costs across various workloads. Traditional models [40, 42] typically focus on overall system costs without considering specific workload characteristics. The challenge lies in developing a quantitative framework that accurately models cost-performance trade-offs between different storage configurations while incorporating workload-specific characteristics and capturing the non-linear relationship between system configuration and cost.
Q2: How can we effectively evaluate and apply optimization techniques in key-value stores to balance performance and cost for specific workloads? The research and industry communities have developed a number of innovative techniques [33, 53] to optimize performance and storage efficiency for key-value stores. However, they are often workload-specific, and there is no universal solution. The challenge lies in developing a unified framework to evaluate and apply these diverse optimization techniques across different workloads considering both performance metrics and overall system cost. This cost model aims to enable informed decisions about which techniques to apply in different scenarios, moving beyond one-size-fits-all solutions towards a workload-aware optimization strategy for key-value stores.
To answer these questions, in this paper, we introduce: SpacePerformance Cost Model: A novel approach balances performance and space costs, proposing that optimal cost is achieved when these factors are equal. This model extends to tiered storage systems, incorporating cache ratios and miss ratios to determine cost-effectiveness.
TierBase: Guided by this cost model, TierBase employs a tiered storage architecture that balances performance and storage space for cost-efficiency. It incorporates innovative features such as a flexible design for handling diverse workloads, pre-trained data compression techniques, elastic threading mechanisms, and utilization of persistent memory.
Cost Optimization Framework: We introduce a framework for evaluating and optimizing costs for key-value stores. This framework includes strategies for adapting to diverse workloads and provides guidelines for making cost-effective decisions in system configuration and resource allocation.
To sum up, the main contributions are as follows:
• We propose a comprehensive Space-Performance Cost Mode for key-value store that aligns with and extends the classic Five-Minute Rule, guiding decisions on optimal storage configuration selection.
We present the architecture of TierBase, a distributed keyvalue store that leverages a tiered storage design to effectively balance performance and cost-efficiency across diverse workloads.
We introduce and evaluate several cost-optimizing techniques implemented in TierBase, including pre-trained data compression, elastic threading mechanisms, and the utilization of persistent memory.
We demonstrate the effectiveness of our cost optimization framework through extensive evaluations using synthetic benchmarks, real-world workloads, and case studies from Ant Group’s production environments. These evaluations showcase TierBase’s superior cost-effectiveness compared to existing solutions and its ability to achieve significant cost reductions in large-scale, data-intensive applications.
Performance cost Pweorrfkolromaadnce-critical Cost-optima Performance cost Storage-only Cost-optimized Storage Cache Tier Space-critical Tier Cache-only workload Space cost Space cost
(a) Single-Tier Storage (b) Tiered Storage
# 2 SPACE-PERFORMANCE COST MODELING
In this section, we introduce a novel cost model for key-value storage systems, based on real-world implementations at Ant Group. Our model unifies performance and storage costs, providing a comprehensive framework for system optimization in online data serving with real-time latency requirements.
# 2.1 Cost Analysis Framework
Our framework is built upon two primary components: Performance Cost $( P C )$ in key-value storage systems reflects the expenses associated with data transfer from storage media to endusers, encompassing resource utilization for both read and write operations. This includes CPU overhead, network I/O, disk IOPS consumption, and memory bandwidth usage.
Space Cost (𝑆𝐶) depends on the resources and expenses for data storage, varying by storage medium and space used. In caching and in-memory storage, it’s linked to the data volume in RAM, which is fast but expensive. Disk-based storage costs are typically cheaper for large data volumes. These costs also account for storage overhead related to data structures and replicas necessary to ensure reliability and availability.
Space-Performance Cost Model. Our Space-Performance Cost Model is based on the observation that in enterprise data centers and cloud environments, resource instances (virtual machines or containers with compute and storage resources) are typically provided with pre-defined allocations. These allocations are usually evenly divided to maximize utilization, precluding arbitrary resource allocation.
Given this constraint, for a given workload $ { \boldsymbol { w } }$ , which is a stream of read and write operations on a given resource instance 𝑖 and storage system with configuration 𝑠, both the maximum performance $( M a x P e r f ( w , i , s ) )$ and the maximum storable data amount $( M a x S p a c e ( w , i , s ) )$ are deterministic and quantifiable. These metrics are measured in queries per second (QPS) and gigabytes (GB), respectively.
In a distributed, shared-nothing architecture, we define the monetary cost $C$ as the maximum of the performance cost $( P C )$ and the space cost (𝑆𝐶) for a given workload on a set of resource instances 𝑖 with the same configuration:
Definition 1 (Cost of workload $ { \boldsymbol { w } }$ ). The cost of workload $ { \boldsymbol { w } }$ is defined as the maximum of the performance cost and the space cost for a given workload on a set of resource instances $i$ with a specific storage configuration 𝑠:
$$
C ( w , i , s ) = \operatorname* { m a x } ( P C ( w , i , s ) , S C ( w , i , s ) )
$$
Where:
$$
\begin{array} { r } { P C ( w , i , s ) = C o s t ( i ) \times \left\lceil \frac { Q P S ( w ) } { M a x P e r f ( w , i , s ) } \right\rceil } \\ { S C ( w , i , s ) = C o s t ( i ) \times \left\lceil \frac { D a t a S i z e ( w ) } { M a x S p a c e ( w , i , s ) } \right\rceil } \end{array}
$$
Here, $C o s t ( i )$ is the monetary cost of a single resource instance 𝑖 , $Q P S ( w )$ is the total queries per second for the workload $ { \boldsymbol { w } }$ , $D a t a S i z e ( w )$ is the total amount of data to be stored for workload $ { \boldsymbol { w } }$ , and $M a x P e r f ( w , i , s )$ and $M a x S p a c e ( w , i , s )$ are the maximum performance and space capacity for the given resource instance 𝑖 and storage configuration 𝑠, respectively.
In a distributed, shared-nothing architecture, the maximum of the performance cost and space cost is used because the system must be provisioned to meet the greater of the two demands: query processing or data storage. This ensures that sufficient resources are allocated to handle the workload’s heaviest requirement, whether it’s the query throughput or the data volume.
For real-world deployments, we incorporate tolerance ratios for both 𝑀𝑎𝑥𝑃𝑒𝑟 𝑓 and 𝑀𝑎𝑥𝑆𝑝𝑎𝑐𝑒, ensuring system redundancy and reliability. These ratios accommodate variations in workload distribution and access patterns, enabling adaptation to scenarios that deviate from the cost model’s assumption of even data sharding.
In Figure 2(a), we illustrate both the performance and space costs. For workloads where the performance cost exceeds the space cost, we categorize them as performance-critical workloads. Conversely, when the space cost is dominant, we refer to these as space-critical workloads.
# 2.2 Cost Efficiency Metrics
Our cost model provides a framework for optimizing resource allocation and system configuration in distributed key-value storage systems. By leveraging the insights from this model, we can make informed decisions to minimize overall costs while meeting both performance and space requirements.
Consider a scenario where a workload typically requires multiple resource instances. We can simplify our model by removing the ceiling function and define the cost metrics as follows:
Definition 2 (Cost Metrics). We define two key cost metrics:
$$
\begin{array} { c } { { C P Q P S = C o s t ( i ) / M a x P e r f ( w , i , s ) } } \\ { { C P G B = C o s t ( i ) / M a x S p a c e ( w , i , s ) } } \end{array}
$$
where CPQPS is the Cost per Query per Second, representing the performance cost incurred for processing each query per second, and CPGB is the Cost per GB, representing the space cost for storing each gigabyte of data.
For ease of presentation in subsequent discussions, we will consider the same 𝑖 and $ { \boldsymbol { w } }$ , allowing us to represent different configurations as subscripts in later formulae. Using these cost metrics, we can express the total cost of the storage system as:
$$
C = \operatorname* { m a x } ( C P Q P S \times Q P S , C P G B \times D a t a S i z e )
$$
This formulation allows us to evaluate and optimize the overall system cost by considering both performance and space requirements.
# 2.3 Space-Performance Trade-off and Optimal Cost Theorem
Our cost model reveals an inherent trade-off between performance and space costs in key-value storage systems. This trade-off forms the basis for our Optimal Cost Theorem.
Definition 3 (Space-Performance Trade-off of Storage Configurations). Given a set of storage configurations 𝑆, the space-performance trade-off describes the relationship between the Cost per Query per Second (𝐶𝑃𝑄𝑃𝑆) and the Cost per Gigabyte (𝐶𝑃𝐺𝐵), expressed as: $C P Q P S _ { s } = f ( C P G B _ { s } )$ , $s \in S$ where $f$ is a non-increasing function. As $C P G B _ { s }$ decreases for a given configuration 𝑠, $C P Q P S _ { s }$ tends to increase, and vice versa.
A typical example of this trade-off is data compression: for a fixed compression algorithm, setting higher compression levels reduces space cost (𝐶𝑃𝐺𝐵) but increases performance cost (𝐶𝑃𝑄𝑃𝑆) due to added computational overhead, allowing a trade-off between storage efficiency and query performance.
Based on the trade-off, we establish the Optimal Cost Theorem:
Theorem 2.1 (Optimal Cost $C ^ { * }$ ). For a given workload 𝑤 with requirements $Q P S$ and 𝐷𝑎𝑡𝑎𝑆𝑖𝑧𝑒, and a set of storage configurations 𝑆, the optimal cost $C ^ { * }$ is achieved by selecting the configuration $s ^ { * } \in S$ that minimizes the overall cost while balancing performance and space costs: $\begin{array} { r } { C ^ { * } = \operatorname* { m i n } _ { s \in S } \operatorname* { m a x } ( P C _ { s } , S C _ { s } ) } \end{array}$
The optimal configuration $s ^ { * }$ is one that minimizes the absolute difference between performance and space costs: $\begin{array} { r } { s ^ { * } = \arg \operatorname* { m i n } _ { s \in S } \left| P C _ { s } - \right. } \end{array}$ $S C _ { s } |$
Proof. Let $s ^ { * }$ be the optimal configuration that minimizes the overall cost:
$$
\begin{array} { r } { C ^ { * } = \operatorname* { m i n } _ { s \in S } \operatorname* { m a x } ( P C _ { s } , S C _ { s } ) } \end{array}
$$
Assume, for contradiction, that $P C _ { s ^ { * } } \neq S C _ { s ^ { * } }$ . Without loss of generality, let $P C _ { s ^ { * } } > S C _ { s ^ { * } }$ . Then:
$$
C ^ { * } = P C _ { s ^ { * } }
$$
Now, consider a configuration $s ^ { \prime }$ that slightly reduces $P C _ { s ^ { * } }$ at the expense of increasing $S C _ { s ^ { * } }$ , such that:
$$
P C _ { s ^ { \prime } } = P C _ { s ^ { * } } - \epsilon S C _ { s ^ { \prime } } = S C _ { s ^ { * } } + \delta
$$
Where $\epsilon > 0$ and $\delta > 0$ are small positive values.
If we choose $\epsilon$ and $\delta$ such that $S C _ { s ^ { * } } + \delta < P C _ { s ^ { * } } - \epsilon$ , then
This contradicts the assumption that $s ^ { * }$ is the optimal configuration. Therefore, our initial assumption must be false, and we must have $P C _ { s ^ { * } } = S C _ { s ^ { * } }$ for the optimal configuration.
Hence, the optimal configuration $s ^ { * }$ is one that minimizes the absolute difference between performance and space costs:
$\begin{array} { r } { s ^ { * } = \arg \operatorname* { m i n } _ { s \in S } \left| P C _ { s } - S C _ { s } \right| } \end{array}$
This proves both parts of the theorem.
The proof demonstrates that any imbalance between performance and space costs can be optimized to yield a lower total cost, thus establishing the optimal point at their equality.
This theorem provides a guiding principle for optimizing keyvalue storage systems. It suggests that the most cost-effective configuration is one where the system’s resources are balanced such that neither space nor performance costs dominate. This balance point represents the optimal trade-off between space and performance for the given workload.
# 2.4 Cost Model with Tiered Storage
Tiered storage systems typically comprise two layers: a cache tier and a storage tier. The cache tier, often utilizing memory-based technologies, prioritizes performance, while the storage tier focuses on capacity and cost-effectiveness. This structure allows systems to balance performance and capacity requirements more efficiently than single-tier alternatives.
In a tiered storage system, the total cost is a function of both the cache tier and the storage tier. We propose a comprehensive cost model for tiered storage systems that accounts for both performance and capacity costs across tiers:
$$
\begin{array} { r } { C _ { t i e r e d } = \operatorname* { m a x } ( P C _ { c a c h e } + P C _ { m i s s } \times M R , } \\ { S C _ { c a c h e } \times C R ) } \\ { + \operatorname* { m a x } ( P C _ { s t o r a g e } \times M R , S C _ { s t o r a g e } ) } \end{array}
$$
where:
• $C R$ is the cache ratio (cache capacity / total capacity)
• 𝑀𝑅 is the cache miss ratio (proportion of requests served by the storage tier)
• $P C _ { m i s s }$ is the additional performance cost incurred on a cache miss
• $P C _ { c a c h e / s t o r a g e }$ and $S C _ { c a c h e / s t o r a g e }$ represent performance and space costs for each tier
This model enables a comparison between tiered storage and single-tier alternatives. Tiered storage becomes cost-effective when its total cost is lower than both a pure cache solution and a pure storage solution, expressed as: $C _ { t i e r e d } \ < \ \operatorname * { m i n } ( C _ { c a c h e } , C _ { s t o r a g e } )$ This approach can be particularly effective for workloads with skewed data access patterns.
The model provides insights into optimizing cache ratio (𝐶𝑅) and managing miss ratio (𝑀𝑅) to minimize overall system cost. Further detailed cost analysis on tiered storage is presented in Section 5.2.
# 2.5 Cost Optimization Strategies
Our cost model serves as a valuable guide for optimization efforts, assisting system designers and administrators in their decisionmaking process. We discuss optimization strategies for both singletier and tiered storage systems, using the space-performance cost model as our framework.
2.5.1 Single-Tier Storage Optimization. In single-tier storage systems, the primary goal is to balance performance cost (𝐶𝑃𝑄𝑃𝑆) and space cost (𝐶𝑃𝐺𝐵) to minimize overall cost. The optimization strategies depend on the workload characteristics:
Space-Critical Workloads: When the workload is space-critical. Here, reducing 𝐶𝑃𝐺𝐵 becomes the primary goal. One approach is to enable data compression, which reduces 𝐶𝑃𝐺𝐵 but may increase $C P Q P S$ due to the additional computational overhead. The overall cost can still be optimized through such trade-offs according to the Optimal Cost Theorem. Other potential approaches include using instance with larger storage and implementing tiered storage solutions.
Performance-Critical Workloads: In this case, optimization efforts should focus on reducing 𝐶𝑃𝑄𝑃𝑆. Strategies may include: Optimizing query execution, tiered caching mechanisms, and utilizing faster storage media for frequently accessed data.
Resource Instance Selection: Choosing the right type of resource instance can significantly impact both $C P Q P S$ and 𝐶𝑃𝐺𝐵. This involves analyzing different instance types to find the optimal balance between performance capabilities and storage space for the specific workload.
2.5.2 Tiered Storage Optimization. Tiered storage systems offer additional opportunities for cost optimization by leveraging the strengths of different storage tiers. The effectiveness of tiered storage depends on three key factors:
Skewed Data Access Pattern: Optimal tiered storage performance occurs when both Cache Ratio (𝐶𝑅) and Miss Ratio (𝑀𝑅) are low. This scenario is typical in workloads with high temporal locality, where a small subset of "hot" data is frequently accessed. The cache tier can store this data, resulting in a low 𝐶𝑅 while serving most requests (low 𝑀𝑅).
Cost Disparity between Tiers: The significant cost difference between cache and storage tiers is crucial for effective tiered storage. The cache tier offers high performance at a higher cost per unit capacity, while the storage tier provides lower performance at a much lower cost. This disparity allows the system to balance high performance for frequently accessed data with cost-effective storage for less accessed information.
Low Miss Penalty: The miss penalty $( P C _ { m i s s } )$ represents the additional performance cost when a request misses the cache. A low miss penalty is vital for effective tiered storage as it reduces the impact of cache misses, allowing for a smaller cache (lower 𝐶𝑅) without significantly degrading overall performance. We will present our techniques for minimizing $P C _ { m i s s }$ in Subsection 4.1.
2.5.3 Workload-Driven Optimization Approaches. Instead of immediately diving into TierBase-specific techniques, we can introduce a more general framework for mapping workload characteristics to optimization strategies. This sets the stage for the detailed techniques that will be discussed in the following section.
Table 1 presents a mapping of various workload features to optimization strategies implemented in TierBase. This mapping illustrates how specific workload characteristics inform the choice of optimization techniques.
These optimization techniques are designed to address specific workload characteristics and leverage the strengths of both singletier and tiered storage architectures. By applying these strategies guided by our cost model, we can iteratively refine system configurations, resource allocations, and feature enablements.
In the following sections, we will detail each of these optimization techniques, explaining how they work and how they contribute to overall system cost-effectiveness. Later, in Section 6, we will demonstrate how these strategies are applied in practice. Our evaluation methodology involves replaying real-world workloads and assessing costs across various configurations, providing empirical validation of our cost-optimization approach.
TierBase: A Workload-Driven Cost-Optimized Key-Value Store [Extended Version]
# 3 TIERBASE SYSTEM DESIGN
Building upon our unified space-performance cost model, we present TierBase, a high-performance, distributed key-value storage system designed to optimize cost for large-volume online storage. TierBase leverages a tiered storage architecture to provide low-latency, costeffective data access while addressing the challenges highlighted in our cost model.
TierBase extends Redis’s capabilities by supporting not only basic key-value operations like GET and SET, but also advanced data structures such as lists, sets, and sorted sets. Additionally, it provides CAS (Compare-And-Set) operations, wide-column data handling and vector search.
TierBase supports vector search by integrating the VSAG library[7] a vector indexing library developed by Ant Group for similarity search, which enables efficient ANN queries over high-dimensional vectors within our key-value infrastructure. The integration supports dynamic vector operations, including real-time insertion and deletion in memory, demonstrating performance improvements of 3-4x compared to conventional algorithms such as HNSW.
As shown in Figure 3, TierBase incorporates a tiered storage architecture, separating the caching tier from the storage tier, allowing independent scaling based on workload requirements. The cache tier employs in-memory hash tables stored in DRAM or persistent memory (PMem) for efficient random access performance, while the storage tier typically utilizes a LSM-tree structure stored on SSD or HDD to optimize write performance and storage capacity. This architecture allows TierBase to effectively balance high-speed data access with efficient data storage across different storage media.
TierBase also features memory compression and elastic threading support, optimizing resource utilization dynamically.
The architecture of TierBase is structured into three primary tiers: client, cache, and storage.
Client Tier. The client tier consists of TierBase clients and proxy services. TierBase clients, compatible with native Redis clients, retrieve cluster routing information from the coordinator cluster for direct data access. They handle failover, cluster scaling automatically. For small-scale scenarios, TierBase provides a proxy service facilitating rapid integration for public cloud users.
Cache Tier. The cache tier in TierBase consists of instances and coordinators. Each instance serves as a data node with key hashes for data sharding. The cache instances implements hash tables for efficient key-value storage. Moreover, the cache tier can operate independently without the storage tier for in-memory storage use cases, providing high-speed data access similar to systems like Redis and Memcached. TierBase supports both single-replica and multi-replica modes, implementing various replication protocols to accommodate different reliability requirements. Coordinators oversee the entire cluster, managing failovers and administering tenant resource allocation.
Storage Tier. The storage tier provides data persistence through a disaggregated key-value storage system. The cache tier directly accesses this tier for cache misses, employing either write-through or write-back policies for data writing.
TierBase offers various disaggregated storage options through a pluggable storage adapter. In our experiments, we focus on the Universal Configurable Storage $( \mathrm { U C S } ) ^ { 1 }$ , a sophisticated real-time serving and analytical storage engine. UCS implements an LSMTree with a shared disk architecture and remote compaction. This design ensures optimal online performance while supporting both row and columnar storage formats.
Although our experiments focus on an LSM-tree storage engine, the pluggable storage adapter in TierBase allows integration with various disaggregated storage systems based on different data structures. Consequently, the cost optimization techniques in the cache tier and cost evaluation with the cost model can be applied to a wide range of key-value stores.
Additionally, TierBase includes monitoring and analysis tools for real-time metrics collection, problem diagnosis, and workloadbased suggestions. It integrates with Cougar[51] for automatic scaling in cloud environments.
# 4 COST OPTIMIZATION STRATEGIES
In this section, we introduce the well-designed features which aim to optimize the cost of TierBase.
# 4.1 Tiered Storage
TierBase introduces a tiered storage architecture that disaggregates cache and storage components, allowing for independent optimization. This approach directly addresses the space-performance tradeoff highlighted in our cost model. The cache tier is optimized for speed (minimizing $P C _ { c a c h e } )$ , while the storage tier is designed for capacity and durability (optimizing $S C _ { s t o r a g e } )$ . Both tiers can scale independently, accommodating diverse workloads and data access patterns.
To ensure data consistency and reliability in this disaggregated architecture, we adapt the well-known caching techniques "writethrough" and "write-back". These strategies are commonly used in conventional hardware scenarios to synchronize data between cache and storage. However, applying these techniques to a disaggregated architecture presents unique challenges. In traditional contexts, write-through and write-back policies are implemented within a single tier, where the cache and storage are tightly coupled.
Application Application Client Tier TierBase Client Redis Client Monitor Insight Read/Write Proxy Read/Write ThErleastdiicng Compressors Mouldtei-l ThErleastdiicng Compressors Mouldtei-l Coordinator (Leader) Replication Layer PrSoytnocol Replication Layer Cache Tier In-memory Engine P In-memory Engine CFolnotrwol C(Foolrldoinwaetro)r WAL 2 WAL 包 Local Storage E Local Storage Coordinator 门 PMEM SSD SSD PMEM SSD 55D (Follower) TierBase instance (master) TierBase instance (replica) Coordinator Groups Write-through / Write-back Storage Tier 55 Disaggregated Storage I
This tight coupling allows for simpler coordination and synchronization mechanisms between the cache and storage, as they reside on the same physical node or have low-latency communication channels. In contrast, the separation of cache and storage tiers in TierBase introduces new complexities in maintaining data consistency and synchronization, requiring careful design and implementation of coordination protocols and synchronization strategies.
4.1.1 Write-through Caching. In the write-through caching policy (Figure 4(a)), TierBase prioritizes data consistency between the cache and storage tiers. When a write request is received, it is first executed on the cache tier and then synchronously passed to the disaggregated storage tier before acknowledging completion to the application. If the storage update succeeds, the cache tier maintains the updated data; otherwise, the corresponding cache entry is invalidated, and an error is returned to the application.
To ensure data consistency in the presence of failures while reducing $P C _ { m i s s }$ , TierBase employs several key techniques:
Temporary Update Buffer. Each connection maintains a temporary update buffer. Incoming update requests are initially performed on this buffer, and the results are used to update the main cache. If the storage write succeeds, the data is seamlessly transferred from the temporary buffer to the main cache. In case of a storage write failure, the corresponding entry in the main cache is removed, ensuring subsequent reads fetch the data from the storage, maintaining consistency.
Sequential Write Ordering. TierBase uses a per-key write queue to maintain the sequential order of writes to the same key, ensuring consistent execution order in the cache tier for asynchronous storage updates.
Write Coalescing. Within Redis’s event loop, TierBase coalesces multiple write commands targeting the same key into a single operation, similar to the concept of group commit in database systems. This approach efficiently updates the store with the final result, reducing the number of write operations and consequently lowering 𝑃𝐶𝑚𝑖𝑠𝑠 .
This write-through strategy is particularly effective in environments where read operations significantly outnumber write operations, and high data reliability are critical.
Application Application Update 1 4 Update Update 1 4 Query Request Response Request Response Disaggregated Cache l Disaggregated Cache (with replicas) EUxpedcauttee 2 3 CUapcdhaete EQxuecryute 3 CUapcdhaete BUaptdcahte CUapcdhaeteStatus SSD Disaggregated Storage SSD Disaggregated Storage (a) Write-through caching policy (b) Write-back caching policy
4.1.2 Write-back Caching. The write-back caching policy in TierBase prioritizes performance by optimizing $P C _ { c a c h e }$ and reducing $P C _ { m i s s }$ . Updates are first written to the cache tier with immediate response to the application, while data synchronization to the storage tier is deferred and performed asynchronously in batches. This approach minimizes 𝑃𝐶𝑠𝑡𝑜𝑟𝑎𝑔𝑒 by reducing the frequency of writes to the storage tier.
In cases where requested data is not present in the cache during an update operation, TierBase fetches the data from the storage tier before updating the cache. Data updated in the cache but not yet synchronized to storage is marked as“dirty" and periodically propagated in batches, minimizing remote calls to the storage tier.
Implementing write-back caching in a tiered storage architecture introduces unique challenges in ensuring data reliability and optimizing synchronization efficiency between cache and storage: Replication of Cache. To prevent data loss in case of cache tier failure, TierBase maintains multiple replicas of dirty data and cache contents.
Managing Dirty Data. TierBase balances the scale of dirty data by restricting its size and establishing maximum interval times for batch updates. A backpressure mechanism is activated when dirty data approaches a predefined threshold.
Optimizing Update. TierBase minimizes remote calls to the storage tier by batching updates and merging multiple updates for the same key.
Deferred Cache-fetching. For update operations on missing keys, TierBase accumulates operations and submits batch read tasks to fetch data from storage, reducing read requests and minimizing costs in both tiers.
Figure 5: The framework of pre-trained based compression
The consistency level in write-back caching is determined by the cache tier’s configurable coherent protocol, ensuring updates are eventually propagated to underlying storage with strong consistency support.
# 4.1.3 Write-through versus write-back. TierBase supports both writethrough and write-back caching policies, each offering different trade-offs:
Write-through caching provides lower $S C _ { c a c h e }$ as it doesn’t store dirty data, but potentially higher $P C _ { c a c h e }$ for write-heavy workloads due to synchronous storage updates. Write-back caching offers lower $P C _ { c a c h e }$ and $P C _ { m i s s }$ for write-heavy workloads due to deferred and batched storage updates, but incurs higher $S C _ { c a c h e }$ due to storing dirty data and potential replication.
The choice between these policies depends on specific workload characteristics and relative costs of cache and storage resources. Write-back caching may provide better cost-performance for writeheavy workloads with good temporal locality, while write-through caching may be more cost-effective for read-heavy workloads or when storage writes are relatively inexpensive.
TierBase’s flexible configuration allows users to select the appropriate caching policy based on their application’s requirements and cost structure, optimizing the space-performance trade-off for various workloads and scenarios.
# 4.2 Pre-trained Compression Mechanism
In the context of our space-performance cost model, in-memory data compression plays a crucial role in optimizing the trade-off between storage costs (𝑆𝐶) and performance costs $( P C )$ within the memory tier.
In TierBase, we develop a pre-trained compression mechanism which includes two efficient compression algorithms: our newly developed Pattern-Based Compression (PBC) [6, 59] and the widely adopted Zstandard (Zstd) [14] by Meta. The framework of pretrained compression is shown in Figure 5. In the pre-training phase, Zstd builds a dictionary by identifying frequent strings in the data, while PBC employs hierarchical clustering and a unique similarity metric to pinpoint and extract data patterns. In the compression phase, the resulting patterns, along with residual strings, are then compressed further using string compression techniques.
Initially, we construct the dictionary (patterns) offline using samples from data records. We then apply this dictionary to the entire workload, enabling data compression and decompression.
A key challenge with pre-trained compression in production is the need to re-sample and re-train datasets when patterns change to avoid reduced compression ratios. To address this, TierBase introduces a monitoring service that continuously tracks compression efficiency and initiates re-sampling and re-training when necessary. Specifically, it monitors the compression ratio and the number of data records that do not align with the pattern. Re-sampling and retraining are triggered when the compression ratio falls below a baseline level or when the rate of unmatched records exceeds a predefined threshold.
Furthermore, TierBase’s Insight service includes a compressor recommender that automatically suggests the optimal compressor based on data types and performance requirements. This adaptive compression mechanism supports various data types, dynamically rebuilding the dictionary to accommodate changes in data patterns.
Our experiments demonstrate the cost-effectiveness of our pretrained compression mechanism, enabling TierBase to dynamically adjust its compression strategy and balance the space-performance trade-off based on evolving data patterns and workloads. Despite a moderate increase in performance cost $( P C )$ for write operations due to compression overhead, the significant reduction in space cost (𝑆𝐶) and high decompression speed for read operations optimize overall cost-effectiveness. By adjusting compression levels, TierBase can fine-tune the balance between $S C$ and $P C$ , achieving an optimal point in the cost model to minimize total cost while maintaining high performance across diverse workloads.
# 4.3 Persistent Memory Utilization
A standout component in TierBase’s storage strategy is the adoption of persistent memory(PMem). The PMem distinguishes itself through its capacity, affordability compared to DRAM, swift memory-like access speeds, and its non-volatile nature. The utilization of PMem enhances the overall performance while reduce the cost of TierBase in two folds:
DRAM Extension: Serving as an economical DRAM supplement, PMem allows for efficient memory use by keeping frequently accessed (hot) data, in DRAM, while less accessed (cold) data is stored in PMem. This strategy optimizes the balance between space cost (𝑆𝐶) and performance cost $( P C )$ .
WAL Persistence: PMem greatly improves TierBase’s Write-Ahead Log (WAL) persistence by overcoming the I/O operations per second (IOPS) bottleneck found in disk or cloud storage, ensuring faster and more consistent data synchronization. Crucially, WAL files are first written to a PMem-based persistent ring buffer, then batch-moved to cloud storage, achieving high throughput and real-time persistence, thus significantly boosting performance in high-demand scenarios.
To address the performance gap between PMem and DRAM, particularly in write latency, TierBase employs a refined memory allocation strategy. Small, frequently accessed data (keys and indexes) are stored in DRAM, while larger value data resides in PMem. Write operations to PMem are optimized through batching: data structures are assembled in DRAM before bulk transfer to PMem, reducing the impact on performance costs.
Our production experience demonstrates that PMem, when integrated into a tiered storage architecture alongside DRAM and SSDs, delivers strong performance even with straightforward data structures. This approach effectively balances performance and cost in TierBase’s storage system.
CPU0 CPU1 CPU2 CPU3 Main Thread Main Thread Main Thread Main Thread Idle RPC Idle RPC Active RPC Idle RPC Thread Thread Thread Thread Idle RPC Idle RPC Active RPC Idle RPC Thread Thread Thread Thread Data Node Data Node Data Node Data Node Container Boost mode
# 4.4 Elastic Threading
TierBase implements an innovative elastic threading approach to optimize performance cost $( P C )$ within allocated node resources. This dynamic method seamlessly switches between single-thread and multi-thread modes based on workload demands, enhancing system responsiveness and resource efficiency without external scaling.
In normal conditions, TierBase operates in a default single-thread mode, utilizing an event-driven model with epoll. This approach offers high CPU efficiency and lower $P C$ for typical workloads. We claim that the efficiency of a single-threaded process per data shard generally outperforms that of multi-threading due to reduced locking overhead, a principle supported by Amdahl’s Law[4].
When the workload on a particular instance increases significantly, TierBase seamlessly transitions to multi-threaded mode by dynamically adding threads within the container’s pre-allocated CPU resources. Containers are provisioned with CPU capacity based on anticipated peak workloads, but this capacity isn’t fully utilized during normal operations. Elastic threading allows TierBase to leverage these underutilized CPU resources when needed, boosting the affected instance’s performance without exceeding resource limits or incurring additional costs. When the workload subsides, TierBase switches back to single-thread mode, allowing CPU resources to be used by other processes within the container and maximizing resource efficiency.
Elastic threading is particularly effective for skewed workloads like dynamic hotspots. Typically, one instance might switch to multi-threaded mode while others remain in single-threaded mode within the same container, optimizing resources across the system. If the container’s overall CPU load remains consistently high, the system recognizes the need to scale out to further enhance tenant performance.
This approach improves responsiveness to immediate demands and optimizes resource use, preventing unnecessary allocation during low-activity periods. By dynamically balancing between single-threaded efficiency and multi-threaded performance, elastic threading significantly contributes to overall CPU efficiency. Elastic threading uses only idle CPU resources within the allocated container. Threads are dynamically added or removed based on workload demands, ensuring efficient resource use without overprovisioning or incurring extra costs.
# 5 FURTHER COST ANALYSIS AND FRAMEWORK
This section provides an in-depth analysis of our cost model by examining space-performance trade-offs in storage systems, relating these to our Optimal Cost Theorem and the classic Five-Minute Rule. We adapt the Five-Minute Rule for modern distributed systems and provide a framework for cost optimization for tiered storage.
# 5.1 Adapting the Five-Minute Rule for Modern Storage Systems
The Five-Minute Rule[8, 22, 23], introduced by Jim Gray and Gianfranco Putzolu in 1985, has been a cornerstone in database system design. Originally formulated for single-server environments, it provided a simple heuristic for deciding whether data should be kept in memory or on disk based on its access frequency:
$$
\begin{array} { r } { B r e a k E v e n I n t e r v a l = \left( \frac { P a g e s P e r M B o f R A M } { A c c e s s P e r S e c o n d P e r D i s k } \right) } \\ { \times \left( \frac { P r i c e P e r D i s k D r i v e } { P r i c e P e r M B o f R A M } \right) } \end{array}
$$
However, in today’s distributed and cloud-based systems, we need to consider a broader range of factors and trade-offs. We propose an adapted version of the Five-Minute Rule that aligns with our cost model:
$$
B r e a k E v e n I n t e r v a l = \frac { C P Q P S _ { s l o w } } { C P G B _ { f a s t } \times A v e r a g e R e c o r d S i z e }
$$
Where $C P Q P S _ { s l o w }$ is the Cost Per Query Per Second for slower, space-optimized storage, $C P G B _ { f a s t }$ is the Cost Per Gigabyte for faster, performance-optimized storage, and 𝐴𝑣𝑒𝑟𝑎𝑔𝑒𝑅𝑒𝑐𝑜𝑟𝑑𝑆𝑖𝑧𝑒 is the average size of data records in the workload.
To illustrate how our adapted rule relates to the original, we provide the following mapping.
The ratio $\left( \frac { P r i c e P e r D i s k D r i v e } { A c c e s s P e r S e c o n d P e r D i s k } \right)$ effectively represents the 𝐶𝑃𝑄𝑃𝑆𝑠𝑙𝑜𝑤.
• 𝑃𝑟𝑖𝑐𝑒𝑃𝑒𝑟𝑀𝐵𝑜 𝑓 𝑅𝐴𝑀 correlates to $C P G B _ { f a s t }$ .
• 𝑃𝑎𝑔𝑒𝑠𝑃𝑒𝑟𝑀𝐵𝑜 𝑓 𝑅𝐴𝑀 is conceptually similar to $\scriptstyle \left( { \frac { 1 G B } { A v e r a g e R e c o r d S i z e } } \right)$
This formulation determines the optimal point on the spaceperformance trade-off spectrum for a given workload in a distributed environment. It can be illustrated by comparing fast, inmemory storage systems like Redis [50] with slower, more spaceefficient systems like HBase[54], or by examining different configurations of the same database system.
The break-even interval analysis, derived from our adapted FiveMinute Rule, optimizes data placement by comparing data access intervals to the break-even point. It guides the choice between fast, performance-oriented storage and slower, space-efficient options.
Our Cost Optimal Theorem extends this concept by determining the best overall storage settings for a given workload, aiming to minimize system cost by balancing performance cost $( P C )$ and space cost (𝑆𝐶). It considers the entire system configuration, including multiple storage tiers and complex workload characteristics, to guide high-level design and resource allocation decisions.
Our comprehensive experiments assessed the performance and cost efficiency of our system against leading open-source key-value databases using this integrated approach. The evaluation results on real application workloads demonstrate the model’s effectiveness in guiding cost-saving database system designs. The case study in Section 6.5 showcases how the break-even interval, derived from the Five-Minute Rule, helps choose the most cost-effective TierBase configuration within the framework established by the Cost Optimal Theorem.
# 5.2 Cost Analysis of Tiered Storage
Revisiting Equation 3, we can focus on optimizing the cost of the cache tier. In disaggregated storage systems with a sufficiently large storage pool, the storage tier cost is dominated by $S C$ when $M R <$ $S C _ { s t o r a g e } / P C _ { s t o r a g e }$ for skewed access patterns. As illustrated in Figure 2(b), this allows us to concentrate on the cache tier cost:
$$
\begin{array} { c } { C o s t _ { c a c h e } = \operatorname* { m a x } ( P C _ { c a c h e } + P C _ { m i s s } \times M R , } \\ { S C _ { c a c h e } \times C R ) } \end{array}
$$
To find the optimal cost, we consider the relationship between the Miss Ratio $( M R )$ and the Cache Ratio $( C R )$ , typically represented by the Miss Ratio Curve[29], where $M R = f ( C R )$ , and $f$ is a nonincreasing function.
Theorem 5.1 (Optimal Cache Tier Cost). The optimal cost for the cache tier of a tiered storage is achieved when the performance cost equals the space cost:
$$
P C _ { c a c h e } + P C _ { m i s s } \times f ( C R ^ { * } ) = S C _ { c a c h e } \times C R ^ { * }
$$
where $C { R } ^ { * }$ is the optimal cache ratio.
Let $C { R } ^ { * }$ be the optimal cache ratio that minimizes the overall cost of the cache tier:
$\begin{array} { r } { C o s t _ { c a c h e } ^ { * } = \operatorname* { m i n } _ { 0 \leq C R \leq 1 } \operatorname* { m a x } ( P C _ { c a c h e } + P C _ { m i s s } \times f ( C R ) , S C _ { c a c h e } \times 1 ) } \end{array}$ $C R )$ )
Define two functions:
Note that $g ( C R )$ is non-increasing (as $f ( C R )$ is non-increasing) and $h ( C R )$ is increasing linearly with $C R$ .
The optimal cost occurs at the intersection of these two functions. To see why, consider:
1. If $g ( C R ) \ : > \ : h ( C R )$ , we can decrease $C R$ to reduce cost. 2. If $g ( C R ) < h ( C R )$ , we can increase $C R$ to reduce cost. 3. The minimum cost occurs when neither of these improvements is possible, i.e., at $g ( C R ) = h ( C R )$ .
Therefore, the optimal cache ratio $C { R } ^ { * }$ satisfies:
$$
P C _ { c a c h e } + P C _ { m i s s } \times f ( C R ^ { * } ) = S C _ { c a c h e } \times C R ^ { * }
$$
This equality represents the balance point where performance cost (including miss penalty) equals space cost, minimizing the overall cache tier cost. This theorem provides a principle for optimizing tiered storage systems: the most cost-effective configuration is one where the cache tier’s performance cost (including the cost of cache misses) equals its space cost. This balance point represents the optimal trade-off between performance and space for the cache tier.
In practice, estimating the exact $C { R } ^ { * }$ is challenging, as $f ( C R )$ can be complex and highly dependent on specific workload characteristics. Nevertheless, this theorem serves as a valuable target for optimization efforts, guiding cache size tuning. To address this challenge, we propose an evaluation-based approach in Section 5.3 to find the optimal $C R$ .
Furthermore, this analysis provides a way for determining when to use tiered storage over single-tier solutions and how to optimally configure the cache tier within a tiered storage system, enhancing our understanding of cost-effective storage design in modern disaggregated environments.
# 5.3 Cost Optimization Framework
In order to speedup the cost optimization procedure, we develop a sample-based method to calculate the cost for various configurations with regard to real-world workloads. The method involves the following steps:
(1) Sample: Sample data snapshots and record a representative period of workload from production instances.
(2) Load: Load the sampled data snapshot into a testing instance with a specific configuration.
(3) Replay: Replay the recorded real-world key-value operation traces on the testing instance, measuring and collecting the maximum performance and maximum space utilization for the workload.
(4) Calculation: Calculate the workload cost based on measurements.
(5) Iteration: Repeatedly perform steps 2-4 with different configurations to approach cost-optimal configuration.
This method simulates key-value store behavior under realistic conditions, providing accurate performance and cost assessments. By using real workload traces and access patterns, we obtain a precise representation of system performance. This method enables comprehensive exploration of the configuration space, ensuring identification of the most cost-effective configuration for each individual workload.
While the configuration space for cost optimization can be large, in practice, we focus on the most impactful parameters specific to the workload, guided by user input and prior experience. This approach narrows the candidate configurations substantially. Optimization computations are performed offline and parallelized to accelerate the process, ensuring that the time invested is minor compared to the long-term cost savings achieved. To address the cold start problem, we initialize the system with configurations based on user inputs and best practices.The detailed optimization results and cost savings are analyzed with case studies in Section 6.
# 6 EXPERIMENTS
# 6.1 Settings
The experimental evaluation is conducted on the following servers. For the cache tier, we use three servers with dual Intel Xeon Platinum 8263C CPUs at 2.50 GHz, 192GB DRAM, and eight 128GB Intel® Optane™ DCPMM 100 series(App Direct Mode), while for the storage tier, we use three servers with Intel Xeon Platinum
8163 CPU at $2 . 5 0 \mathrm { G H z }$ , 64GB DRAM, and 8TB SSDs. For all servers, NUMA is enabled and Hyper-threading is disabled for linear performance scaling and to avoid contention between logical cores. The system and software environments are as follows: Linux kernel version 4.19.91, OpenJDK version 1.8.0, GCC version 10.2.1, Dragonfly 1.23.0, Redis 6.0.17, Cassandra 4.0.11, HBase 2.4.1, and Memcached 1.6.20.
YCSB (Yahoo! Cloud Serving Benchmark) [15] is utilized which encompasses the load and run phase. Our experiments utilize two distinct default workloads from YCSB: Workload A, characterized by a predominance of write operations, and Workload B, distinguished by a higher proportion of read operations. We have adapted YCSB to accept user-specified datasets for data insertion, as opposed to the default use of random strings as values. In particular, the Cities dataset [1] is designated as the default for our tests. We deploy 16 YCSB threads for single-thread cases and 48 for multi-thread cases.
We focus our comparisons on widely-used production systems to evaluate TierBase’s performance and cost-effectiveness in real deployment scenarios, which have essential features implemented, such as full failure recovery. These systems can reflect the performance and cost accurately in real world applications. We selected Redis [50], Memcached [31] and Dragonfly [17] for caching system comparison. Redis and Memcached are established caching systems, extensively utilized across diverse applications. Dragonfly is a newly introduced, high-performance caching system. For databases with persistence, we select Redis with AOF, Cassandra [35], and HBase [54] as competitors. Specifically, Redis-AOF ensures data durability by logging writes to disk, which can impact performance due to the additional disk I/O overhead.
Instances represent the fundamental units of resource allocation. In systems operating in single-thread mode, each instance is allocated 1 CPU core and 4GB of memory. In multi-thread mode and databases with persistence, the allocation for each instance increases to 4 CPU cores and 16GB of memory. These specifications are common instance configurations used by Ant Group.
# 6.2 Performance Evaluation
6.2.1 Caching Systems. Figure 7 illustrates the performance comparison of four caching systems: TierBase, Redis, Memcached, and Dragonfly. During this evaluation, TierBase was tested in its default mode without any cost optimization techniques enabled. We evaluate the performance for single-thread and multi-thread mode separately and report the throughput and 99th percentile tail latency respectively.
In single-thread mode (Figures 7(a) and 7(b)), TierBase and Redis exhibit similar performance, outperforming Memcached and Dragonfly across all workloads. This distinction arises because Memcached and Dragonfly are principally engineered for multi-thread environments, while in contrast, Redis is meticulously optimized for single-thread mode. TierBase maintains the lowest latency across most workloads. During the load phase, the latency of TierBase and Redis is significantly lower than that of Memcached and Dragonfly.
In multi-thread mode (Figures 7(c) and 7(d)), Memcached and Dragonfly surpass TierBase and Redis. Memcached combines a streamlined caching approach with a lightweight threading model,
Figure 7: Performance of single-thread and multi-thread mode
Figure 8: Performance of different data persistence mechanisms minimizing inter-thread contention and enhancing speed. Dragonfly benefits from a shared-nothing architecture for threads, boosting its parallel processing. Although TierBase’s per-instance throughput is slightly lower in multi-thread mode, it excels in real-world scenarios through efficient scaling. Figure 7(c) shows that 4 singlethreaded TierBase instances outperform a single multi-threaded instance of Memcached or Dragonfly using equivalent resources, leading to a lower performance cost. In practice, scaling out across multiple instances meets performance requirements, and TierBase’s cost-effectiveness makes this economically viable. Thus, TierBase effectively balances performance and cost, aligning with the demands of large-scale applications.
6.2.2 Persistence Mechanisms. Figure 8 outlines TierBase’s performance with four persistence mechanisms in single-thread mode: WAL, WAL while using PMem as the persistent ring buffer(WALPMem), write-back and write-through introduced in Section 4.1.
For the throughput performance(Figure 8(a)), write-back significantly outperforms write-through in the load phase by $9 2 . 5 2 \%$ , due to its deferred writing mechanism that reduces immediate write operation overhead. In various read-write workloads, write-back’s throughput is about twice of write-through, demonstrating its efficiency in write-intensive tasks. The WAL-PMem mode, while not as effective as write-back, still surpasses write-through, indicating benefits by using persistent memory. However, WAL mode outperforms WAL-PMem due to its use of SSDs and asynchronous disk flushes every second, while WAL-PMem synchronizes to PMem per transaction, potentially incurring higher synchronization overhead.
TierBase: A Workload-Driven Cost-Optimized Key-Value Store [Extended Version]
Table 2: Evaluation of compression techniques
In terms of latency (Figure 8(b)), write-through experiences the highest latency due to its immediate write to storage, being around 3 times higher than write-back in the load phase. Write-back, with its deferred write, significantly lowers latency, particularly in writeheavy scenarios. WAL-PMem offers a middle ground, with lower latency than write-through but higher than write-back.
# 6.3 Features Evaluation
6.3.1 Compression. As introduced in 4.2, TierBase has implemented pre-trained based compression strategies to mitigate memory utilization. We evaluate effectivness of the pre-trained based compression methods:(i) Zstd-b, (ii) Zstd-d, (iii) PBC. Basic Zstd[14] (denoted as Zstd-b) which is without pre-trained dictionaries, Zstd with pre-trained dictionaries(denoted as Zstd-d) and Pattern-Based Compression[59] (PBC). We also include the raw data without compression(Raw) as the bar of throughput.
As shown in Table 2, the pre-trained based methods, PBC and Zstd-d, consistently outperform Zstd-b. It demonstrates that the pre-trained mechanism, by prior analysis and storage of common data patterns, is able to enhance the compression ratio. Notably, PBC consistently achieves higher compression ratios than Zstd. In KV datasets, the distinctive patterns within the values lead to a more significant improvement in PBC’s compression performance. Specifically, PBC surpasses Zstd-d by $4 3 \%$ and Zstd-b by $7 4 \%$ in average compression ratios. These enhanced ratios contribute to PBC’s sustained superiority in overall compression performance.
In the evaluation of throughput. all three compression methods perform worse compared to Raw, especially in SET operations. Among the three compression mechanisms tested, Zstd-d demonstrated the highest performance. In public datasets, the throughput of Zstd-d was approximately twice as high as PBC and about 3.5 times higher than Zstd-b. This notable difference is primarily due to the higher computational overhead in PBC’s compression process which involves pattern matching and string encoding. Meanwhile, Zstd-b, without pre-trained dictionary, necessitates online data analysis during compression, which hampers throughput. In contrast, when considering average GET operation throughput, PBC not only surpasses Zstd-d but also nearly parallels the velocity of Raw. Zstd-b still demonstrates the least favorable performance.
Figure 9: Performance of TierBase and Redis for workload boosting The implementation of compression introduces a calculated trade-off, modestly impeding throughput performance in exchange for substantial memory conservation, thereby augmenting the judicious utilization of resources. Based on the pre-trained compression strategy, it is possible to preserve common patterns in the data in advance, avoiding real-time computation during compression and decompression, thereby enhancing the efficiency and effectiveness of the compression process. The Section 6.4 will provide an in-depth exploration of the multifaceted benefits engendered by compression.
6.3.2 Elastic threading. In order to show the effect of elastic threading, we simulate a scenario of workload burst to test the system’s adaptability to sudden workload influxes. At the beginning, the workload maintenance a low QPS(20,000). Then, we increase in the number of client requests to simulate a surge in workload at the 15 second. This state lasts for 30 seconds and finally the workload returned to normal, the low QPS state. We represent single-thread, multi-thread and elastic threading modes as 𝑠, 𝑚 and $e$ (e.g. TierBases, TierBase-m, TierBase-e).
As shown in Figure 9, under normal conditions, all databases manage well, with Redis showing some jitter in multi-thread mode. Upon increased workload, systems’ throughput reaches their limits, and latency rise. In single-thread mode, TierBase has the highest latency, followed by Redis. However, with elastic threading, TierBase initially faces higher latency but quickly adjusts to have the lowest latency, equal to its multi-thread mode performance. In terms of throughput, both TierBase and Redis hit 120,000 QPS in singlethread mode, with Redis peaking at 180,000 QPS and TierBase at over 240,000 QPS in multi-thread mode.
In summary, the experiment result indicates that elastic threading allows TierBase to operate in a cost-saving single-thread mode under normal scenario, while automatically switching to a multithread mode to achieve higher throughput performance during workload spikes, without the need for manual intervention.
# 6.4 Cost Evaluation
In the following two subsections, we evaluate the cost-effectiveness of TierBase and compare it with other representative systems under different configurations and synthetic workloads using the spaceperformance cost model.
6.4.1 Evaluation setup. We employ the framework introduced in Section 5.3 to evaluate synthetic workloads generated by YCSB using public datasets for write operations. Our simulated workload comprises 10GB data with 80,000 QPS for caching systems, and 10GB data with 40,000 QPS for databases with persistence. While our cost model is applicable to various workloads, we selected these specific parameters as a representative miniature of typical workloads at Ant Group.
Figure 10: Cost of caching system
Figure 11: Cost of database with persistence
The cost unit presented is relative, based on a standard container with 1 CPU core and 4GB of memory. All systems are tested within this standard container on a single instance, with $C P Q P S$ and $C P G B$ calculated accordingly. This evaluation methodology allows us to assess the cost-effectiveness of various systems under controlled conditions.
For systems employing replication (e.g., Redis with AOF, TierBase with WAL, and TierBase with write-back policy), we implement a master-replica setup in the cache tier to ensure data reliability. This configuration effectively doubles the cache tier cost.
We denote single-thread, multi-thread, and elastic threading as 𝑠, $m$ , and $e$ respectively. TierBase-PMem is for PMem activation in TierBase, and TierBase-Zstd and TierBase-PBC for compression. Redis-AOF and TierBase-WAL denote Redis with AOF and TierBase with WAL, respectively. Write-through and write-back policies in TierBase are abbreviated as 𝑤𝑡 and $w b$ . Workloads with a cache ratio of 10 are labeled as $1 0 X$ .
6.4.2 Cost analysis for caching systems. Figure 10 presents the cost results for caching systems. The primary cost driver for caching systems is memory storage expenses. Memcached has the lowest storage cost, followed by Dragonfly, while Redis and TierBase without additional features have relatively higher storage costs.
In terms of performance costs, TierBase, Redis, and Memcached exhibit similar low costs in single-thread mode, while Dragonfly shows a higher performance cost. When elastic threading is enabled, TierBase demonstrates improved throughput, leading to a significant reduction in performance costs, nearly half that of singlethread Redis, by efficiently utilizing surplus CPU resources within the containers.
Furthermore, when TierBase employs PMem to extend memory, it achieves a substantial $6 0 \%$ reduction in storage costs compared to the base configuration, with minimal performance impact. This cost is considerably lower than that of Memcached. Activating compression in TierBase leads to an additional decrease in storage costs.
The results show that TierBase’s features can effectively reduce both performance and storage costs. Similar trends are observed across different workload settings, as illustrated in Figure 10(b).
6.4.3 Cost analysis for databases with persistence. Figure 11 presents the results for databases with persistence. The traditional key-value store like Cassandra and HBase are observed to have relatively high performance costs while the storage costs are notably low. For Redis with AOF and TierBase with WAL, both systems ensure data persistence and adopt a dual-replica strategy for data safety. These approaches result in lower performance cost but significantly higher storage costs.
TierBase demonstrates a good balance in terms of both performance and storage costs. On the one hand, its inherent characteristics as a caching system enable high throughput. On the other hand, its persistence mechanism does not require storing all data in memory, which contributes to its overall lower costs. It is important to note that under the write-back approach, where data is stored in duplicate copies, the storage cost is higher compared to writethrough. However, due to different data update characteristics, the write-back approach exhibits higher throughput in write-intensive scenarios, translating to lower performance costs. This advantage diminishes or even disappears in read-heavy scenarios.
Using PMem for data persistence is a cost-effective choice. Although its space cost is relatively higher compared to write-through and write-back, its performance cost is sufficiently low due to PMem’s near-memory speed.
# 6.5 Case Study
TierBase is extensively utilized across a wide range of scenarios at Ant Group, with over 3,000 applications leveraging its capabilities. These applications span various use cases, employing hundreds of thousands of CPU cores and several petabytes of memory. Due to space constraints, we will focus on two representative case studies in this paper.
# Case 1: User Info Service.
The User Info Service at Ant Group manages basic user profile data, serving numerous applications through a proprietary SDK with TierBase client. During peak hours on a typical day, this service handles approximately 500,000 updates and 16,000,000 reads per second, indicating a significantly read-heavy workload. Given the service’s primary focus on online users, high availability and reliability are of paramount importance.
6.5.1 Systems comparison. To assess the cost-effectiveness of various systems, we replayed a real business trace with all databases configured for dual-replica reliability. Figure 12(a) shows that inmemory stores like Redis, Memcached, and Dragonfly have low performance costs but higher storage expenses. TierBase, using compression, halves its original volume, significantly reducing costs compared to Redis. This is particularly advantageous in this readheavy scenario where performance cost is not primary. The tradeoff between performance and storage efficiency demonstrates TierBase’s adaptability to specific workload characteristics, optimizing overall cost while maintaining performance efficiency. Activating compression in TierBase yields a $6 2 \%$ cost reduction compared to TierBase-Raw, showcasing its effectiveness in balancing performance and storage requirements in read-heavy, availability-critical scenarios.
Figure 12: Cost of case study
Figure 13: Space-Performance Cost Trade-offs
6.5.2 Space-performance cost trade-offs. We demonstrate the tradeoff using our proposed cost model under use case 1. Figure 13 shows the workload’s space cost significantly exceeds its performance cost. We employ compression techniques, sacrificing some performance to save substantial space. We tested Zstd compression levels -50, -10, 1, 15, and 22, both with and without a dictionary. As shown in Figure 13(a), higher compression levels increase space savings but have an upper bound, beyond which compression ratio gains become marginal while performance costs grow considerably. Practically, we may select compression level 1 for better performance tolerance. Additionally, pre-trained compression yields more substantial cost savings compared to compression without pre-training.
We also evaluate the cost-effectiveness of TierBase with writeback policy using four cache ratios ranging from 2X to 5X. As shown in Figure 13(b), Higher cache ratios result in lower space costs but higher performance costs.
The result reveals that a cache ratio of 5X approximately achieves the optimal balance between performance and storage costs as predicted by our model. The results validate our cost model for accurately guiding cost optimization, confirming that the achieved effects align with expectations.
6.5.3 Break-even interval. Furthermore, we calculate several sets of break-even intervals between the fast and slow storage configurations on TierBase based on the analyses in Section 5.1. As shown in Table 3, if the average access interval for a key in the workload is less than 98 seconds, the default TierBase is the most cost-effective choice. For access intervals between 98 and 264 seconds, TierBase with PMem mode is recommended. When the average access interval exceeds 264 seconds, employing compression becomes the optimal solution. By collecting the average access interval for a key in the real workload, we observe that it exceeds 1018 seconds. Consequently, TierBase is employed as a single-layer caching system, leveraging pre-trained compression (PBC) to optimize memory usage. This approach achieves a $2 5 \%$ compression rate for values and realizes cost savings of $5 0 \%$ , which is significant considering the hundreds of thousands of CPU cores utilized in this case. Table 3: Break-even interval between different configurations.
Although write-through caching could potentially be applicable in this scenario and achieve a $6 0 \%$ cost reduction, the decision to prioritize compression over write-through caching is driven by the client’s stringent requirements for low latency and high stability when serving online requests. TierBase’s adaptability allows it to configure for aligning with the specific needs and priorities of each client, ensuring an optimal balance between cost efficiency and performance in real-world scenarios.
Case 2: Capital Reconciliation. TierBase is also deployed in the capital reconciliation business at Ant Group. As a risk control scenario focused on financial auditing and verification, the capital reconciliation business is particularly sensitive to costs. During peak shopping seasons, the overall QPS for capital reconciliation can reach tens of millions. TierBase’s write-through and writeback caching strategies are selectively employed based on different scenarios. For this case study, we choose one of the main scenarios where the read and write operations are close to a 1:1 ratio. In this scenario, data from different channels is written into TierBase and then read out by the reconciliation system for verification.
Figure 12(b) illustrates the cost breakdown for this scenario. Disk-based key-value stores like HBase and Cassandra exhibit low space and performance costs. When TierBase is configured with write-through, performance costs are lowered by $3 5 \%$ compared to Cassandra. In high-throughput scenarios, enabling write-back mode further enhances performance. With the same configuration, TierBase can achieve $2 . 6 \mathrm { x }$ the performance of Cassandra. Overall, TierBase reduces costs by at least $3 7 \%$ compared to both Cassandra and HBase. Additionally, it cuts costs by $70 \%$ compared to TierBase default configuration.
The observations in the capital reconciliation case study reveal that recent data is frequently accessed in the cache, while longterm data is occasionally retrieved. Online statistics shows that
TierBase with write-through mode achieves a cache hit rate of approximately $8 0 \%$ , with only $1 \%$ of the hottest data stored in the cache tier. This demonstrates the effectiveness of TierBase’s cachestorage disaggregation in significantly reducing costs for workloads with temporal access skewness.
# 7 RELATED WORK
# 7.1 Key-Value Stores
Key-value stores, a type of NoSQL databases, play a crucial role in Internet applications. Diverging from traditional relational databases, they provide fast, scalable, and efficient data access essential for a wide range of online applications.
Memcached [31] is a distributed caching system enhancing web application performance by minimizing database load and optimizing memory across servers. Redis [50], unlike Memcached’s focus on caching, is a multifunctional in-memory database with a wide variety of data types, extended functionality, and persistence, suitable for complex, high-performance applications. KeyDB [34] enhances Redis with multi-threading. Dragonfly [17] offers compatibility with Redis and Memcached, using multi-threaded, shared-nothing architecture. Etcd [47] is crucial for Kubernetes, providing consistent configuration across clusters. EVCached [43], ElastiCache [52], and Azure Cache for Redis [41] are managed caching solutions, improving cloud applications by reducing database loads and enabling quick data access.
# 7.2 Cost Optimization
7.2.1 Cost model. Constructing a reasonable cost model is crucial for the optimization of database costs. Total Cost of Ownership (TCO)[42] refers to the sum of all related costs throughout the entire lifecycle of an item or service, including purchasing, operating, maintaining, and disposing of it. This concept is widely applied across various fields. Some cost models [27, 38, 40] primarily focus on estimating or optimizing query execution time or overall system performance. They consider factors like memory access patterns, cache behavior, and data structure efficiency to predict or improve query processing speed. Other cost models [36, 46] explicitly consider financial costs, such as the cost of operating cloud resources or the cost differences between different types of storage medium. However, these methods often struggle to adequately represent the complex relationships between configurations and costs, especially when considering diverse workload characteristics. As a result, they are not directly applicable to our study’s objective.
7.2.2 Optimization strategy. Contrasting with TierBase, which concentrates on a comprehensive cost model that balances performance and storage expenses, some studies [16, 25, 32, 57] achieve more effective resource allocation or configuration parameter adjustments by monitoring workload variations within databases, thereby enhancing resource utilization and reducing costs. Cosine [11] actively tailors its storage engine architecture to optimize costs, adapting to workload demands, cloud budgets, and specific performance objectives.
7.2.3 Compression. In database technology, Lempel-Ziv(LZ) algorithms are widely used for data compression to improve storage and transfer efficiency. TiDB [30] and Hadoop [54] use LZ4, while
LevelDB [20] uses Snappy [21]. Zstandard (Zstd) [14] is chosen by Facebook’s RocksDB [18] and Redshift [3] for its efficiency. Apache Cassandra [35] also uses compression to reduce disk space use. For memory efficiency, SlimCache [33] and zExpander [58] compress data in caches, and COLLATE [28] applies lightweight compression in in-memory databases.
7.2.4 Thread management. Thread management techniques play a pivotal role in DBMSs by optimizing the handling of concurrent client requests and maximizing resource utilization. In systems like [18, 44, 45, 53], dynamic adjustment of thread pools is utilized to efficiently handle fluctuating workloads. On the other hand, databases such as [31, 50] maintain a fixed thread count, ensuring consistent resource allocation. [13, 24, 26, 48], use coroutines in certain scenarios for managing concurrent requests, balancing thread overhead with concurrency. [49] discusses thread management in the NUMA-aware task scheduling. [2] merges these approaches, applying dynamic thread pools for general requests and fixed threads for specific tasks.
7.2.5 Persistent memory. Non-volatile memory, also known as persistent memory is a type of memory that keeps its data even when the power is turned off. Many current efforts are integrating PMem into storage systems [5, 9, 12, 19, 39, 55, 60] to enhance performance. However, PMem has its own set of challenges and issues. Some studies [10, 37, 56] are exploring how systems should navigate between PMem and DRAM, addressing concerns about optimal choice and management. | In the current era of data-intensive applications, the demand for
high-performance, cost-effective storage solutions is paramount. This paper
introduces a Space-Performance Cost Model for key-value store, designed to
guide cost-effective storage configuration decisions. The model quantifies the
trade-offs between performance and storage costs, providing a framework for
optimizing resource allocation in large-scale data serving environments. Guided
by this cost model, we present TierBase, a distributed key-value store
developed by Ant Group that optimizes total cost by strategically synchronizing
data between cache and storage tiers, maximizing resource utilization and
effectively handling skewed workloads. To enhance cost-efficiency, TierBase
incorporates several optimization techniques, including pre-trained data
compression, elastic threading mechanisms, and the utilization of persistent
memory. We detail TierBase's architecture, key components, and the
implementation of cost optimization strategies. Extensive evaluations using
both synthetic benchmarks and real-world workloads demonstrate TierBase's
superior cost-effectiveness compared to existing solutions. Furthermore, case
studies from Ant Group's production environments showcase TierBase's ability to
achieve up to 62% cost reduction in primary scenarios, highlighting its
practical impact in large-scale online data serving. | [
"cs.DB",
"cs.DC"
] |
# 1 Introduction
CodeBase Generated Perf Test “optimize SWE Agent llama_cpp for Is the model patch llama_cpp/ llm = Llama("qwen2-7b.gguf") this workload” correct and examples/ prompts $\mathbf { \sigma } = \mathbf { \sigma }$ load("sharegpt.json") match or exceed
setup.py reef euxrpnerlilme(nptr(o)m:p
ts) Model Patch human commit’s performance? def correct(ref, cur):
hciostmomryit foar rs,ecr rn["ztiopk(sr"e]f ==c cr[)":t
oks"] src/ggml-icmppul..ch Test ID RefTeirmeence MTiomdel 1 √ 2.17s 2.94s
Human Commit 218d361 2 √ 1.9s 1.62s LLM
Faster IQ1 mul_mat_vec using BMI2 Reference Patch 3 √ 2.74s 3.10s
instructions (#12154)
src/ggml-cpu.c +216 -86 human commit src/ggml-cpu.c +216 -86 serves as target
src/cpu-quants.c +97 -12 optimization src/cpu-quants.c +97 -12 Repo Evaluate Multi- Precise SWEB 100
Benchmark Level Runtime Lingual Specification Multi 80 73.3
HUMANEVAL ✗ ✗ ✗ ✓ SWEB
EVCACLOPERF ✗ ✓ ✗ ✓ ✓ Verified 60 56.8
LIVECODEBENCH ✗ ✗ ✗ ✓ Multi-SWEB S
KERNELBENCH ✗ ✓ ✗ ✓ Mini 40
SWEB-VERIFIED ✓ ✗ ✗ ✗
SWEB-MULTI ✓ ✗ ✓ ✗ GSO 20
MULTISWE-MINI ✗ 3.6
GSO (Ours) ✓ ✓ ✓ ✓ 101 10² 103 01 Oracle Patch #LoC (log scale) LCB SWEB-V GSO
Figure 2: Benchmark Feature Comparison and Performance Gap. Left: Depicting how GSO improves over existing benchmarks across key dimensions. Middle: Distribution of oracle LoC changes across benchmarks, showing GSO solutions require over $4 { \cdot } 1 5 \mathrm { x }$ larger edits than existing benchmarks. Right: Performance comparison of O4-MINI across LCB (algorithmic puzzles), SWEBENCH-VERIFIED (repository-level bug-fixes), and GSO depicting the performance gap on optimization tasks.
High-performance software is critical for modern computing systems, from data analytics frameworks to machine learning infrastructure. Developing such systems demands specialized expertise in algorithmic optimization, hardware-aware programming, performance analysis, and reasoning across multiple layers of the software stack. The complexity of these tasks is evident in productioncritical systems like VLLM [Kwon et al., 2023], HPC [Bradski, 2000], and VERL [Sheng et al., 2024], where teams dedicate substantial efforts to iterative and continuous maintenance over long development cycles. Simultaneously, SWE-Agents are gaining rapid traction in software development, demonstrating remarkable results on simple bug-fixing tasks [Jimenez et al., 2024]. This has also spurred excitement in adapting LLMS to aid in automating research tasks themselves, for example improving deep learning kernels [Ouyang et al., 2024]. In this work, we study the question – “Can LLM agents aid in the development of high-performance software?”. To answer this, we introduce GSO, a benchmark for evaluating SWE-Agents on challenging software optimization tasks.
To create GSO, we develop an automated pipeline that generates performance tests and runs them across a repository’s commit history to identify substantial optimizations discovered by expert developers. After careful manual curation, we extract 102 challenging tasks across 10 codebases, spanning diverse domains and languages including Python, C, and SIMD. Each task consists of a codebase, performance tests exercising real-world workloads, and a target optimization from expert developer commits. SWE-Agents receive a performance test as task specification and must produce an optimization patch that improves runtime efficiency while maintaining correctness. We evaluate these patches using our OPT ${ \ @ K }$ metric, providing reliable assessment in a machine-agnostic manner. Rather than naively measuring machine-dependent speedups, we assess whether model-generated patches can consistently match or exceed the performance of human expert optimizations.
Our benchmark evaluates the capabilities needed for high-impact optimization work, tracking usefulness for real-world high-performance software development. Particularly, problems in GSO evaluate challenging systems engineering tasks, including optimizing Pandas operations, Pillow image or video processing operations (like GIF animation), and LLaMA-CPP model inference runtimes.
Code optimization uniquely bridges algorithmic reasoning and systems engineering, providing a challenging yet well-specified evaluation domain for LLM-based programming agents. Unlike bugfixing SWE benchmarks that rely on potentially ambiguous natural language specifications [Aleithan et al., 2024], performance tests natively provide precise specifications for correctness and efficiency. Our tasks require substantial code changes, with gold-patches containing $4 { \cdot } 1 5 \times$ more lines edited than previous benchmarks (Figure 2-middle). We evaluate leading LLMS on GSO using the state-ofthe-art OPENHANDS agent framework [Wang et al., 2024b] (Section 3). Our evaluation reveals that most agents struggle with the benchmark, achieving less than $5 \%$ success rate measured by $\mathrm { { O P T } } @ 1 \$ , with test-time compute also providing only modest improvements (OPT $@ 1 0$ remaining around $1 5 \%$ ).
To understand why SWE-Agents struggle with GSO, we perform a qualitative analysis of agent behavior and failure modes (Section 5). First, agents struggle with low-level languages, often avoiding them entirely or introducing fatal errors.Second, agents resort to superficial optimizations (“lazy optimizations”) like compiler flag manipulation or input-specific fast-paths insertion, often making bizarre non-idiomatic code changes. Third, localization remains challenging - agents frequently misdiagnose the root cause of performance issues, leading to ineffective optimization attempts.
SIMD/Vectorize
Caching/Memoise Lazy evaluation Memory Layouts Parallelism
String-search algos Examples of algorithms: Scatter/Gather ·Boyer-Moore-Horspoolsearch CPU-Ft.dispatch ·Crochemore-Perrin Two-Way search Branch-elimination ·Aho-Corasick automaton
Table-driven lookup ·Monotonic two-pointer merge join ·Quickselect /Introselect
Select /Sort kernels ·AP/range detection Binary search ·Bitmap direct-address lookup O(n) merge-joins Data-sharding 0 10 20 # Commits
The key contributions of this paper are: 1) An automated pipeline leveraging test-generation and execution information for generating software optimization tasks from real-world codebases, resulting in the GSO benchmark. 2) Evaluation of leading SWE-Agents on GSO, revealing a substantial performance gap in systems engineering tasks. 3) Qualitative analysis of agent behavior and failure modes with directions for future research. Given the substantial performance gap, we believe considerable progress in reasoning capabilities and SWE-Agents will be required to close the gap and hope GSO serves as a valuable resource for future LLM-based programming agent research.
# 2 GSO
Global Software Optimization (GSO) is a benchmark for evaluating SWE-Agent capabilities for aiding in high-performance software development. Each task consists of an initial codebase snapshot, performance tests measuring runtime and correctness, a build script for environment setup, and a reference human commit establishing the target performance threshold. The goal is to generate a patch that improves the performance of the codebase while maintaining functional correctness.
# 2.1 Task Formulation
Input. The agent receives the initial codebase, build script, and a performance test serving as input and is tasked with correctly improving the runtime on the given workload in a generalizable manner.
Output. The agent produces a unified patch that implements the required performance improvements.
Evaluation. We apply the generated patch and execute all associated performance tests. Success requires that the patch (1) applies cleanly, (2) passes all correctness checks, and (3) matches or exceeds the target human commit’s performance.
# 2.2 Benchmark Construction
Unlike prior benchmarks that rely on manually written issues and test cases, we develop an automated pipeline to construct GSO tasks from GITHUB repositories. Our key insight is that software optimization problems can be identified by executing tests across commit boundaries and measuring performance improvements with minimal human curation. Therefore, we use LLMS to identify performance-related commits, generate performance tests, and execute them to identify optimization tasks. Particularly, we use the following two-stage pipeline:
Stage I: Identifying Performance Improving Commits. We scan popular open-source GITHUB repositories using an LLM-based judge with code-change heuristics to identify performance-related commits. For each candidate, we extract context including relevant files, commit messages, linked issues, pull requests, and endpoints exercising the affected code. This efficient filtering process handles large commit volumes while gathering the rich context needed for test generation.
Stage II: Generating and Executing Performance Tests. We generate performance tests via execution-based rejection sampling using an LLM prompted with the commit context. Tests exercise the codebase with real-world workloads, e.g., generating completions from qwen-7b for the sharegpt dataset using llama-cpp. They measure runtime, and verify equivalence between the pre- and postcommit codebase states via assertions on the outputs. We retain commits showing significant performance improvements across multiple test cases. See Appendix C.1 for further details.
Final Curation. We perform a careful manual review of the automatically collected candidates to ensure the benchmark’s quality and diversity. We remove instances with weak tests or reproducibility issues, selecting problems spanning various optimization techniques, difficulty levels, and application domains. Additional curation details and examples of generated tests are in Appendices C.2 and C.3.
# 2.3 Designing $\mathbf { o r r } _ { p } @ K$ Metric
Evaluating code optimization presents unique aggregation challenges absent in traditional code generation benchmarks. Existing metrics fail to handle two critical issues: (1) different tasks have varying baseline performance levels, making cross-task comparison and aggregation difficult, and (2) within tasks, tests with disparate speedup magnitudes can considerably skew aggregate metrics.
Robust Speedup Calculation. Prior work aggregates per-test speedups using geometric mean, but this approach is vulnerable to outliers. A model achieving speedups of [0.1, 1000] across two tests yields a geometric mean of 10, despite degrading performance on one test. In Section 5, we show that agents indeed perform such optimizations and thus can “game” the geometric mean. Drawing from systems optimization literature [Jacob and Mudge, 1995], we compute speedup using the harmonic mean of individual test speedups which is more robust to extreme positive outliers. Let si = T (C21,i) denote the speedup on test $i$ , where $C _ { 1 }$ and $C _ { 2 }$ represent two codebase states and $T ( C , i )$ denotes runtime on test $i$ . We then define the overall speedup as the harmonic mean:
$$
S ( C _ { 1 } , C _ { 2 } ) = { \frac { n } { \sum _ { i = 1 } ^ { n } { \frac { 1 } { s _ { i } } } } } = { \frac { n } { \sum _ { i = 1 } ^ { n } { { \frac { T ( C _ { 2 } , i ) } { T ( C _ { 1 } , i ) } } } } }
$$
We discuss these characteristics of our metric and other potential metrics in Appendix E.
Relative Performance Evaluation. To enable cross-task comparison, we evaluate model patches against human-authored optimization targets rather than absolute speedups against the original codebase. For each task, we measure whether the model achieves performance comparable to expert developers. Thus, we measure the speedup against the human target as $S ( C _ { h } , C _ { a } )$ , where $C _ { h }$ is the codebase state from the human target optimization and $C _ { a }$ is the codebase after applying the model’s patch. For each task, we define success using both performance and correctness criteria:
$$
{ \mathrm { O P T } } _ { p } = \left\{ { \mathrm { t r u e } } , \quad { \mathrm { i f } } \ S ( C _ { h } , C _ { a } ) \geq p \ { \mathrm { a n d } } \ { \mathrm { c o r r e c t } } ( C _ { a } ) \right.
$$
The first criterion ensures that the model’s patch achieves at least $p$ fraction of the human speedup.
The second criterion (correct $\left( C _ { m } \right)$ ) ensures functional equivalence through test assertions.
Final Metric Definition. We compute $\mathrm { O P T } _ { p } @ K$ as the fraction of tasks where at least one successful solution exists among $K$ attempts:
$$
\operatorname { O P T } _ { p } \circledast K = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \mathbb { 1 } ( \exists k \in [ K ] : \operatorname { O P T } _ { p } )
$$
We estimate confidence intervals following established methods for pass ${ \mathfrak { Q } } \mathbf { K }$ metrics [Chen et al., 2021, Lightman et al., 2023]. Our $\mathrm { O P T } _ { p } @ K$ metric provides machine-independent assessment by comparing against human baselines rather than absolute speedups. While raw speedups vary
significantly across machines (Appendix D), the relative evaluation ensures consistent assessment across different hardware configurations. Finally, we denote $\operatorname { O P T } @ K$ as the $\operatorname { O P T } _ { 0 . 9 5 } \textcircled { a } K$ metric that uses a $9 5 \%$ threshold for evaluating success against the human target.
# 2.4 Distinctive Features of GSO
Precise task specification. GITHUB issues provide ambiguous specifications, especially for complex software engineering tasks [Aleithan et al., 2024]. GSO employs performance tests as specifications that unambiguously define optimization targets, enabling rigorous evaluation.
Unifying algorithmic coding with real-world SWE. Code LLM research is divided across isolated but algorithmically-focused benchmarks, and simple bug-fixing based SWE benchmarks. GSO bridges these two domains by integrating algorithmic challenges with real-world software tasks.
Diverse tasks spanning system boundaries. ${ \approx } 6 0 \%$ of tasks demand non-Python modifications across five programming languages, reflecting production environments where performance-critical components leverage systems languages beneath high-level interfaces (Figure 2-right).
Challenging tasks via strong human targets. Each task centers on human-authored commits averaging 108 lines, establishing demanding optimization targets requiring sophisticated code comprehension and algorithmic reasoning. Figure 9a shows the LoC distribution for our target commits.
Unbounded performance measurement. Software optimization inherently enables unbounded performance improvements through identification of previously unexplored bottlenecks. Speedups thus serve as a critical secondary metric for quantifying exceptional performance beyond human optimization targets. Since task-specific factors can skew raw speedup metrics, we establish OPT ${ \ @ K }$ as our primary metric while providing comprehensive speedup analysis in the appendix.
Evading contamination. Contamination represents a fundamental concern for agent benchmarks, particularly with real-world codebases potentially present in pretraining data. Our speedup metric provides a continuous signal that systematically detects potential contamination between human and model patches. We posit that models substantially outperforming human-written patches demonstrate generalization capabilities and thus can transcend contamination concerns.
# 3 Evaluation Setup
Machine Configuration. We use Docker to containerize the task environment for each task in GSO. The initial codebase is cloned and installed into a local environement in the container before providing it to the agent. All tasks are run on a single Google Cloud n2-standard-64 VM (64 vCPUs, 256 GB Memory). While raw speedups may vary across machines, we empirically find that measuring $\operatorname { O P T } @ K$ is resilient to machine variations, provided each task gets sufficent resources (Appendix D)
Agent Scaffold. We use OPENHANDS [Wang et al., 2024b] (CodeActAgent-v0.35.0), as our common agent scaffold for all models and experiments. The scaffold provides access to a file-editor tool and a bash terminal tool to the agent to perform code changes and execute commands. To support lengthy and frequent codebase rebuilds (in the case of C or $^ { \mathrm { C + + } }$ code changes), we configure the agent with a 3-hour time limit per task and a 20-minute timeout per step. Our task-specific prompt instructs the agent to optimize the runtime of the specification performance test and also contains the build and test commands. See Appendix G.1 for the complete agent prompts and details.
Models. We evaluate GPT-4O, O3-MINI, O4-MINI, CLAUDE-3.5-V2 (referred to as claude-3.6), CLAUDE-3.7, and CLAUDE-4.0. Our experiments focus on two settings: OPT $\boldsymbol { \mathcal { Q } } \boldsymbol { 1 }$ (Section 4.1) and inference-time scaling (Section 4.2). For OPT $\ @ 1$ , we sample 3 rollouts (trajectories) at temperature $T = 0 . 1$ . For inference-time scaling (Section 4.2), we limit our evaluations to O4-MINI and CLAUDE3.5-V2 due to API rate limits and high cost and sample rollouts at temperature $T = 0 . 8$ .
# 4 Experiments & Results
# 4.1 OPT@1
Figure 4-left shows consistently poor $\mathrm { { O P T } } \ @ 1 \$ performance across agents based on all models, confirming software optimization as a significant challenge for current SWE-Agents. Even the best performing model, CLAUDE-4.0, achieves less than $5 \%$ success, while GPT-4O fails completely at $0 . 0 \%$ . These results demonstrate that success on SWE-Bench-like benchmarks does not transfer to more-challenging real-world tasks like software optimization requiring both algorithmic reasoning and engineering expertise.
Figure 4: OPT $\ @ 1$ performance. (a) Left: $\mathrm { { O P T } } \ @ 1 \$ (speedup threshold $p$ set to 0.95) across models, with all models achieving less than $5 \%$ success (b) Right: $\mathrm { O P T } _ { p } @ 1 \$ indicating portion of problems where model patches match $p$ fraction of human commit’s performance. We find that strongest performing models remain strong throughout, with the success rates reducing as it becomes more challenging to match human-level performance.
Figure 5: Scaling test-time compute for O4-MINI and CLAUDE-3.5-V2. (a) Left: $\operatorname { O P T } @ K$ performance as a function of inference steps (L) and parallel rollouts (K), showing parallel compute scales more efficiently than serial compute. (b) Right: OPT ${ \ @ K }$ performance with increasing rollouts, improving to $1 5 \%$ with diminishing returns beyond eight rollouts.
We next vary $p$ in $\mathrm { O P T } _ { p } @ 1 \$ (Figure 4-right). Recall that $\mathrm { O P T } _ { p } @ 1$ evaluates whether the agent’s patch is able to match $p$ fraction of the human commit’s performance. Thus $p = 0$ evaluates whether the agent’s patch is correct, regardless of its performance, while $p = 1$ evaluates whether the agent’s patch is identical to the human commit, increasing in difficulty. We find that $\mathrm { { O P T } _ { 0 } } @ 1$ performances shows considerably more variation with CLAUDE-4.0 achieving $7 0 \% \mathrm { O P T } _ { 0 } @ 1$ while O4-MINI achieves $45 \%$ . We also find that the trend stays the strongest performing model but the gap compresses as $p$ increases indicating challenges in matching human-level performance.
# 4.2 Scaling Inference-time Compute
Drawing inspiration from [Olausson et al., 2023], we examine two dimensions of test-time compute scaling: (1) sampling multiple trajectories and picking the best (referred to as parallel compute) and (2) allowing more steps per trajectory (referred to as serial compute).
Scaling serial vs parallel compute. In Figure 5-left, we analyze steps scaling from 50 to 400 with different numbers of rollouts between 1 and 8. Results show parallel compute scales more efficiently than serial compute. With only 50 steps, 8 rollouts yields higher performance (8.82 for O4-MINI
Avoid Complexity Mismanage Compute Localization Avoid Complexity Mismanage Compute Localization Less Impactful $( 1 0 . 0 \% )$ Misdiagnosed Bottlenecks $( 6 . 6 \% )$ Misdiagnosed Bottlenecks $( 1 3 . 2 \% )$ Less Impactful $( 6 . 8 \% )$ Explore-Heavy $( 3 . 0 \% )$ Exploit-Heavy $( 6 . 9 \% )$ 1 Lazy Optimization $( 1 6 . 6 \% )$ 0 Wrong AbstractionLevel (25.1%) Wrong Abstraction Level (30.1%) Lazy Optimization (29.0%) 0 5 10 15 20 25 30 0 5 10 15 20 25 30 % of Claude-3.6 Trajectories % of O4-mini Trajectories
and 11.76 for CLAUDE-3.5-V2) than 400 steps with a single rollout (1.96 for O4-MINI and 4.95 for CLAUDE-3.5-V2). This indicates increased sample diversity across trajectories can effectively compensate for reduced step counts, providing insights for optimal inference-time compute allocation.
Low OPT $@ 1 0$ performance. Building on these findings, we further examine performance with extended parallel compute. Figure 5-right demonstrates both models gain performance with additional rollouts, with ${ \mathrm { O P T } } \ @ K$ increasing from under $4 \%$ to over $12 \%$ with 8 rollouts. Despite these improvements, $\mathrm { O P T } @ 1 0$ performance remains modest (under $20 \%$ ) for both models with diminishing returns, indicating fundamental limitations in current SWE-Agents.
# 4.3 Performance with Ground-Truth Plans
Beyond engineering, solving GSO requires identifying bottlenecks and planning optimization strategies over a long horizon. Inspired by prior work on “backtranslation” guided reasoning [Li et al., 2023, Wang et al., 2024a, Pham et al., 2021, Sennrich et al., 2015], we assess the impact of guided reasoning by prompting O4-MINI with descriptive backtranslated plans of ground-truth optimizations. We provide O4-MINI with the groundtruth diff and sample 5 plans describing the optimization strategy and specific file-localized changes. Appendix H details the prompt and example plans.
We observe that prompting agents with backtranslated plans improves performance suggesting that high-level plans aid in matching human-level performance. However, $\mathrm { { O P T } } \ @ 1 \$ only reaches $5 . 7 \%$ and $\mathrm { O P T } @ 5$ improves by just $9 \%$ with these plans. So while strategic planning and reasoning helps, implementing low-level system changes remains challenging for current models.
Figure 7: Qualitative analysis of agents. Model failures are classified into three high-level categories: (1) Localization: misidentifying code regions or opportunities for optimization, (2) Mismanage Compute: battling explore-exploit tradeoffs, and (3) Avoid Complexity: challenges with low-level code changes. Left: CLAUDE-3.5-V2 shows an exploit-heavy behaviour, making massive code changes with lesser exploration of the codebase. It also attempts deeper changes but fails to localize bottlenecks and changes to the right abstraction level. Right: O4-MINI in contrast is explore-heavy, avoids low-level code, and makes “lazy” optimizations like spurious compiler flag modifications.
Figure 6: O4-MINI performance with and without backtranslated ground-truth plans describing the human commit’s optimization strategy.
# 5 Qualitative Analysis of Agent Behavior
We use an LLM-aided pipeline (details in Appendix I) to qualitatively analyze agent behavior and failure modes. We categorize the failures as (1) challenges with low-level code, (2) compute management issues, and (3) localization errors.
# 5.1 Agents Struggle with Low-Level Code Changes
Poor performance on low-level problems. We identify sharp declines in agent performance as language complexity increases. Models perform best with high-level languages, with O4-MINI achieving $21 \%$ on Python tasks. Performance drops drastically to $4 \%$ when Cython, C and $^ { \mathrm { C + + } }$ , etc. are involved.
# Modifications at the wrong abstraction level.
Production codebases have a hierarchy of abstraction levels, from high-level APIs to lowlevel implementations, with each layer encapsulating complexity beneath it. Our analysis reveals that operating at inappropriate abstraction levels contributes to $2 5 - 3 0 \%$ of agent failures. However, interestingly, models exhibit opposite but equally problematic approaches. Figure 8 shows that O4-MINI avoids making changes to the $\mathrm { C } / \mathrm { C } + +$ files $40 \%$ of the times even when it was necessary based on the human optimization commit. CLAUDE-3.5-V2 on the other hand surprisingly makes unnecessary lowlevel C changes $( 9 . 2 \% )$ when even the human optimization commit was Python-only!
py 10.3 C 9.2 h 6.9 C 22.1 pyx 4.6
cpp 19.1 py 16.1 h 14.7 cpp 11.5 pyi 10.3
pyi 4.4 pyx 5.7 Added
py 2.9 C 4.6 Omitted 0 5101520 0 5 101520 % O4-mini Patches % Claude-3.6Patches
In Example F.1, O4-MINI attempted to optimize NumPy’s np.subtract.at function. NumPy conceptually implements this in a layer below the Python API called ufunc (universal function) written in C. While the model scrolled through these C files, it decided to not make changes there and instead tried to override it with a Python function, completely avoiding the required deeper change.
Fundamental errors in low-level programming. Beyond selecting incorrect abstraction levels, agents also struggle with fundamental low-level programming concepts. In Example F.2, CLAUDE3.5-V2 incorrectly modified Pillow’s SIMD pointer arithmetic, causing segmentation faults.
# 5.2 Agents Favor Lazy Optimizations
Optimization Minimalism: The Path of Least Resistance. Agents consistently favor trivial code changes to meet performance targets rather than investigating and implementing more substantial improvements. O4-MINI exhibits this behavior in nearly $30 \%$ of trajectories (Figure 7), with patch sizes significantly smaller than human-written optimizations. In fact, in over $60 \%$ of incorrect trajectories, the agent made $\leq 1 5 \%$ of the edits compared to the corresponding human developer commit, as shown in Appendix F.2.
Spurious compiler-flag twiddling. In Example F.3, CLAUDE-3.5-V2 attempted to optimize Pillow’s SIMD implementation by simply adding $^ { - 0 3 }$ compiler flags. This approach is ineffective since the Pillow project already uses optimized builds by default. This pattern appears across many agent trajectories, revealing a fundamental misunderstanding of real-world project configurations.
Input-specific fast paths. Agents frequently implement narrow optimizations targeting only the specific input patterns present in given performance test. In Example F.4, O4-MINI created a specialized fast path for NumPy’s ljust API that only handled “matching-shaped” input arrays. Our test suite identifies these narrow optimizations as failures due to their poor generalization properties.
Bizarre overrides in __init__.py. A recurring pattern in O4-MINI trajectories is modifying _init__.py files to override functions instead of making core improvements. These overrides typically implement input-specific optimizations in a non-idiomatic manner, as shown below:
# _init__.py
_orig_strftime $\mathbf { \tau } = \mathbf { \tau }$ _PeriodCls.strftime
def _fast_strftime(self, fmt): if fmt is None and getattr(self, "freqstr", None) $\begin{array} { r l } { \mathbf { \Psi } } & { { } = = \mathbf { \Psi } ^ { " \Vert } \mathbf { M } ^ { \prime \prime } } \end{array}$ : return f"{y:04d}-{m:02d}" # Fast path for default monthly formatting return _orig_strftime(self, fmt)
See examples and analysis for this behavioral pattern in Example F.5 and Example F.6.
# 5.3 Agents Mismanage Compute
Underutilize available compute. First, we find that agents often underutilize their available compute budget. We observe this quantitatively in our inference-time scaling experiments (Section 4.2), where we increased the number of available agent steps. Even with larger budgets of $2 0 0 +$ steps, $7 5 \%$ of trajectories terminate before 100 steps! This again underscores the lazy behavior discussed earlier and highlights the need for better agent scaffolding and model improvements to optimally use compute.
Imbalance in exploration and exploitation. Figure 7 reveals a dichotomy in exploration-exploitation behaviours. O4-MINI trajectories are rated as explore-heavy meaning they spend most of their steps examining the codebase without converging on actionable optimizations. On the other hand, CLAUDE3.5-V2 trajectories are rated as exploit-heavy, meaning they commit to solutions with insufficient exploration of alternatives, and eagerly make tons of code changes. This also indicates a promising research direction to improve agent performance by leveraging the strengths of the two models.
# 5.4 Agents Misdiagnose Optimizations
Misidentify bottlenecks and solutions. Agents misdiagnose performance bottlenecks, implementing ineffective optimizations. In Example F.7, CLAUDE-3.5-V2 attempted to parallelize NumPy’s char.count API, ignoring Python’s GIL and process startup overhead, resulting in worse performance. After multiple failures, the model concluded: "For this specific use case, numpy’s string operations are already highly optimized, stick with the original implementation."
# 5.5 Analyzing Model Successes
Section 4.2 shows the with increasing test-time compute, SWE-Agents can solve a small fraction of the tasks. Here, we analyze the characteristics of the tasks that SWE-Agents can solve. We find that agent solutions vary significantly in sophistication, ranging from simple but effective changes to genuinely impressive algorithmic improvements.
Some successful optimizations are less impressive when compared to what humans achieved on the same problems. In Example F.8, O4-MINI added a fast path for writing data when network streams are idle, avoiding unnecessary buffering. But the human developer completely redesigned the entire buffering system with a much more sophisticated approach. In Example F.9, CLAUDE-3.5-V2 optimized database-style lookups using bit-combining. The human solution was more comprehensive, upgrading the underlying search algorithms across the entire codebase. In Example F.10, O4-MINI improved sorting by working directly with integer codes instead of string values. However, the human approach was cleaner, refactoring shared utilities that benefited multiple sorting operations.
However, agents can also implement sophisticated optimizations that outperform human solutions. O4- MINI completely rewrote image file parsing to read only essential metadata instead of decompressing entire frames, reducing algorithmic complexity from ${ \mathrm { O } } ( { \mathrm { n } } ^ { 2 } )$ to ${ \bf O } ( { \bf n } )$ (Example F.11). The human developer only made a simple check, while the agent delivered a fundamentally superior approach. CLAUDE-3.5-V2 eliminated memory waste by calculating exact allocation sizes upfront instead of repeatedly resizing arrays (Example F.12). The human solution still used dynamic resizing, just with better growth patterns, while the agent eliminated resizing entirely.
# 6 Related Work
Code LLM Benchmarks. Initial code generation benchmarks like HumanEval [Chen et al., 2021, Liu et al., 2023] and MBPP [Austin et al., 2021] focused on isolated small programs with simple specifications. These benchmarks have since evolved to evaluate LLMs across multiple languages (MultiPL-E [Cassano et al., 2022]), Data Science (DS-1000 Lai et al. [2023]), Arcade [Yin et al., 2022]), API usage (Jigsaw [Jain et al., b], ODEX [Wang et al., 2022], BigCodeBench [Zhuo et al., 2024]), and more complex algorithmic tasks in competitive programming (LiveCodeBench [Jain et al., 2024b], APPS [Hendrycks et al., 2021], CodeContests [Li et al., 2022], XCodeEval [Khan et al., 2023], CodeScope [Yan et al., 2023]). However, these benchmarks remain focused on isolated puzzle-solving tasks rather focusing on only code correctness and not performance optimization.
Performance Evaluation. Various works have introduced benchmarks to evaluate the performance capabilities of LLMS. EvalPerf [Liu et al., 2024] and EffiBench [Huang et al., 2024] assess runtime efficiency of code generated from natural language specifications on HumanEval and LeetCode tasks. In contrast, PIE [Madaan et al., 2023], ECCO [Waghjale et al., 2024], and NoFunEval [Singhal et al., 2024] focus on code optimization capabilities, where models improve existing programs while maintaining functional equivalence. These benchmarks employ different approaches to reliably measure program runtimes. PIE simulates hardware-level runtime for $^ { C + + }$ programs while EvalPerf employs hardware counters for precise performance measurement. While providing reliability, these approaches unfortunatly do not scale to larger codebases considered in our work. Other works [Coignion et al., 2024, Niu et al., 2024] utilize LeetCode’s execution environment to evaluate LLM-generated code performance adding unwarranted dependence on external services. ECCO, similar to our approach, leverages cloud computing environments to ensure consistent benchmarking.
Repo-Level SWE-Agent Benchmarks. SWE-Bench [Jimenez et al., 2024] evaluates issue resolution in open-source repositories. Extensions include multi-modal capabilities [Yang et al., 2024] and multi-lingual capabilities [Zan et al., 2025, Kabir et al., 2024]. Specialized benchmarks address test generation [Jain et al., 2024a, Ahmed et al., 2024] and bug-localization [Chen et al., 2025]. Zhao et al. [2024] proposed Commit-0 for library generation from scratch while Jain et al. [a] and Xie et al. [2025] propose frameoworks for function level code generation. These benchmarks however emphasize functional correctness rather than performance optimization considered in our work.
Recently, using LLMS to generate code has been receiving considerable attention, with hopes of automating AI research and development. Particularly, KernelBench [Ouyang et al., 2024] and METR-KernelEngineering [METR, 2025] are two benchmarks that evaluate the performance of LLMS in generating performant code for kernel engineering. While they focus on a specific domain of kernel engineering, we explore sofware optimization capabilities of LLMS across domains. | Developing high-performance software is a complex task that requires
specialized expertise. We introduce GSO, a benchmark for evaluating language
models' capabilities in developing high-performance software. We develop an
automated pipeline that generates and executes performance tests to analyze
repository commit histories to identify 102 challenging optimization tasks
across 10 codebases, spanning diverse domains and programming languages. An
agent is provided with a codebase and performance test as a precise
specification, and tasked to improve the runtime efficiency, which is measured
against the expert developer optimization. Our quantitative evaluation reveals
that leading SWE-Agents struggle significantly, achieving less than 5% success
rate, with limited improvements even with inference-time scaling. Our
qualitative analysis identifies key failure modes, including difficulties with
low-level languages, practicing lazy optimization strategies, and challenges in
accurately localizing bottlenecks. We release the code and artifacts of our
benchmark along with agent trajectories to enable future research. | [
"cs.SE",
"cs.AI",
"cs.CL",
"cs.LG"
] |
# 1 Introduction
Modeling human motion remains a significant challenge across neuroscience, medicine, computer graphics, and robotics due to data scarcity, high dimensionality, motor redundancy, and the requirement for real-time computation [22]. While neural networks are commonly employed, their effectiveness depends on large training sets that are often difficult or impossible to obtain, particularly in specialized applications. Gaussian processes (GPs), namely the Gaussian process dynamical model (GPDM), offer an effective alternative by enabling accurate human movement synthesis from limited data [20].
The GPDM is an extension of the Gaussian process latent variable model (GPLVM) with a hidden Markov model (HMM) prior. Decomposed into two GPs, the emission GP, like in the standard GPLVM, maps the latent space to the observed data, while the dynamical GP models the temporal evolution of the latent space as an HMM. It is effective in low-dimensional representation, motion modeling, and sequence prediction [20]. Additionally, sparse approximation techniques have made GPDMs both scalable and suitable for real-time applications [13, 17]. Since GPDMs are non-parametric, they are more robust to model misspecification than parametric models, and can better leverage limited data.
GPDMs also offer superior interpretability compared to black-box neural models. Its latent space provides a visualizable low-dimensional representation, its structure is easy to explain with a straightforward Markovian prior, and its probabilistic formulation explicitly quantifies uncertainty in both the dynamics and observations [20]. These features are critical for applications requiring transparency such as medical diagnosis or prosthesis control.
Despite the many advantages of GPDMs, they have distinct weaknesses. They cannot explicitly classify movements (though they may do so implicitly by predicting the dynamics of a particular class correctly). Furthermore, our own experiments show that they struggle to robustly handle the prediction of multiple movement types simultaneously. This limitation stems from their small temporal prediction horizon, which causes ambiguity when movements intersect or converge in the latent space, leading to inappropriate morphing or switching between classes. Additionally, we found that even when modeling a single movement type, GPDMs yield unstable predictions over longer time horizons due to their step-by-step Markovian prediction process, which accumulates errors that cause trajectories to drift from their true paths.
To address the challenge of modeling diverse movements with limited training examples, we propose the Gaussian process dynamical mixture model (GPDMM), which applies the mixture-of-experts approach [10] to GPDMs. The GPDMM comprises a probabilistic mixture of dynamical GPs, where each GP acts as an expert on a particular movement class’s dynamics, all unified by a single emission GP for positional representation. This architecture enables classification while preventing unintended switching or morphing between movements from these different classes. Central to the model’s effectiveness is our method for embedding geometric features in the latent space, which enables smooth dynamics and stable long-horizon prediction. Significantly, we achieve high-quality performance using training data with only a single example per movement class, fulfilling one of our primary objectives.
In this paper, we present the formulation of the GPDMM and demonstrate its ability to classify and generate movements. We showcase the model’s performance under design variations and ablations, and benchmark it against transformers, VAEs, and LSTMs.
# 2 Background
# 2.1 Gaussian Process Dynamical Models
The GPLVM [12] is a non-linear dimensionality reduction approach that maps each high-dimensional data point onto a lower-dimensional latent space via a GP prior. Building on the GPLVM, the GPDM [20] adds a Markovian prior over the latent states, modeling temporal structure and enabling the generation of new sequences. Furthermore, it allows us to evaluate how likely a new sequence is to have arisen from the trained distribution:
$$
\begin{array} { r l r } { p \big ( { \bf X } ^ { * } \mid a , { \bf X } _ { i n } , { \bf X } _ { o u t } \big ) } & { \propto } & { \exp \Big ( - \frac { 1 } { 2 } \mathrm { t r } \big [ { \bf K } _ { { \bf X } ^ { * } } ^ { ( a ) - 1 } { \bf Z } _ { \bf X } ^ { ( a ) } { \bf Z } _ { \bf X } ^ { ( a ) T } \big ] \Big ) , } \end{array}
$$
where $\mathbf { X } ^ { * }$ , ${ \bf X } _ { i n }$ and $\mathbf { X } _ { o u t }$ represent, respectively, the latent projection of the new data, and the latent input and output of the autoregressive dynamics. $\mathbf { K } _ { \mathbf { X } ^ { \ast } }$ is a kernel matrix evaluated between ${ \bf X } _ { i n }$ and $\mathbf { X } ^ { * }$ . $\mathbf { Z } _ { \mathbf { X } }$ is a mean function that includes ${ \bf X } _ { i n }$ , $\mathbf { X } _ { o u t }$ , and $\mathbf { X } ^ { * }$ . Here, $a$ is used as an indicator variable to specify a single GPDM in a mixture model. See Wang et al. [20] for a full evaluation of the GPDM and eq. (1).
The computational cost of full GP inference scales cubically, $\mathcal { O } ( N ^ { 3 } )$ , where $N$ is the number of training points. To mitigate this, sparse Gaussian process approximations introduce a smaller set of $M$ inducing points ( $M \ll N$ ) [17]. The fully independent training conditional (FITC) approach approximates the GP prior so that function values become conditionally independent given these inducing points, reducing computational cost to $\mathcal { O } ( N M ^ { 2 } )$ while preserving much of the GP’s flexibility [13]. Although the data sizes used in these experiments are small enough to employ the full GPDMM with little cost, we build the FITC approximated GPDMM to measure its level of underfitting compared to the full GPDMM and to guide future research on GPDMMs with larger datasets.
# 2.2 Mixture of Experts
Multiple GPDMs can be integrated in a single model using a mixture-of-experts formulation [10], wherein each dynamical GP is trained only on a subset of the data determined by its class, $S _ { a } \subset S$ where $a \in \{ 1 , 2 , . . . , A \}$ , $A$ is the number classes, $| S _ { a } | = n _ { a }$ , $| S | = N$ , and $N$ is the total number of training data points. For our applications, $\begin{array} { r } { n _ { 1 } = n _ { 2 } = . . . = n _ { A } = \frac { N } { A } } \end{array}$ . Compared to a singular GPDM, this reduces the computational complexity of the dynamical component from $\mathcal { O } ( N ^ { 3 } )$ to $\begin{array} { r } { \mathcal { O } \big ( \frac { N ^ { 3 } } { A ^ { 2 } } \big ) } \end{array}$ .
Given a subset of consecutive data points in a sequence (e.g. the first few seconds of a movement), the model can assign likelihoods for each possible category. In the following, we abbreviate eq. (1) by $p ( \mathbf { X } ^ { * } | a )$ . We use Bayes theorem to evaluate the posterior probabilities,
$$
p ( a | \mathbf { X } ^ { * } ) = \frac { p ( a ) p ( \mathbf { X } ^ { * } | a ) } { \displaystyle \sum _ { \alpha \ \epsilon \ A } p ( \alpha ) p ( \mathbf { X } ^ { * } | \alpha ) } , \qquad p ( a ) = \frac { n _ { a } } { N }
$$
where $\mathcal { A } = \{ 1 , 2 , . . . , A \}$ . A prediction is made by evaluating the argument of the maximum,
$$
\underset { \alpha \ \epsilon \ A } { \arg \operatorname* { m a x } } p ( \alpha | \mathbf { X } ^ { * } )
$$
# 3 Methods
# 3.1 Model Architecture
The basic GPDMM, like GPDMs, employs a single-layer architecture. The observation space of the model represents the observed variables (here joint-angle time series) and is denoted by a single data structure $\mathbf { Y } = [ \mathbf { y } _ { 1 } , \dots , \mathbf { y } _ { N } ] ^ { T } \in \mathbb { R } ^ { N \times D }$ where $D$ is the number of features and $N$ is the number of data points over all sequences. Each column in $\mathbf { Y }$ is composed of individual sequences stacked endto-end for the length of the full data set.
The core of the model is an emission GP that maps a low-dimensional latent space to a higher-dimensional observable data space. Dynamics in the latent space are modeled as state transitions by $G$ dynamical GPs, one for each category, g, of actions. Both the emission and dynamical GPs can be sparsified for scaling to large data sets. Furthermore, the model can be expanded to multiple layers of GPLVMs, though it proved inefficient for our limited data sets (see table 2).
Implementation. The GPDMM was built on top of the GPy library [8]. In particular, we used their core GPLVMs, kernel functions, and optimization procedures. The mixture model and the bulk of the dynamical GPs were produced in-house with some code heavily adapted from GPy, e.g., in applying sparse approximations. The full (non-sparse) model was about 7.5K parameters on the BM data set and 7.2K on the CMU data set.
Fig. 1. GPDMM Graphical Model. The data space represented by a single data structure $\mathbf { Y }$ comprises kinematics with $N$ data points consisting of equal-length sequences with $D$ feature dimensions. The data is represented by a sparse emission GP that maps from a lower-dimensional latent vector $\mathbf { x } _ { t }$ at any point in $t$ with inducing inputs $\mathbf { Z } _ { 1 }$ representing the entire latent space. The latent dynamics are modeled as state transitions by $G$ sparse dynamical GPs with inducing inputs $\mathbf { Z _ { g } }$ , each specializing in a distinct action category, $g$ . Both $\mathbf { Z }$ variables are dropped for the non-sparse, full-inference model.
Comparison to Switching GPDMs. The GPDMM shares conceptual ground with Chen et al.’s switching GPDM [4]. However, it differs on several key points: first, the GPDMM emphasizes stable, long-horizon generation rather than shortterm tracking. Second, it uses a fully probabilistic mixture-of-experts structure in a shared latent space, removing the need for discrete switching. Third, our model leverages direct likelihood evaluation for classification and generation, facilitating efficient inference. Crucially, the GPDMM supports single-example learning, embeds geometric features for smoother latent representations, and enables sparse approximations to scale effectively to larger datasets.
# 3.2 Initial Conditions and Latent Space Geometric Features
The optimization of GPLVMs is a highly non-convex problem, making solutions vulnerable to converging on local optima. Consequently, performance can depend significantly on the choice of the initial conditions for the optimization, particularly for the latent space [3, 14]. Latent space initialization is often done using a form of dimensionality reduction like PCA.
Previous work has employed back-constraints (BCs) to enforce specific topologies in latent spaces. For example, Urtasun et al. [19] used BCs to represent periodic data on a unit circle, and Taubert et al. [18] extended this idea to nonperiodic motions in a hierarchical model. In our single-layer framework for dynamics, we compared both BC-based and unconstrained approaches, ultimately finding better performance with the unconstrained option (see table 2).
Fourier Basis as Latent Geometries. The "unconstrained" approach embeds a geometry in the latent space via initialization with geometric features. Outlined below is our method for constructing these features, using the example of our top-performing geometry based on Fourier basis functions.
We defined a matrix of latent features $\mathbf { X _ { G } }$ as a set of Fourier basis functions:
$$
\mathbf { X _ { G } } = \mathbf { f } ( \pmb \theta ) = \left[ \mathbf { 1 } , \cos ( 2 \pi \pmb \theta ) , \sin ( 2 \pi \pmb \theta ) , \dots , \cos ( m \pi \pmb \theta ) , \sin ( m \pi \pmb \theta ) \right] ,
$$
where $\mathbf { 1 }$ is a vector of ones, and $\cos ( \cdot ) / \sin ( \cdot )$ terms are included at frequencies from 1 to $m$ . This setup yields a $( 2 m + 1 )$ -dimensional representation. The hyperparameter $m$ and the inclusion of the constant term were both optimized during model selection.
Mapping Data to the Latent Space. We construct a one-dimensional progression vector $\pmb \theta$ for each sequence, running from $0$ to $2 \pi$ ; the step size between consecutive data points is inversely proportional to the velocity of trajectory at this point. Thus, points with higher velocities in the original data receive smaller $\pmb \theta$ increments, improving coverage in regions sampled sparsely over time. By applying eq. 4 to these cumulative $\pmb \theta$ values, we obtain a set of $d$ -dimensional basis vectors that form the matrix $\mathbf { X _ { G } }$ .
Initialization of Latent Variables. Finally, we combine $\mathbf { X _ { G } }$ with features from a suitable dimensionality reduction of the data (see table 1), denoted $\mathbf { X } _ { \mathbf { R } }$ , to initialize the latent variables: $\mathbf { X } = \left[ \mathbf { X _ { G } } , \mathbf { X _ { R } } \right] \in \mathbb { R } ^ { N \times Q }$ where $N$ is the total number of data points and $Q$ is the combined dimension of $\mathbf { X _ { G } }$ and $\mathbf { X } _ { \mathbf { R } }$ . This initialization preserves the geometric relationships derived from the Fourier basis while leveraging low-dimensional embeddings of the original dataset.
Model Influence. These geometric embeddings significantly boost performance (see table 1). We reason that this is explained by enforcing smoothness, capturing key periodicities, and allocating finer resolution to high-velocity segments, enabling coherent, comparable representations for diverse motion tempos within a shared latent space.
# 3.3 Benchmarking Model Architectures
The authors are unaware of a standard benchmark for the task of solving classification and long-horizon sequence prediction on small data sets of human movement. Given these limitations, we benchmarked the GPDMM against three popular sequence-generation and classification models: a VAE, Transformer, and LSTM network. Hyperparameter optimization, including variations of the model architectures, was performed consistently for all models (see section 4.2).
Variational Autoencoder (VAE). Implemented in PyTorch [11,16], the VAE features an encoder–decoder design with a reparameterizer (for mean and log variance) plus a classifier branch. It optimizes a combined loss of mean-squared error (reconstruction), KL divergence (latent space regularization), and crossentropy (classification). This architecture was optimized over the number of hidden and latent dimensions. The number of parameters of the optimized VAE was roughly 12.3M and 7.6M, respectively, for the BM and CMU data sets.
Transformer. Built via Hugging Face’s BERT architecture [6, 21], the Transformer applies separate linear heads for classification (using the [CLS] token) and sequence generation. It uses cross-entropy for classification and mean-squared error for generation, optimized with Adam. This architecture was optimized over the number of hidden dimensions, layers, attention heads, and dropout rate. The number of parameters of the optimized Transformer was 19M and 15M, respectively, for the BM and CMU data sets.
Long Short-Term Memory (LSTM). An LSTM layer (via PyTorch [9, 16]) feeds two linear heads: one for classification (final hidden state) and one for generation (all hidden states). Again, training combines cross-entropy and meansquared error losses. This architecture was optimized over the number of layers and hidden dimensions. The number of parameters of the optimized LSTM was about 220K and 330K, respectively, for the BM and CMU data sets.
# 3.4 Resources
The implementation of our proposed methods is publicly available at https://github.com/jesse-st-amand/H-GPDM-and-MM-GPy-Ext.
All computations were performed on a single workstation with a AMD Ryzen 7 3700X 8-Core Processor and an NVIDIA GeForce RTX 2070 SUPER graphics card.
Claude 3.7 Sonnet [1] and ChatGPT o1 [15] were used in revising the text and for assistance in coding. The authors take full responsibility for the content of the manuscript.
# 4 Results
# 4.1 Data
We used two motion-capture data sets: (1) the Carnegie Mellon University (CMU) data set [5], comprising 6 trials each of 8 full-body movements (e.g., kicking, lunging, swimming), and (2) our bimanual (BM) data set, recorded inhouse, featuring upper-body activities of daily living (e.g., lifting a box, rotating a tablet, opening a jar), performed while seated, and aligned to a universal starting position. Each sequence was converted to joint-angle representation at equal-length intervals. The CMU data contained 77 joint features (neck, spine, pelvis, arms, legs); the BM data contained 117 (upper body plus finger articulation).
# 4.2 Hyperparameter Optimization and Significance Testing
Bayesian hyperparameter optimization was performed individually per data set via the Python package Scikit-Optimize on all models used in our experiments. Within each iteration of the Bayesian search, we performed a limited Monte Carlo cross-validation (MCCV). At each iteration of the MCCV, we trained on a single example per class and validated and tested on the remaining sequences. For the BM set, this yielded 4 validation and 5 test examples per class; for the CMU set, 2 validation and 3 test examples per class. The validation sets were used to find the optimal number of training epochs before overfitting occurred. We averaged these optimal validation scores over MCCV iterations as input to our Bayesian model evaluation. Final model evaluation and significance testing was performed on the test sets.
Classification used only the first $T$ elements of each test sequence, and generation was performed on the remaining elements. $T$ was set to $4 0 \%$ and $1 5 \%$ of the sequence length for the BM and CMU data sets, respectively.
# 4.3 Scoring Procedure
To evaluate the GPDMM against other approaches, we computed metrics for both classification accuracy and sequence generation quality.
For scoring classification accuracy, we used the standard F1 score, calculating precision and recall across all classes.
To measure distances from the ground truth, we computed the Fréchet distance, which measures the distance between trajectories by minimizing the maximum point-wise distance between re-parameterizations of the curves, providing a natural control for time warping [7]. To account for differing classification accuracy, we only measured sequences that were correctly classified by that model.
We averaged and normalized our distance measures according to the following equation:
$$
D _ { a v g } = \frac { 1 } { | C | } \sum _ { c \in C } \frac { 1 } { | S _ { c , v } | } \sum _ { \{ \mathbf { s } _ { t , a } , \mathbf { s } _ { t , b } \} \in S _ { c , v } } D _ { N } \left( g ( \mathbf { s } _ { t , a } ) , \mathbf { s } _ { t , b } \right)
$$
$$
D _ { N } ( \mathbf { s } _ { g } , \mathbf { s } _ { t } ) = \frac { d _ { F } ( \mathbf { s } _ { g } , \mathbf { s } _ { t } ) } { \operatorname* { m a x } _ { \mathbf { s } _ { 1 } , \mathbf { s } _ { 2 } \in \mathbf { S } _ { c } } d _ { F } ( \mathbf { s } _ { 1 } , \mathbf { s } _ { 2 } ) }
$$
where $C$ is the set of all classes, $S _ { c }$ is the set of all testing set sequences in class $c$ , $S _ { c , v }$ is the subset of those sequences correctly classified, $d _ { F } ( { \bf s } _ { 1 } , { \bf s } _ { 2 } )$ is the Fréchet distance between $\mathbf { s } _ { 1 }$ and ${ \bf s } _ { 2 }$ , $D _ { N } ( \cdot )$ is the normalized Fréchet distance, $\mathbf { s } _ { t , a }$ is the ground truth sub-sequence used in classification, $\mathbf { s } _ { t , b }$ is the remaining ground truth sub-sequence, and ${ \bf s } _ { g }$ is the sequence generated by $g ( \mathbf { s } _ { t , a } ) = \mathbf { s } _ { g }$ of size $\mathbf { s } _ { t , b }$ . This normalization controls for biases between sequence classes.
In addition to distance, we evaluated dampening and smoothness metrics. Dampening captures the reduction in motion amplitude and follow-through compared to natural movements. Smoothness quantifies the fluidity of a movement trajectory, with poor scores indicating jittery movements. These metrics address quality dimensions that F1 and distance metrics can miss. See table 2 row 7 (Fourier; None) for an example of a model that scored well on all metrics except for dampening, and table 3 row 3 (VAE) for a similar example that scores poorly on LDJ.
For the dampening metric, we analyze mean displacement over sliding windows calculated as,
$$
d = \frac { 1 } { N - w } \sum _ { i = 1 } ^ { N - w } \| \mathbf { p } _ { i + w } - \mathbf { p } _ { i } \|
$$
where $N$ is the number of points in the sequence, $w$ is the window size, and $\mathbf { p } _ { i }$ is the position of the $i$ -th point. The dampening metric is computed as a ratio between ground truth and generated sequences. Dampening scores greater than 1 indicate that generated movements exhibit diminished amplitude or incomplete execution compared to the ground truth.
To quantify smoothness, we employed the log dimensionless jerk (LDJ) metric [2], which controls for duration and amplitude. The LDJ is defined as:
$$
\eta _ { L D J } = - \ln \left( \frac { ( t _ { 2 } - t _ { 1 } ) ^ { 3 } } { \nu _ { \mathrm { p e a k } } ^ { 2 } } \cdot \int _ { t _ { 1 } } ^ { t _ { 2 } } \left( \frac { d ^ { 3 } x } { d t ^ { 3 } } \right) ^ { 2 } d t \right)
$$
where $t _ { 2 } - t _ { 1 }$ represents the movement duration, $\nu _ { \mathrm { p e a k } }$ is the peak velocity, and the integral represents the squared jerk over the movement interval. Higher (less negative) LDJ values indicate smoother movements. Similar to the dampening metric, we calculate the LDJ ratio between generated and ground truth sequences, where values greater than 1 indicate degraded smoothness in the generated movements.
# 4.4 Experiments
Tables 1, 2, and 3 summarize model performances with $F \mathcal { I }$ indicating the F1 score, and Distance, Dampening, and $L D J$ represented by eqs. (5), (7), and (8), respectively. An asterisk (\*) denotes a statistically significant difference from the top performing model for that metric ( $p \ < \ 0 . 0 5 )$ . Values in bold indicate the best score per column. Values not significantly different from the top performing model are considered competitive. For both the dampening and LDJ metrics, we consider scores closer to 1 to be better since this indicates a value closer to the ground truth. However, in-context we found LDJ scores below 1 to often be acceptable since this indicates trajectories with "less noise" than the ground truth–a possible indicator for over-smoothing that is (in)validated by the dampening metric.
Table 1. GPDMM comparison for top-performing variations of geometries (geo) and dimensionally reduced features (DR) as initial conditions in the latent space. Dims refers to the number of latent dimensions occupied by the geometry where "NA" is not applicable and "all" is the total number of dimensions in the latent space.
Table 2. GPDMM parameter comparison for top-performing variations of the numbers of layers, order of the dynamics, and back-constraints (BCs).
Table 1 illustrates how adding geometric and dimensionally reduced (DR) features to the latent space initialization impacts GPDMM performance. The table shows only top-performing configurations from a much larger set of parameters tested, including additional geometries and DR methods. All geometries were constructed as described in section 3.2. The ellipse and torus were created using their parametric equations. The Chebyshev and Laguerre geometries were defined analogously to the example Fourier geometry in eq. (4). The column labeled "dims" indicates how many latent dimensions a given geometry occupied. Function-based geometries (e.g., Fourier), which lack a fixed dimensionality, were optimized as hyperparameters. From these experiments, we concluded that combining a Fourier geometry with a form of PCA produced the best set of results, particularly due to the consistently strong dampening scores on both data sets compared to other approaches.
Table 3. Comparison between the GDMM and benchmark models.
Table 1 also emphasizes the importance of the dampening metric. Many variations of the GPDMM and our benchmarks generated movements that scored well on distance and LDJ, but when evaluated by eye, were severely underexpressing their range of motion or halting in place. Dampening scores above around 1.3 were found to be indicative of poor performance. Furthermore, in row 4 (None; Random), the GPDMM scores well on LDJ, but very poorly on dampening, indicating an artificially good LDJ score (further examination revealed the generated motion to be both under-expressed and noisy).
Table 2 reports the GPDMM’s performance under variations in the layer count, order of the dynamical model, and type of back-constraint (BCs). The first row presents the GPDMM’s performances with the Fourier-PCA latent spaces from table 1. The subsequent rows detail our most successful models optimized over hyperparameters (e.g. latent dimensions, geometries, and DRs) for specified numbers of layers, orders, and BCs. The terms "GP," "MLP," and "C-Kernel" respectively refer to Gaussian process, multi-layer perceptron, and circular kernel BCs. Based on these findings, we concluded that a simplified single-layer, first-order-dynamics model without BCs performed best.
Table 3 compares the GPDMM and its sparse variant ( $M = N / 2$ ) against our three benchmark models: the VAE, LSTM, and transformer. On the BM data set, the GPDMM maintained strong performance across all metrics, significantly outperforming the other approaches in distance, LDJ, and dampening, and performing within significance of the highest F1 score produced by the sparse model. Performances on the CMU data set closely matched the BM data set. Again, the GPDMM performed well on all metrics, scoring significantly above the other models on distance. It tied with the sparse model, the transformer, and the VAE for the best F1 score, and scored within significance of the VAE on dampening. While the LSTM achieved the top LDJ value, its extremely high dampening makes this score unreliable.
Figure 2 presents feature dimensions of the GPDMM’s latent space and the LSTM’s hidden state dynamics when separately trained on the CMU (the top figures marked with 1) and BM (bottom, marked 2) data sets. Plots A show the GPDMM’s geometric representations for capturing dynamics. A1 shows the first three Fourier basis functions (eq. 4), and A2 shows functions 4-6. Different function sets were selected for each plot as the models produce visually similar latent representations for identical functions. Data points are aligned with the geometry and distributed sequentially across its breadth to promote accurate state transitions (see section 3.2). Plots B show features optimized for class discrimination and exhibit more variation across the space. Plots C display the first three dimensions of the LSTM’s hidden state dynamics, combining information on dynamics and class discrimination together. B2, C2, and C1 display a divergence from a common centerpoint. This divergent pattern accurately represents the movement structure in the BM dataset but not in the CMU dataset. The GPDMM approach correctly captures this distinction (B1), while the LSTM misrepresents it (C1).
Fig. 2. GPDMM and LSTM Latent Space Visualizations. The figure displays latent spaces for select feature dimensions of top-performing GPDMMs and LSTMs. The top plots (1) display models trained on the CMU data set, while the bottom plots (2) show the BM data set. The left-hand and middle plots (A and B) depict GPDMM latent spaces. "A" plots illustrate the Fourier basis geometries. "B" plots present latent features that were initialized by dimensionally reduced representations of the data, "B1" with standard PCA and "B2" with RBF kernel PCA. The right-hand plots (C) show the first three dimensions of the LSTM’s hidden state dynamics.
# 5 Discussion
We introduced the GPDMM, a mixture-of-experts framework that leverages geometry-embedded latent spaces to classify and generate diverse motion classes from minimal data. Across two human-motion datasets, our model consistently outperformed or matched popular neural baselines, while preserving the interpretability inherent to GPDMs. These results highlight the ability of the GPDMM to produce robust, data-efficient motion analysis and generation in scenarios such as prosthetic control, where patient-specific data sets are limited in size, and reliability and transparency are critical.
Acknowledgments. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Jesse St. Amand. This research was funded through the European Research Council ERC 2019-SYG under EU Horizon 2020 research and innovation programme (grant agreement No. 856495, RELEVANCE). The CMU mocap data set used in this project was obtained from mocap.cs.cmu.edu and was created with funding from NSF EIA-0196217.
Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.
# References
1. Anthropic: Claude 3.7 sonnet (2025)
2. Balasubramanian, S., Melendez-Calderon, A., Burdet, E.: A robust and sensitive metric for quantifying movement smoothness (2011). https://doi.org/10.1109/tbme.2011.2179545 3. Bitzer, S., Williams, C.K.I.: Kick-starting GPLVM optimization via a connection to metric MDS. In: Proc. NIPS Workshop on Challenges of Data Visualization (2010)
4. Chen, J., Kim, M., Wang, Y., Ji, Q.: Switching Gaussian process dynamic models for simultaneous composite motion tracking and recognition (2009). https://doi.org/10.1109/CVPR.2009.5206580
5. CMU Graphics Lab: Carnegie mellon university motion capture database. http: //mocap.cs.cmu.edu
6. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding (2018). https://doi.org/10.48550/arXiv.1810.04805 7. Driemel, A., Har-Peled, S., Wenk, C.: Approximating the Fréchet distance for realistic curves in near linear time (2010). https://doi.org/10.1145/1810959.1811019
8. GPy: GPy: A gaussian process framework in python. http://github.com/ SheffieldML/GPy (since 2012)
9. Hochreiter, S., Schmidhuber, J.: Long short-term memory (1997). https://doi.org/10.1162/neco.1997.9.8.1735
10. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts (1991). https://doi.org/10.1162/neco.1991.3.1.79
11. Kingma, D.P., Welling, M.: Auto-encoding variational Bayes (2013). https://doi.org/10.48550/arXiv.1312.6114
12. Lawrence, N.: Probabilistic non-linear principle component analysis with Gaussian process latent variable models. J. Mach. Learn. Res. 6(11), 1783–1816 (2005)
13. Lawrence, N.D.: Learning for larger datasets with the gaussian process latent variable model. In: Proc. Int. Conf. Artificial Intelligence and Statistics. vol. 2, pp. 243–250 (2007)
14. Liu, O., Losey, D.: A survey and theoretical analysis of gaussian process latent variable models. In: Proc. NeurIPS (2017)
15. OpenAI: GPT-o1 (2025)
16. Paszke, A., Gross, S., et al.: Pytorch: An imperative style, high-performance deep learning library (2019). https://doi.org/10.48550/arXiv.1912.01703
17. Snelson, E., Ghahramani, Z.: Sparse gaussian processes using pseudo-inputs. In: Adv. Neural Inf. Process. Syst. vol. 18, pp. 1257–1264 (2005)
18. Taubert, N., St. Amand, J., Kumar, P., Gizzi, L., Giese, M.A.: Reactive hand movements from arm kinematics and EMG signals based on hierarchical Gaussian process dynamical models (2020). https://doi.org/10.1007/978-3-030-61609-0_11
19. Urtasun, R., Fleet, D.J., Lawrence, N.D.: Modeling human locomotion with topologically constrained latent variable models (2007). https://doi.org/10.1007/978- 3-540-75703-0_8
20. Wang, J.M., Fleet, D.J., Hertzmann, A.: Gaussian process dynamical models for human motion 30(2) (2008)
21. Wolf, T., et al.: Transformers: State-of-the-art natural language processing (2020). https://doi.org/10.48550/arXiv.1910.03771
22. Wolpert, D.M., Ghahramani, Z.: Computational principles of movement neuroscience (2000). https://doi.org/10.1038/81497 | We present the Gaussian process dynamical mixture model (GPDMM) and show its
utility in single-example learning of human motion data. The Gaussian process
dynamical model (GPDM) is a form of the Gaussian process latent variable model
(GPLVM), but optimized with a hidden Markov model dynamical prior. The GPDMM
combines multiple GPDMs in a probabilistic mixture-of-experts framework,
utilizing embedded geometric features to allow for diverse sequences to be
encoded in a single latent space, enabling the categorization and generation of
each sequence class. GPDMs and our mixture model are particularly advantageous
in addressing the challenges of modeling human movement in scenarios where data
is limited and model interpretability is vital, such as in patient-specific
medical applications like prosthesis control. We score the GPDMM on
classification accuracy and generative ability in single-example learning,
showcase model variations, and benchmark it against LSTMs, VAEs, and
transformers. | [
"cs.LG"
] |
# 1 INTRODUCTION
# 1.1 Background and Motivation
Data stream monitoring has become instrumental in deriving valuable insights for a myriad of applications, such as anomaly detection [2–5], network measurement [6, 7], predictive maintenance [8–10], and customer behavior analysis [11, 12], among others [13–15]. These tasks present significant challenges due to the high volume and velocity of data streams coupled with the constraints on memory and processing time in practical applications. A data stream is a series of items, each of which is a key-value pair. Items that share the same $k e y$ compose a flow, and the key can be considered as the flow $\mathrm { I D } ^ { 1 }$ . The value is the metric that needs to be monitored. All flows’ items are intermingled together in a data stream (𝑒.𝑔., $D S = \{ \langle a , 3 \rangle , \langle a , 2 \rangle , \langle b , 5 \rangle , \langle d , 1 \rangle , \langle a , 4 \rangle , \dots \} ) .$
Sketch, a probabilistic data structure, has gained widespread use in data stream monitoring due to its impressive speed and performance within the confines of limited memory. As of now, most tasks related to data stream monitoring can be categorized into two types:
Monitoring the sum of value per key. Such tasks record value sum of every flow in the data stream, with potential application in anomaly detection, healthcare analysis, social media analysis, and so on. Classical works include CM Sketch [16], CU Sketch [17], and Count Sketch [18]. Monitoring the cardinality of value per key. Such tasks record value cardinality of every flow in the data stream. For instance, the key could represent the source address, while the value could represent the destination address. They are particularly useful in anti-attack scenarios, such as DDoS or superspreaders. CSM Sketch[19] proposes the randomized counter sharing scheme to solve the problem.
In this paper, we define a third task type:
Monitoring the variation of value per key. SELECT key, 𝑣𝑎𝑙𝑢𝑒𝑖 (𝑘𝑒𝑦) FROM 𝐷𝑎𝑡𝑎_𝑆𝑡𝑟𝑒𝑎𝑚 WHERE 𝑣𝑎𝑙𝑢 𝑖 𝑒 (𝑘𝑒𝑦) − 𝑣𝑎𝑙𝑢𝑒 (𝑘𝑒1 𝑦) > 𝑇ℎ𝑟𝑒𝑠ℎ𝑜𝑙𝑑
Such tasks monitor the variation in the value of items belonging to the same flow. If the difference of value between two adjacent items in the same flow exceeds a threshold, we report the flow and the value of the latter item.
A scenario corresponding to this task is the real-time detection of flow gaps in the data stream [20–24]. In this context, the value corresponds to the sequence number each item carries, such as the sequence number of frames in videos or the Identification field in an IP header. The sequence number indicates the relative position of an item in a flow, incrementing by one for each new item. Consequently, if we observe a nonconsecutive $( \geq 2 )$ variation in the sequence number (𝑠𝑒𝑞) between two adjacent items belonging to the same flow, it suggests the occurrence of item loss or item reordering during the data transmission process. Such flow anomalies are defined as the flow gap. If a flow gap is too large, it may have a significant negative impact on the Quality of Service (QoS). Therefore, it becomes crucial to report any incident where the variation in 𝑠𝑒𝑞 between two adjacent items within the same flow exceeds a predefined threshold, which we call major flow gap.
While consistent dropping of a few items over a prolonged period is undeniably important, the sudden and substantial loss of data items, as detected by major-flow-gap, can have immediate and severe consequences in certain scenarios. For example:
Use case 1: Evaluation of real-time video and audio communication quality[25–32]. A frame drop occurs when one or more frames in a video or audio sequence are lost in the transmission process, which is frequently attributed to network congestion or an unstable internet connection. While people often neglect or are able to tolerate a certain level of frame loss in video and audio if the number of frames lost at once is small. A one-time drop of many frames can lead to missing a crucial piece of information or significant visual or auditory disruptions that can affect the overall user experience. Therefore, reporting the major flow gaps is our primary goal in order to evaluate the communication quality. Besides, it is common that a few frames are lost in the video and audio data stream. It would be burdensome to report and analyze them all.
Use case 2: Detection of malicious packet dropping attack in network[33–38]. Malicious packet dropping attacks pose a significant threat to network security as they inhibit the transmission of regular packets, potentially leading to the loss of crucial information. Every network system has certain resilience towards attacks. So the situation we need to worry about is when an attack cannot be automatically dealt with. A major flow gap can indicate a more aggressive form of attack or a change in attack strategy, which requires immediate attention. Our algorithm provides a viable countermeasure to this threat. It enables continuous monitoring of packet drops by tracking the variation in sequence numbers for each flow. Furthermore, our algorithm is designed compactly, allowing it to operate effectively even on devices with limited resources. This capability assists network administrators in pinpointing the causes of packet drops and effective actions can be taken to ensure network performance.
Drawing from the use cases outlined above, we delineate two key requirements for a solution designed for the real-time detection of flow gaps:
R1: Speed. Each item should be processed rapidly enough, which necessitates a solution that is not only simple but also ingenious. The reason behind this lies in the fact that data transmission typically occurs at high speed. Furthermore, the processing nodes along the data transmission path are typically constrained by limited computational power, while the volume of the data stream is immense.
R2: Accuracy. The flow gap reports should be highly accurate with minimal memory overhead. False positive reports would impose additional scrutiny burdens on the network, and failure to detect flow gaps may lead to crucial data losses being overlooked. Besides, to fulfill R1, the memory cost should be minimized to enable the data structure to fit within CPU caches.
# 1.2 Our Proposed Solution
In this paper, we introduce GapFilter as a solution for real-time detection of flow gaps, achieved by monitoring the variation of sequence numbers within a flow. We design GapFilter using Sketch, a concise probabilistic data structure that allows for rapid data processing with manageable errors, as a foundation. Our solution realizes both of the design goals: accuracy and speed. The design of our solution rests on two key ideas: the similarity absorption technique and the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism. Furthermore, we develop two versions of GapFilter. The first version incorporates only the similarity absorption technique, while the second integrates both the similarity absorption technique and the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism. The first version, known as Speed-Oriented GapFilter (GapFilter-SO), excels in terms of speed while maintaining good accuracy. The second version, termed Accuracy-Oriented GapFilter (GapFilter-AO), prioritizes accuracy without significantly compromising speed. Within the context of this paper, we define the number of items within a flow as the flow size, or item frequency. A flow with a large number of items is considered a large flow, while one with a small number of items is considered a small flow. An abnormal flow is characterized by flow gaps caused by item loss or reordering, while flows devoid of such problems are classified as normal flows.
The similarity absorption technique leverages the sequence number as an index to achieve efficient matching, which saves memory and increases speed. Conventionally, to store the information related to a flow, it’s necessary to record the flow ID for future matching. However, in GapFilter, the need to record the flow ID is eliminated, as we utilize the sequence number for the matching task. The similarity absorption technique is to select the recorded sequence number closest to the sequence number of the incoming item as the matched sequence number, which corresponds to the smallest degree of anomaly. This method is chosen based on the premise that normal flows are prevalent in the data stream and severe anomalies are less likely to occur. In practical scenarios, the flow ID typically consists of a 13-byte five-tuple, whereas the sequence number requires no more than 4 bytes (for instance, the Identification field in IP is 2 bytes, and the Packet Number field in QUIC is 4 bytes). Hence, omitting the flow ID leads to considerable memory savings. It also enhances speed since a single field access can accomplish both matching and inspection tasks.
The 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism proficiently integrates broad monitoring of all flows with detailed scrutiny of suspicious flows. We divide our data structure into two parts: (1) 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 and (2) 𝑠𝑢𝑠𝑝𝑒𝑐𝑡. The 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism functions as a digital police force within the network. The 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 component operates like a cyber patrol, broadly monitoring all citizens. When this patrol identifies any suspicious activity, the corresponding citizen transitions to a 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 status. Then, a comprehensive inspection process begins, akin to a detailed investigation conducted by a detective, ensuring network safety and integrity. This strategy leverages our limited resources efficiently, performing a broad inspection of all flows, then focusing more detailed investigation on suspicious flows as flagged by the initial examination.
We use two methods to further improve the accuracy of GapFilter. First, we randomize flows’ sequence number and put flows into small buckets. This ensures that the sequence numbers from different flows are well-distributed, and only a limited number of flows share the entire sequence number range. When a bucket becomes saturated, we implement the Least Recently Used (LRU) replacement policy to update the bucket residents. Secondly, we enhance accuracy by utilizing fingerprints, which are derived by mapping the original long ID to a shorter bit sequence using a hash function. The fingerprint assists the sequence number in the matching process.
To further boost GapFilter’s speed, we utilize the Single Instruction and Multiple Data (SIMD) operations. In the typical implementation of the LRU replacement policy, storing and comparing timestamps leads to additional memory and time costs. The SIMD operations allow us to eliminate these costs by processing all flows in a bucket in parallel, keeping them in chronological order. Consequently, the least recently used flow is simply the last one in the bucket. This approach obviates the need for storing timestamps and allows us to identify the least recently used flow in $O ( 1 )$ time, as opposed to $O ( w )$ , where $w$ denotes the number of flows maintained in each bucket. It also reduces the time complexity of matching from $O ( w )$ to $O ( 1 )$ . Although there is a limit on the number of flows we can operate on in parallel using SIMD, the experiments in Section 6.2 shows that the optimal bucket size $ { \boldsymbol { w } }$ falls under the limit.
We devise a Straw-man solution (Section 6.1.4) for better comparison. Extensive experiments were performed to evaluate the accuracy and speed of our algorithm. The results show that the accuracy of GapFilter-SO and GapFilter-AO achieves about 1.4 and 1.6 times higher than the Straw-man solution, respectively. GapFilter-SO is 3 times faster than the Straw-man solution, while GapFilter-AO is 2.5 times faster than the Straw-man solution. The experiments also show that GapFilter is memory-efficient, for it only needs 64 KB to handle 55M items. We also theoretically prove that GapFilter achieves a high accuracy with limited memory.
# 1.3 Key Contributions:
To the best of our knowledge, we are the first work that formalizes the task of monitoring the variation in the value of items belonging to the same flow, thus addressing a previously unexplored area in the research fields.
• We develop the similarity absorption technique, designed to save memory and increase speed.
• We develop the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism, which optimally leverages limited resources to provide both comprehensive monitoring of all flows and detailed scrutiny of suspicious flows.
• We implement GapFilter-SO and GapFilter-AO on the CPU platform. Compared to the Straw-man solution, both GapFilte SO and GapFilter-AO not only offer superior accuracy but also offer superior speed. All associated codes are publicly accessible on GitHub [1].
# 2 BACKGROUND AND RELATED WORK
To the best of our knowledge, no previous work can be directly applied to accomplish real-time flow gap monitoring in data streams. Though some research has targeted the detection of packet dropping attacks in networks, these studies have different problem definitions and application scenarios compared to our work. Besides, our work meets both requirements delineated in Section 1.1, R1: Speed and R2: Accuracy, which are not met by previous approaches.
# 2.1 Statistics-based mechanisms
The methodologies presented in [38, 39] rely on historical records to form a correlation or distribution pattern. If the following packet do not conform the pattern established by their predecessors, the system will report a packet dropping attack. This approach, however, can only detect packet drops after a significant number of packets have been collected and processed, thereby lacking in speed. Additionally, maintaining records of substantial packet information is memory consuming. The need to transmit topology data to a base station for processing also imposes a significant burden on the network. Furthermore, constant network fluctuations necessitate frequent pattern modifications, making the system susceptible to false-positive attack reports due to changing network conditions. These models may also overlook attacks exhibiting changing behaviors.
# 2.2 Trust-based mechanisms
In [40–42], each node is required to monitor its neighboring nodes to gather various status information about the received and forwarded packets. The observed features are then mathematically modeled and used as input to a probability estimate function to determine whether a particular node is malicious. However, this method has notable drawbacks. Firstly, it necessitates the transmission of a substantial number of packets before any packet drops can be detected, and recording this vast amount of information is memory consuming . Secondly, this method demands the exchange of numerous messages between nodes, which consumes significant bandwidth.
# 2.3 ML-based mechanisms
The approaches adopted by [43, 44] incorporate Machine Learning for their detection methodology. They initially gather multidimensional features from the network, MAC, and physical layers by monitoring events and computing topology statistics. These features are then fed into a model to classify an unknown behavior. However, a primary issue with this kind of solution is that it is heavyweight for energy-limited nodes to extract features from packets and do machine learning at the same time. The processing overhead and memory cost are unacceptable.
# 3 PROBLEM STATEMENT
Definition 1. Data stream. The data stream is a series of items appearing in sequence. For a given data stream $\begin{array} { r l } { \mathcal { D } S } & { { } = } \end{array}$ $\{ e _ { 1 } , e _ { 2 } , e _ { 3 } , . . . , e _ { i } , . . . \} _ { : } ,$ , each item 𝑒 in $\mathcal { D } \boldsymbol { S }$ contains an $\boldsymbol { I D }$ field and a sequence field: $\boldsymbol { e } = \langle F I D , S E Q \rangle$ . 𝐹 𝐼 𝐷 stands for the flow to which $e$ belongs, while 𝑆𝐸𝑄 stands for its relative position in the flow. In other words, 𝐹 𝐼 𝐷 serves as an unordered index and 𝑆𝐸𝑄 serves as an ordered serial number.
Definition 2. Monitoring flow gaps. The flow gap describes the nonconsecutive variation $( \geq 2 ,$ in 𝑆𝐸𝑄 between adjacent items from the same flow. This irregularity may arise due to item loss or item reordering throughout the data transmission process. The magnitude of a flow gap is indicative of the severity of the issue it represents. Specifically, the relationship between the variation in $S E Q$ of two adjacent items and the corresponding situation is delineated as follows:
$$
s i t u a t i o n = \left\{ \begin{array} { l l } { n e g l e c t , } & { v a r \in ( - \mathcal { T } _ { 2 } , 1 ) } \\ { n o r m a l , } & { v a r = 1 } \\ { m a t c h e d \left\{ \begin{array} { l l } { m i n o r g a p , } & { v a r \in [ 2 , \mathcal { T } _ { 1 } ) } \\ { m a j o r g a p , } & { v a r \in [ \mathcal { T } _ { 1 } , \mathcal { T } _ { 2 } ) } \\ { n o t m a t c h e d , } & { v a r \in ( - \infty , - \mathcal { T } _ { 2 } ] \cup [ \mathcal { T } _ { 2 } , + \infty ) } \end{array} \right. } \end{array} \right.
$$
Explanation. In the above equation, the major gap is the condition we aim to detect and report. Our focus is on identifying only positive gaps, because negative gaps are usually accompanied by their positive counterparts. Therefore, we can neglect the variation $\in ( - \mathcal { T } _ { 2 } , 1 )$ . If the situation falls in {𝑛𝑒𝑔𝑙𝑒𝑐𝑡, 𝑛𝑜𝑟𝑚𝑎𝑙, 𝑚𝑖𝑛𝑜𝑟 𝑔𝑎𝑝, 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝}, we call it 𝑚𝑎𝑡𝑐ℎ𝑒𝑑, because we deem the two items belong to the same flow.
$\mathcal { T } _ { 1 }$ and $\mathcal { T } _ { 2 }$ serve as valuable parameters to bound the detection scale of the algorithm, ensuring efficient operation, clear diagnostics, and adaptability across different applications. $\mathcal { T } _ { 1 }$ is designed to adjust the tolerance level of flow gaps. A flow gap that surpasses $\mathcal { T } _ { 1 }$ should be detected and reported.
As for $\mathcal { T } _ { 2 }$ , it guarantees controlled response time and limited computational resources. The reason why we set $\mathcal { T } _ { 2 }$ is similar to why we set a "timeout" threshold in many protocols (e.g., TCP, HTTP, DNS). In dynamic environments, where data streams are volatile, waiting indefinitely for a response can be detrimental, because we cannot let a possibly broken connection take up the memory and computational resources forever. Besides, as shown in Figure 4(b), most of the gaps have small size. Therefore, by capping the detectable gap with $\mathcal { T } _ { 2 }$ , we bring efficiency and predictability to the system.
Depending on the application, $\mathcal { T } _ { 1 }$ and $\mathcal { T } _ { 2 }$ can be adjusted. For high-reliability applications, smaller $\mathcal { T } _ { 1 }$ and $\mathcal { T } _ { 2 }$ might be appropriate, forcing a quicker reconnection or system response. For more lenient applications, $\mathcal { T } _ { 1 }$ and $\mathcal { T } _ { 2 }$ can be increased, allowing for longer interruptions before deeming the flow undetectable.
# 4 GAPFILTER
In this section, we propose our solution in details. The symbols frequently used are shown in table I.
# 4.1 Overview
We devised two versions of GapFilter: Speed-Oriented GapFilter (GapFilter-SO) and Accuracy-Oriented GapFilter (GapFilter-AO). GapFilter-SO is neat and fast, capable of achieving high accuracy on most real-world scenarios. While GapFilter-AO is a little more complicated, but can achieve robust performance even on extreme scenarios. GapFilter-SO (Section 4.2) utilizes the similarity absorption technique, which uses the sequence number to serve as the index number to accomplish matching. GapFilter-AO (Section 4.3) utilizes both the similarity absorption technique and the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛- 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism. The 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism proficiently integrates broad monitoring of all flows with detailed scrutiny of suspicious flows. Furthermore, we develop multiple optimization techniques (Section 4.4) to further improve the performance.
Table 1: Notations.
# 4.2 Proposed Speed-Oriented solution
Figure 1: Illustration of GapFilter-SO. This is a GapFilterSO with $d = 4$ buckets and each bucket has $w = 4$ cells. $e _ { 1 } , e _ { 2 }$ , $e _ { 3 } , e _ { 4 }$ arrive in order and are mapped to different buckets. The final result after the arrival of these four items lies on the right.
# 4.2.1 Data structure.
As shown in figure 1, the data structure of GapFilter-SO is composed of a bucket array (denoted as $B$ ) of $d$ buckets, and a hash function $h ( \cdot )$ . There are $ { \boldsymbol { w } }$ cells in one bucket, each bearing a field of sequence number $S E Q ^ { c }$ in it. The $S E Q ^ { c }$ in the $j ^ { t h }$ cell of the $i ^ { t h }$ bucket is denoted as $B [ i ] [ j ]$ .
# 4.2.2 Monitoring operations.
When a new item $\textit { e } = \langle F I D , S E Q \rangle$ arrives, we map it to the bucket $B [ h ( F I D ) \% d ]$ and choose the cell with the $S E Q ^ { c }$ closest to $S E Q$ . Suppose the $j ^ { t h }$ cell is the one. We plug $\ v a r \ = \ ( S E Q \ -$ $B [ h ( F I D ) \% d ] \left[ j \right] )$ into equation (1) to get the situation:
(i) If the situation is 𝑚𝑎𝑡𝑐ℎ𝑒𝑑, we call the chosen cell the matched cell. We set $B [ h ( F I D ) \% d ] \left[ j \right] = S E Q$ if $S E Q$ is larger. If the situation is 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝, we report it. • (ii) If the situation is 𝑛𝑜𝑡 𝑚𝑎𝑡𝑐ℎ𝑒𝑑, meaning none of the $S E Q ^ { c }$ recorded in this bucket matches $e$ , we will find a cell for $e$ . If there exists an empty cell, we will initialize it by setting its $S E Q ^ { c } = S E Q$ . If there is no empty cell in the bucket, we will empty the cell containing the least recently used (LRU) flow and initialize it by setting its $S E Q ^ { c } = S E Q$ .
Traditionally, two primary methods are employed to implement the LRU (Least Recently Used) replacement policy. The first method is to store and compare timestamps, which leads to extra memory and time cost. The second method is to arrange the items chronologically, obviating the need for extra memory but necessitating a constant rearrangement of items each time a new one arrives. We choose the second method in our solution. To optimize the item rearrangement process, we utilize the tool of Single Instruction and Multiple Data (SIMD) to decrease the time complexity of rearranging to $O ( 1 )$ . The detailed process of implementing LRU through SIMD is elaborated upon in Section 4.4.1. The item rearrangement operation proceeds as follows: the priority of cells within a bucket decreases sequentially from the first to the last. Every time a new item arrives, we put it in the first cell of the bucket while existing items are shifted backward. When we need to empty a cell, we simply empty the last cell because it holds the item of the lowest priority (i.e., the least recently used).
The corresponding pseudo-code is shown in Algorithm 1.
Algorithm 1: Speed-Oriented solution
19 return 𝑟𝑒𝑝𝑜𝑟𝑡
# 4.2.3 Example.
In the example shown in Figure 1, we set $\mathcal { T } _ { 1 } = 5$ and $\mathcal { T } _ { 2 } = 3 0$ .
• When $e _ { 1 } \mathrm { = } \langle F I D = f _ { 1 } , S E Q = 2 3 4 2 1 \rangle$ arrives, it successfully finds a matched cell with $S E Q ^ { c } = 2 3 4 2 0$ in the bucket. Since 23421
$2 3 4 2 0 = 1$ , the situation is considered normal, Consequently, we update $S E Q ^ { c }$ to 23421 and rearrange the items in the bucket.
• When $e _ { 2 } \ = \ \langle F I D \ = \ f _ { 2 } , S E Q \ = \ 1 1 9 3 1 \rangle$ arrives, it successfully finds a matched cell with $S E Q ^ { c } = 1 1 9 2 0$ in the bucket. Since $\mathcal { T } _ { 1 } \leq 1 1 9 3 1 - 1 1 9 2 0 < \mathcal { T } _ { 2 }$ , a 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 is found and reported. Then, we update the $S E Q ^ { c }$ to 11931 and rearrange the items in the bucket.
• When $\begin{array} { r } { e _ { 3 } \ = \ \langle F I D \ = \ f _ { 3 } , S E Q \ = \ 1 0 0 1 \rangle } \end{array}$ arrives, it fails to find a matched cell, prompting the insertion of $e _ { 3 }$ into an empty cell in the bucket. The items in the bucket are subsequently rearranged.
• When $e _ { 4 } \ = \ \langle F I D \ = \ f _ { 4 } , S E Q \ = \ 2 1 0 3 \rangle$ arrives, it fails to find a matched cell or an empty cell. So it empties the last cell and inserts itself into the first cell. Finally, all the items in the bucket are rearranged.
# 4.2.4 Analysis.
Similarity Absorption. In our design, the sequence number also serves as the index number to accomplish matching. The traditional way that uses the ID to do matching is both memory consuming and time consuming. In our solution, we use a greedy algorithm that selects the recorded sequence number closest to the sequence number of the incoming item as the matched sequence number. This is because in most cases, we expect the network to function properly. The recorded sequence number that indicates the smallest flow gap is most likely to be the matched sequence number.
Grouping. To reduce the risk of sequence number collision (𝑠𝑒𝑞 collision), we conduct flow grouping in our approach. Each bucket in GapFilter-SO represents a group, with a group size of $ { \boldsymbol { w } }$ . By grouping flows, we can assure that only a limited number of flows share the entire sequence number range. Thus, A smaller group size implies a lower risk of 𝑠𝑒𝑞 collision. However, it is important to avoid setting $ { \boldsymbol { w } }$ too small. If $w$ is too small, it increases the likelihood of more than $ { \boldsymbol { w } }$ flows competing for the limited $w$ cells in a bucket. The experiment results related to this issue are shown in Section 6.2.
# 4.3 Proposed Accuracy-Oriented solution
Figure 2: Monitoring operations in GapFilter-AO.
# 4.3.1 Data structure.
Similar to GapFilter-SO, the data structure of GapFilter-AO consists of a bucket array (𝐵) with $d$ buckets and a hash function $h ( \cdot )$ . GapFilter-AO differs from GapFilter-SO in that each bucket in GapFilter-AO is divided into two parts: (1) $c$ civilians and (2) 𝑠 suspects $( c + s = w )$ ). Each cell in a bucket contains a field $S E Q ^ { c }$ . The priority of cells in the civilian part is updated based on the Least Recently Used (LRU) replacement policy. While in the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part, we devise a different replacement policy called Least Recently Disrupted (LRD). Instead of assigning a flow with the highest priority when a new item of it arrives as what 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 does, 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 only assigns the flow with the highest priority when a new 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 occurs in it. Every time we need to evict a flow, we evict the one that has not encountered 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 for the longest time. The 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 part of GapFilter-AO functions as an overall monitor for all the flows, while the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part focuses on keeping suspicious flows under constant and meticulous surveillance.
Algorithm 2: Accuracy-Oriented solution
24 return 𝑟𝑒𝑝𝑜𝑟𝑡
# 4.3.2 Monitor operation.
When an item $\boldsymbol { e } = \left. F I D , S E Q \right.$ arrives, we map it to the bucket $B [ h ( F I D ) \% d ]$ and select the cell with the $S E Q ^ { c }$ closest to $S E Q$ Suppose the $j ^ { t h }$ cell is the one selected. We plug $\upsilon a r = ( S E Q -$ $B [ h ( F I D ) \% d ] \left[ j \right] )$ into equation (1) to get the situation. Figure 2 shows the different cases of insertion. The $e _ { i } ( i \in \{ 0 , 1 , 2 , 3 , 4 , 5 , 6 , 7 \} )$ in the figure represents the item recorded in the $i ^ { t h }$ cell.
Case (i) The matched cell indicates one of the following situations: 𝑛𝑒𝑔𝑙𝑒𝑐𝑡, 𝑛𝑜𝑟𝑚𝑎𝑙 or 𝑚𝑖𝑛𝑜𝑟 𝑔𝑎𝑝. If the matched cell is in the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part $\textcircled{1}$ , we simply update the $S E Q ^ { c }$ without rearranging items. However, if the matched cell is in 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 part, we not only update the $S E Q ^ { c }$ but also rearrange the matched cell to the first cell in the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 $\textcircled{2}$ .
• Case (ii) The matched cell indicates a 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝. First, we report the 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝. Next, no matter whether the matched cell is in 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 $\textcircled{3}$ or 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 $\textcircled{4}$ , we update the $S E Q ^ { c }$ and rearrange it to the first cell in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 . If the matched cell is in 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛, then 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 must be full before $e$ arrives (we will explain later). Therefore, the rearrange operation will force 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 to transfer its last item to 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛.
• Case (iii) There is no matched cell in the bucket. In this situation, we fail to find the information about this flow, so we insert it to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 for further examination. Because we do not know whether it is an abnormal flow, we assign it the lowest priority when inserted to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡. Specifically, if 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 still has empty cells, we insert $e$ to the first (leftmost) empty cell $\textcircled{5}$ . If 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 is full, we transfer the last (rightmost) item in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 to 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛, and insert $e$ to the last cell in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 $\textcircled{6}$ .
Using the above operations $\textcircled{1}$ to $\textcircled{6}$ ), flows may transition back and forth between the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 and 𝑠𝑢𝑠𝑝𝑒𝑐𝑡. A flow that becomes suspicious in 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 will be transferred to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 . A flow in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 that gets squeezed out by other more suspicious flows, which we call "exonerated", will be transferred to 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛. When a flow gets squeezed out by other flows in 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛, it will simply be discarded. This 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism proficiently integrates broad monitoring of all flows with detailed scrutiny of suspicious flows.
The corresponding pseudo-code is shown in Algorithm 2. Further details on how these operations sufficiently meet our requirements will be explained in the following paragraphs.
At the beginning of the monitoring process, the data structure is empty and every flow is considered new due to the absence of historical data. We should insert all the flows to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 as priority, because space is abundant and we can give them as meticulous surveillance as possible. Therefore, when there are empty cells in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡, it implies that the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 is entirely vacant. Conversely, when 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 contains items, it indicates that the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 is at full capacity. This need is met by operation $\textcircled{5}$ .
During the mid-phase of the monitoring process, for a large abnormal flow, if it is at 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 , operation $\textcircled{3}$ will ensure its information is well stored, so we can detect 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝𝑠 in time. If it is at 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛, operation $\textcircled{2}$ will ensure its information is well stored. Once a 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 happens, the flow will be transitioned to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 via operation $\textcircled{4}$ . For a small abnormal flow, if it is at 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 , operation $\textcircled{3}$ and $\textcircled{1}$ prevent it from being displaced by normal large flows, thus guaranteeing its data is well stored for 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 detection. If it is at 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛, it may get squeezed out by large flows. Nevertheless, if that happens, operation $\textcircled{6}$ will send it to 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 , where its data will be well protected, enabling the detection of 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝𝑠.
# 4.3.3 Discussion.
The motivation for suspect. The Least Recently Used (LRU) replacement policy implemented in GapFilter-SO inherently favors large flows. This favoritism is generally justified as large flows often represent crucial or high-priority data4. However, this LRU replacement policy does present a potential area for improvement. Specifically, there is a risk that smaller abnormal flows may be displaced by larger normal flows, in which case we lose the information we want and keep the information we do not need. The Least Recently Disrupted (LRD) replacing policy, which is used in the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part, guarantees that no matter how large a normal flow is, it crowds out no small abnormal flow. Thus, all the information we want is well protected.
The reason to keep civilian. 𝑆𝑢𝑠𝑝𝑒𝑐𝑡 is supposed to protect the flows that are encountering 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝𝑠. However, if we do not know any information about the new flows coming into 𝑠𝑢𝑠𝑝𝑒𝑐𝑡, we may have let normal flows squeeze out abnormal flows that are already in 𝑠𝑢𝑠𝑝𝑒𝑐𝑡. This is where the 𝐶𝑖𝑣𝑖𝑙𝑖𝑎𝑛 becomes particularly valuable. It offers every flow the opportunity to have its flow gap detected, acting as a crucial preliminary screening stage for potentially suspicious flows. That is why we claim the 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism proficiently integrates broad monitoring of all flows with detailed scrutiny of suspicious flows.
# 4.4 Optimizations
# 4.4.1 SIMD Acceleration.
Normally, to find a matched sequence number in a bucket, the time complexity is related to the bucket size $ { \boldsymbol { w } }$ . Besides, it requires storing and comparing timestamps to realize LRU replacing policy, which leads to extra memory and time cost. Nevertheless, by using SIMD, we can operate on all fields in a bucket at once with only one instruction, reducing the time complexity of finding a matched sequence number to $O ( 1 )$ . As for the implementation of LRU, instead of storing and comparing timestamps, we sort all flows in a bucket according to the time order. Every time an item $\boldsymbol { e } = \langle F I D , S E Q \rangle$ arrives, we put the flow $F I D$ to the first cell of the bucket and move other flows backward.
Specifically, we can fetch all $ { \boldsymbol { w } }$ cells at once using the function $\_ m m \_ s e t 1 \_ p i 1 6 ( )$ , and use function _𝑚𝑚_𝑠𝑢𝑏𝑠_𝑝𝑖16 and _𝑚𝑚_𝑚𝑖𝑛_𝑝𝑖16 to compare $S E Q$ with $S E Q ^ { c }$ in all the cells to try to find the matched cell. After updating the $S E Q ^ { c }$ , we use the function _𝑚𝑚_𝑠ℎ𝑢 𝑓 𝑓 𝑙𝑒_𝑝𝑖16() to rearrange all the flows according to the time order. Every operation above is implemented in $C { + } { + }$ and can be completed in one SIMD instruction (𝑖.𝑒., one CPU cycle). For an example of the rearrange operation, suppose the $j ^ { t h }$ item is the one we want to update, we move the $j ^ { t h }$ item to the $0 ^ { t h }$ cell, the $0 ^ { t h }$ item to the $1 ^ { s t }$ cell, the $1 ^ { s t }$ item to the $2 ^ { n d }$ cell, the $2 ^ { n d }$ item to the $3 ^ { r d }$ cell, ..., the $( j - 1 ) ^ { t h }$ item to the $j ^ { t h }$ cell. The $( j + 1 ) ^ { t h }$ to $( w - 1 ) ^ { t h }$ items remain at their original cells. The priority of cells is defined to diminish from first (leftmost) to last (rightmost). In this way, each time we insert an item $\boldsymbol { e } = \left. F I D , S E Q \right.$ , we update flow 𝐹𝐼𝐷’s priority to the highest. When we need to empty a cell, we simply empty the last cell because it has the lowest priority (least recently used). When a bucket is not full, the empty cells will be all at the back of the bucket.
# 4.4.2 Sequence number randomizing.
To mitigate 𝑠𝑒𝑞 collisions caused by internal problems, such as the possibility of multiple flows starting with the same sequence number and thereby increasing the probability of sequence number collision, we preprocess the sequence number before it enters the GapFilter algorithm. This preprocessing involves the utilization of a hash function $b ( \cdot )$ to generate a random bias $b ( F I D )$ , which is then added to the original sequence number of every item in the same flow. By adding this random bias to the sequence number, we achieve a desirable outcome: the range of the randomized sequence numbers for every flow is separate from each other. Consequently, the risk of 𝑠𝑒𝑞 collisions is significantly reduced.
# 4.4.3 Fingerprint.
In some extreme scenarios, some assistance on matching may further improve the accuracy. We use a hash function $\mathcal { S } ( \cdot )$ to generate a fingerprint $\mathcal { P } ( F I D )$ , which is a short bit sequence. We store the fingerprint with the sequence number in the cells. When a new item $e$ arrives, the matched cell must have both a matched fingerprint and a matched sequence number. With memory set as a constant, when the length of fingerprint increases, the risk of hash collisions of fingerprint decreases. But the number of buckets in the data structure decreases because each fingerprint takes more space. Therefore, the choice of the optimal length of fingerprints is made by a good trade-off between the above two effects. Besides, a longer fingerprint requires longer computation time.
# 5 MATHEMATICAL ANALYSIS
In this section, we provide the theoretical analysis of the performance of GapFilter.
For convenience, we first define some variables.
Correct Instances (CI): Number of correct instances. Not-Reported Instances (NRI): Number of unreported instances, 𝑅𝑅 = 1 − 𝑁𝑅𝐼 . Recall Rate (RR): Ratio of the number of correctly reported instances to the number of the correct instances.
Next, we assume:
Flows are uniformly distributed in the $d$ buckets, which means each bucket might receive $\begin{array} { r } { t \ = \ \frac { N } { d } } \end{array}$ different flows, $N = \# ( f l o w )$ is the number of flows. The flows also share the same 𝑓 𝑙𝑜𝑤 𝑔𝑎𝑝 ratio $\beta$ .
In each $A _ { i } = \{ f l o w \mid h ( F I D ) \% d = i \}$ , the next arriving item is independent to the existing items. Suppose the next item is $\langle F I D , S E Q \rangle$ and a major gap occurs, the major gap’s being reported indicates that the flow is contained in the existing items in the $i ^ { t h }$ bucket.
Flows’ size follows $Z i p f$ distribution. $Z i p f$ distribution is a commonly used approximation of reality. Flow sizes may vary greatly. The items from the large flows (or the elephant flows defined in [45]) is the vast majority and deserves most attention[45–47]. Therefore, we assume the size of the $j ^ { t h }$ largest flow is $L _ { 0 } * j ^ { - \alpha } ( 1 < \alpha \leq 3 )$ .
Normally, $N \gg d$ , $t \gg 1$ . With $d$ over 1000 and $M \ll d$ , the first $M$ large flows are typically evenly distributed in $M$ different sets $A _ { j _ { 1 } } , A _ { j _ { 2 } } , A _ { j _ { 3 } } , \cdot \cdot \cdot , A _ { j _ { M } }$ This is based on the independence of their FIDs’ hash values. It is just like $M$ balls randomly thrown into $d$ buckets. When $M \ll d$ , the probability of $M$ balls thrown in different $M$ buckets is close to 1. In every $A _ { j _ { i } }$ , the large flow is the vast majority since parameter $\alpha > 1$ . Thus, we analyze one large flow and multiple small flows in each $A _ { j _ { i } }$ .
Lemma 1. We denote the probability of $i$ flows distributed in i different sets $A _ { j _ { 1 } } , A _ { j _ { 2 } } , A _ { j _ { 3 } } , \cdot \cdot \cdot , A _ { j _ { i } }$ as $P _ { d i f f } ( i )$ . For $M < d$ we have 𝑃𝑑𝑖𝑓 𝑓 (𝑀) > (1 − 𝑀 )𝑀−𝑑𝑒−𝑀 .
Proof. For a good hash function, we can assume the independence and uniformity among the hashed values of different flow IDs.
$$
\frac { P _ { d i f f } ( i + 1 ) } { P _ { d i f f } ( i ) } = P ( i + 1 ~ d i f f e r ~ f r o m 1 , 2 , . . . , i )
$$
$$
\begin{array} { r l } & { \quad P _ { d i f f } ( i + 1 ) } \\ & { = \cfrac { P _ { d i f f } ( i + 1 ) } { P _ { d i f f } ( f ( i ) ) } \cfrac { P _ { d i f f } ( i ) } { P _ { d i f f } ( i - 1 ) } \cdots \cfrac { P _ { d i f f } ( 2 ) } { P _ { d i f f } ( 1 ) } P _ { d i f f } ( 1 ) } \\ & { \quad ( P _ { d i f f } ( 1 ) = 1 ) } \\ & { = \pi _ { j = 1 } ^ { i } P ( j + 1 \ d i f f e r \ f r o m 1 , 2 , \ldots , j ) } \\ & { = \pi _ { j = 1 } ^ { i } \cfrac { d - j } { d } } \\ & { = \pi _ { j = 1 } ^ { i } ( 1 - \cfrac { j } { d } ) } \end{array}
$$
let $i + 1 = M$ , we get
Proof. If $f _ { i , j }$ have already arrived before the arrival of $ { \boldsymbol { w } }$ other arrived items, the 𝑚𝑎 𝑗𝑜𝑟 𝑔𝑎𝑝 would be reported. In view of the weak correlation between the FIDs of two adjacent items, we can consider them to be independent. The probability of $f _ { i , j }$ has not arrived in the last $ { \boldsymbol { w } }$ items is $\textstyle ( 1 - L _ { i , j } / ( \sum _ { j } L _ { i , j } ) ) ^ { w }$ and then we have 𝑃 ( 𝑓𝑖,𝑗 𝑎𝑟𝑟𝑖𝑣𝑒𝑑 𝑖𝑛 𝑤 𝑖𝑡𝑒𝑚𝑠) = 1 − (1 − Í𝐿𝑗 𝑖,𝐿𝑗𝑖,𝑗 ) 𝑤. Therefore the NRI of flow 𝑓𝑖,𝑗 is smaller than 𝛽 ∗ 𝐿𝑖,𝑗 (1 − 𝐿𝑗 𝑖,𝐿𝑗𝑖,𝑗
Theorem 1. For a data stream obeying Zipf distribution with 𝛼 > 1 , we have 𝑅𝑅 > 1−2𝛼 (𝛼 −1) 𝛼 −1𝑀− $R R > 1 - 2 ^ { \alpha } ( \alpha - 1 ) ^ { \frac { 1 } { \alpha } - 1 } M ^ { - \frac { ( \alpha - 1 ) ^ { 2 } } { \alpha } }$ with the probability of 1 𝑀 𝑀−𝑑𝑒−𝑀 . 𝑑 )
$$
\begin{array} { r l } & { \mathbb { P } _ { p , q , p ; \lambda \in \mathcal { N } } ( \lambda ) } \\ & { = \mathcal { R } _ { p , q , p } ^ { \lambda \star \star } ( 1 - \frac { \lambda } { q } ) } \\ & { - \mathbb { P } _ { p , q , p } ^ { \lambda \star } ( 1 - \frac { \lambda } { p } ) } \\ & { - \exp ( \lambda - \frac { 1 } { p } ) \log ( 1 - \frac { \lambda } { p } ) } \\ & { \exp ( \lambda - \frac { 1 } { p } ) } \\ & { \exp ( \lambda - \frac { 1 } { p } ) \int _ { q , q \in \mathcal { N } } ^ { \lambda \star \lambda } \hat { d } \phi ( 1 - \lambda ) ) } \\ & { - \exp ( \lambda ) \int _ { q , q \in \mathcal { N } } ^ { \lambda \star \lambda } \hat { d } \phi ( 1 - x ) ) } \\ & { \times \exp ( \lambda ) \int _ { p } ^ { \lambda \star \lambda } \hat { d } ( \lambda - x ) d x ) } \\ & { - \exp ( \lambda - \frac { 1 } { p } ) \langle \Delta ( - \Delta ( \lambda - \lambda ) \hat { d } ( \lambda - \lambda ) \hat { d } ( \lambda - \lambda ) ) \rangle } \\ & { = ( \exp ( \lambda - \frac { 1 } { p } ) \cdot ( \lambda - \frac { 1 } { p } ) ) \cdot ( \lambda - \frac { 1 } { p } ) ) } \\ & { = ( \lambda - \frac { 1 } { p } ) \cdot ( \lambda - \frac { 1 } { p } ) \cdot ( \lambda - \frac { 1 } { p } ) ) } \end{array}
$$
Furthermore, for fixed $M$ , we have
$$
\operatorname * { l i m } _ { d \infty } ( 1 - \frac { M } { d } ) ^ { M - d } e ^ { - M } = \operatorname * { l i m } _ { d \infty } ( 1 - \frac { M } { d } ) ^ { - d } e ^ { - M } = e ^ { M } e ^ { - M } = 1
$$
Therefore, we have $\operatorname* { l i m } _ { d \to \infty } { \cal P } _ { d i f f } ( M ) = 1 .$
Given a data stream and a sketch, the $j ^ { t h }$ largest flow in $A _ { i }$ is hereby denoted as $f _ { i , j }$ , while the size of $f _ { i , j }$ is denoted as $L _ { i , j }$ .
Lemma 2. The number of incorrectly reported instances of flow $f _ { i , j }$ is smaller than 𝛽 ∗ 𝐿𝑖,𝑗 (1 − 𝑖,𝑗 𝐿𝑖,𝐿𝑗 )𝑤 , where 𝑤 is the number of cells in a bucket.
Proof. From Lemma 1, we know that with the probability $( 1 - \textstyle { \frac { M } { d } } ) ^ { M - d } e ^ { - M } $ $M$ .loLwest .r bWuetehdavine $M$ $A _ { j _ { 1 } } , A _ { j _ { 2 } } , A _ { j _ { 3 } } , \cdot \cdot \cdot , A _ { j _ { M } } .$ $\begin{array} { r } { L _ { M } = \frac { M ^ { 1 - \alpha } } { \alpha - 1 } } \end{array}$ $\begin{array} { l l l l } { { L _ { M } } } & { { = } } & { { \int _ { M } ^ { + \infty } x ^ { - \alpha } d x } } & { { > } } & { { \sum _ { i = M + 1 } ^ { + \infty } i ^ { - \alpha } } } \end{array}$ . Besides, we−introduce a middle variable $M _ { 1 } . M _ { 1 }$ is defined as the largest integer such that $L _ { M } < M _ { 1 } ^ { - \alpha }$ . Hence, we have $( M _ { 1 } + 1 ) ^ { - \alpha } < L _ { M } < M _ { 1 } ^ { \alpha }$ .
Assume the $i ^ { t h } ( 1 \leq i \leq M )$ largest flow is located in set $A _ { k }$ , $L _ { k , 1 } = L _ { 0 } * i ^ { - \alpha }$ .
$$
\begin{array} { r } { \mathrm { L e t } \ p _ { i } = \frac { L _ { k , 1 } } { \sum _ { A _ { k } } L _ { k , j } } \quad \mathrm { , c o n s i d e r } \quad \sum _ { A _ { k } } L _ { k , j } = L _ { k , 1 } + \sum _ { j \neq 1 } L _ { k , j } \ < } \\ { L _ { k , 1 } + \sum _ { i = M } ^ { + \infty } L _ { 0 } * i ^ { - \alpha } < L _ { k , 1 } + L _ { M } * L _ { 0 } , \mathrm { w e \ h a v e } \quad p _ { i } > \frac { L _ { k , 1 } } { L _ { k , 1 } + L _ { M } * L _ { 0 } } . } \end{array}
$$
From 2, we know that the NRI of $i ^ { t h }$ flow is smaller than $\beta *$ $L _ { k , 1 } * ( 1 - p _ { i } ) ^ { w }$ , so:
$$
\begin{array} { r l } { \sin ( \theta _ { 1 } ) } & { = \phantom { - } \sum _ { k \geq 0 } x _ { k } ^ { 0 } } \\ & { \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } } \\ & { \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } } \\ & { - \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } } \\ & { \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } } \\ & { \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } } \\ & { \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } } \\ & \leq x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \\ & - x _ { k } ^ { 0 } + \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } { \mu _ { k } } \frac { 1 } \mu _ \end{array}
$$
With $\begin{array} { r } { C I = \beta ( \sum _ { i } { L _ { 0 } * i ^ { - \alpha } } ) > \beta ( \int _ { 1 } ^ { + \infty } { \frac { x ^ { - \alpha } } { \alpha - 1 } } ) = \frac { \beta L _ { 0 } } { \alpha - 1 } } \end{array}$ , we have
$$
R R = 1 - \frac { N R I } { C I } > 1 - 2 ^ { \alpha } ( \alpha - 1 ) ^ { \frac { 1 } { \alpha } - 1 } M ^ { - \frac { ( \alpha - 1 ) ^ { 2 } } { \alpha } }
$$
For $1 ~ < ~ \alpha ~ < ~ 3$ , the constant $2 ^ { \alpha } ( \alpha - 1 ) ^ { \frac { 1 } { \alpha } - 1 } < 6$ . Therefore, $\operatorname* { l i m } _ { M \to \infty } R R = 1$ .
# 6 EXPERIMENTS
In this section, we evaluate the performance of GapFilter through experiments. We implement GapFilter on the CPU platform, and evaluate it in the 𝑓 𝑙𝑜𝑤 𝑔𝑎𝑝 problem. The source codes are available at GitHub [1].
# 6.1 Experiment Setup
# 6.1.1 Datasets.
We select four real-world datasets and perform experiments using the Identification field in the $\mathbb { P }$ header as the sequence number. The concurrency circumstances of all/abnormal flows in these datasets are illustrated in Figure 3. The distributions of flow length and gap size in these datasets are illustrated in Figure 4. We set $\mathcal { T } _ { 1 } = 5 , \mathcal { T } _ { 2 } = 3 0$ .
CAIDA: We use an anonymized network trace dataset collected by CAIDA [48] in 2018. Each item in the dataset is distinguished by a 5-tuple (source IP address, source port, destination IP address, destination port, protocol) that uniquely identifies a UDP/TCP session. The slice used in this work contains network traffic in 1 min, which includes around 30M items and 1.3M flows.
MAWI: The MAWI dataset contains real traffic trace data maintained by the MAWI Working Group[49]. Each item ID in this dataset is also a 5-tuple, similar to the CAIDA dataset. There are around 55M items and 9M flows in the MAWI dataset.
• MACCDC: The MACCDC dataset, comprising of approximately 3M items and 2M flows, is provided by the U.S. National CyberWatch Mid-Atlantic Collegiate Cyber Defense Competition (MACCDC)[50].
IMC: The IMC dataset is sourced from one of the data centers studied in Network Traffic Characteristics of Data Centers in the Wild[51]. Each item is identified by a 5-tuple. There are around 18M items in the IMC dataset, with 560K total flows.
Synthetic item loss. To validate the effectiveness of GapFilter under extreme network conditions, we design a method of imposing synthetic item loss and apply it to our original datasets. In the experiments, the CAIDA, MAWI, and MACCDC datasets are used without synthetic item loss, whereas the IMC dataset is imposed with synthetic item loss.
We divide our dataset into $n _ { T }$ time windows of equal lengths and categorize flows within each window as normal or abnormal. For each time window, a certain portion $r \in ( 0 , 1 )$ of the flows are randomly chosen and marked as abnormal, with the remaining flows marked as normal. We define two kinds of item loss in the data stream: (1) Consecutive item loss and (2) Single item loss. Consecutive item loss is implemented only on the abnormal flows. For any item $\boldsymbol { e } _ { 1 } = \langle F I D _ { 1 } , S E Q _ { 1 } \rangle$ in the abnormal flows, we generate a random number $j \in \{ x \in \mathbb { N } | \mathcal { T } _ { 1 } \leq x < \mathcal { T } _ { 2 } \}$ with equal probability. In the flow $F I D _ { 1 }$ , we drop all the items with $S E Q \in \{ x \in \mathbb { N } | S E Q _ { 1 } \leq$ $x < S E Q _ { 1 } + j \}$ with a probability of $b ^ { j }$ , where $b$ is a predefined constant. For a normal flow or an abnormal flow escaping the consecutive item loss, we execute the single item loss on it. For any item $e _ { 2 } = \langle F I D _ { 2 } , S E Q _ { 2 } \rangle$ in the flows applicable to single item loss, we drop $e _ { 2 }$ with a probability of a predefined number $p$ .
Consecutive item loss simulates the network congestion observed in real-world scenarios, where the buffer in a router or switch fills up and subsequent items arriving at this node are dropped. If such situation persists, consecutive item loss would occur in a flow. Single item loss represents items being lost during the transmission process due to weak or unstable signal conditions in the real world.
# 6.1.2 Implementation.
We implement our GapFilter-SO, GapFilter-AO and Straw-man solution in $C + +$ on a CPU platform. The hash functions utilized are the 64-bit Bob Hash [52] initialized with different random seeds. Both GapFilter-SO and GapFilter-AO utilize sequence number randomizing. Only GapFilter-AO utilizes fingerprint.
Figure 3: Concurrency circumstances of all/abnormal flows on different datasets.
# 6.1.3 Metrics.
Precision Rate (PR): Precision rate is the ratio of the number of correctly reported instances to the number of reported instances. • Recall Rate (RR): Recall rate is the ratio of the number of correctly reported instances to the number of the correct instances. • 𝐹1 Score: 2×𝑃𝑅×𝑅𝑅
# 6.1.4 Algorithm Comparison.
We devised a Straw-man solution based on Cuckoo Filter [53]. Cuckoo Filter is an efficient hash table implementation based on cuckoo hashing [54], which can achieve both high utilization and compactness. It realizes constant time lookup and amortization constant time insertion operations. Straw-man also uses fingerprint instead of recording flow-id to improve space utilization.
Specifically, the data structure consists of a table of buckets and three hash functions $h _ { 1 } ( \cdot ) , h _ { 2 } ( \cdot )$ , and $h _ { f } ( \cdot )$ . Every bucket contains 𝑤 cells, each recording a fingerprint $\check { f } p ^ { c }$ and a sequence number $S E Q ^ { c }$ . Due to the fact that $ { \boldsymbol { w } }$ in the original design of Cuckoo Filter must be a power of two, we improve it by dividing buckets into two blocks to accommodate various memory sizes: For each incoming item $\textit { e } = \langle F I D , S E Q \rangle$ , we first calculate its $f p \ = \ h _ { f } ( F I D )$ and map it into the $[ h _ { 1 } ( f p ) ] ^ { t h }$ bucket in block one and the $[ h _ { 2 } ( f p ) ] ^ { t h }$ bucket in block two. Then we search for a cell with its $f p ^ { c } = f p$ in the two buckets. With the matched cell found, we calculate $d i f = S E Q - S E Q ^ { c }$ , use equation (1) to determine the situation, and update the $S E Q ^ { c }$ to be $m a x \{ S E Q , S E Q ^ { c } \}$ . If such cells do not exist, we will insert $e$ to a cell by setting its $F I D ^ { c } = F I D$ and $f p ^ { c } = f p$ when a bucket still contains empty cells. If all the buckets are full, we will randomly evict an item in a cell, and place it into the other bucket where it can go. If that bucket is also full, then another eviction will be triggered. This process goes on and on until an empty cell exists or a predefined MAX_NUMBER_OF_TURNS is reached. In these experiments, we set $w = 4$ and length of $f p$ to be 32𝑏𝑖𝑡𝑠 to avoid collisions. MAX_NUMBER_OF_TURNS is set to 8 because it can already achieve a high utility rate of memory. Setting a larger MAX_NUMBER_OF_TURNS will not improve the accuracy much but will lower the throughput.
Figure 4: Distributions of flow length and gap size.
6.2 Experiments on Parameter Settings
Figure 5: Effect of 𝑤 on GapFilter-SO on different datasets.
Figure 6: Effect of $\mathbf { \boldsymbol { s } } / \mathbf { \boldsymbol { c } }$ on GapFilter-AO on different datasets.
0.6 0 0.9
E Ours-SO(w=2) 0.7 Ours-SO(w=2)
0.4 Ours-SO(w=4) Ours-SO(w=4) Ours-SO(w=8) 0.6 Ours-SO(w=8) Ours-SO(w=16) Ours-SO(w=16) 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) (a) CAIDA (b) MAWI 0.9
0.6
1 Ours-AO(s=1,c=7) 0.7 Ours-AO(s=1,c=7) 0.4 Ours-AO(s=3,c=5) Ours-AO(s=3,c=5) Ours-AO(s=5,c=3) 0.6 Ours-AO(s=5,c=3) Ours-AO(s=7,c=1) Ours-AO(s=7,c=1) 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) (a) CAIDA (b) MAWI
Effect of w on GapFilter-SO (Figure 5). We perform experiments with $ { \boldsymbol { w } }$ ranging from 2 to 16 in GapFilter-SO and observe their performance on various datasets. As shown in the graph, the accuracy of GapFilter-SO first increases and then decreases as 𝑤 enlarges.
0.98 0.95 0 0.94
s s
0.85 F Ours-AO(lf=2 bits) 0.92 Ours-AO(lf=2 bits) Ours-AO(lf=4 bits) Ours-AO(lf=4 bits) 0.80 Ours-AO(lf=8 bits) 0.90 Ours-AO(lf=8 bits) Ours-AO(lf=16bits) Ours-AO(lf=16bits) 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) (a) MACCDC (b) IMC
The GapFilter-SO with $w = 8$ performs best. This is attributable to the two counteracting effects on accuracy as 𝑤 gets larger: the risk of different $S E Q$ colliding in a bucket increases, while the probability that too many large flows crush in a bucket decreases.
Effect of $\mathbf { \boldsymbol { s } } / \mathbf { \boldsymbol { c } }$ on GapFilter-AO (Figure 6). We perform experiments with the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡/𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 memory ratio ranging from $1 : 7$ to $7 : 1$ in GapFilter-AO and observe their performance on various datasets. As shown in the graph, the accuracy of GapFilter-AO first rises and then falls when 𝑠𝑢𝑠𝑝𝑒𝑐𝑡/𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 memory ratio becomes larger. The GapFilter-AO with ratio $= 3 : 5$ performs best. As we analyze in 4.3.3, the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part is designed to protect the small abnormal flows from being ousted by large normal flows. However, when the 𝑠𝑢𝑠𝑝𝑒𝑐𝑡 part consumes too much memory, we lack sufficient 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 cells to monitor overall flows, leading to a loss of much necessary information.
Effect of fingerprint length on GapFilter-AO (Figure 7). We conduct experiments with fingerprint lengths $( l _ { f } )$ ranging from 2 bits to 16 bits in GapFilter-AO and observe their performance on different datasets. As shown in the graph, the accuracy of GapFilter-AO first increases and then decreases when $l _ { f }$ enlarges. The GapFilter-AO with $l _ { f } = 8$ bits performs the best. This is because on one hand, increasing $l _ { f }$ improves the accuracy of matching(see Section 4.4.3 for more details) by reducing mistakes caused by 𝑠𝑒𝑞 collisions. On the other hand, a fingerprint with a larger $l _ { f }$ will occupy more memory, decreasing the total number of cells.
Parameter Selection. In experiments from here on, we set $w = 8$ $l _ { f } = 8$ and 𝑠𝑢𝑠𝑝𝑒𝑐𝑡/𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛 memory ratio to be $3 : 5$ because they are the most robust setting considering performances at different datasets.
# 6.3 Experiments on Accuracy
In this section, we compare the accuracy of the Straw-man solution, GapFilter-SO and GapFilter-AO. We analyze the changes in $F _ { 1 }$ as we alter the memory allocation from 1KB to 128KB.
$F _ { 1 }$ -Score (Figure 8). The experiments shows that the $F _ { 1 }$ of GapFilter-AO and GapFilter-SO are constantly higher than that of the Straw-man solution. We conduct experiments on different datasets and find that on CAIDA, MAWI, MACCDC, and IMC, the $F _ { 1 }$ -Scores of GapFilter-AO are respectively 1.61, 1.50, 1.12, and 1.04 times higher on average than those of the Straw-man solution, while the $F _ { 1 }$ -Scores of GapFilter-SO are on respectively 1.40, 1.38, 1.04, and 1.02 times higher on average than those of the Straw-man solution.
RR (Figure 9). According to the experiment results, the RR of GapFilter-AO and GapFilter-SO are significantly higher than that of the Straw-man solution. We conduct experiments on different datasets and find that on CAIDA, MAWI, MACCDC, and IMC, the RR of GapFilter-AO are respectively 3.26, 3.85, 1.63, and 1.22 times higher on average than those of Straw-man solution, while the RR of GapFilter-SO are respectively 2.76, 3.20, 1.38, and 1.19 times higher on average than those of the Straw-man solution.
PR (Figure 10). We can see from the experiment results that GapFilter-AO and Straw-man possess a similar precision rate with any memory limitation. The precision rate of GapFilter-SO increases rapidly with memory increasing. The precision rate of the Strawman solution is always 1 since it records the flow ID.
Analysis. It is shown in the experiment results above that GapFilterAO and GapFilter-SO can achieve a much better accuracy than the Straw-man solution. Notably, the Straw-man solution requires about 32 times more memory than GapFilter-AO to achieve the same accuracy. Several key factors contribute to the excellence of our algorithms. First, we utilize the sequence number as the index number to achieve matching, saving the memory of the 13-byte flow ID. This approach poses the challenge of avoiding 𝑠𝑒𝑞 collisions in the huge number of flows within the data stream. We effectively address this issue by grouping and using the fingerprint as assistance. Second, our 𝑐𝑖𝑣𝑖𝑙𝑖𝑎𝑛-𝑠𝑢𝑠𝑝𝑒𝑐𝑡 mechanism efficiently combines a rough monitoring on the overall flows and a meticulous monitoring on the suspicious flows organically, achieving an extraordinary performance. Third, we employ the LRU and LRD replacing policies to keep the most critical information. Furthermore, LRU and LRD are implemented without extra memory or time overhead.
# 6.4 Experiments on Processing Speed
We compare the throughput of GapFilter-SO with $w = 8 / 1 6$ , GapFilter-AO with $w = 8 / 1 6$ , and Straw-man.
Throughput (Figure 11). As the results manifest, in every dataset, the fastest algorithm is GapFilter-SO. Both GapFilter-SO and GapFilter-AO are significantly faster than the Straw-man solution, with or without SIMD. SIMD brings more improvement when the number of cells in a bucket is larger. On dataset CAIDA, MAWI, MACCDC and IMC, the throughput of GapFilter-SO is respectively 2.57, 3.03, 5.19 and 2.18 times higher on average than those of the Straw-man solution, while the throughput of GapFilter-AO is respectively 2.10, 2.51, 5.60 and 1.69 times higher than the Straw-man solution. the fastest algorithm is GapFilter-SO, being 1.6 times faster than the Straw-man solution. The speed of GapFilter-AO and the Straw-man solution are similar.
Analysis. GapFilter-SO is significantly faster than Straw-man due to several key advantages: (1) Our operations are neat and has good spatial locality. We only need to calculate at most three hash numbers for every incoming item. Once a bucket is chosen, all the following operations are conducted in the bucket. We also keep the size of a group small, so the memory taken by a bucket is small. In contrast, there are at least three hashing calculations for every incoming item in the Straw-man solution. Besides, the eviction operation in the Straw-man solution has a bad spatial locality. (2) Straw-man needs to access the fingerprint for matching and access
1.0 Straw-man 1.0 1.00 1.000 Straw-man
0.8 Ours-SAO 0.68 0.90 0.975 Ours-SAO 0.92550
0.4 0.4 SOturrasw-SA-OmOan 0.7850 SOturrasw-SA-OmOan 0.900 0.875 100 101 102 100 101 102 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) (a) CAIDA (b) MAWI (c) MACCDC (d) IMC
1.0 1.0 1.0 1.0 Ours-AO 0.46 Straw-man 0.46 0.46 0.46 0.2 0.2 Ours-SAO 0.2 Ours-SAO 0.2 Ours-SAO Straw-man Straw-man Straw-man 0.0 0.0 0.0 0.0 100 101 102 100 101 102 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) (a) CAIDA (b) MAWI (c) MACCDC (d) IMC
1.0 1.0 1.0 1.0
0.9 0.78 0.789 0.78
0.6 Ours-SAO 0.6 Ours-SAO Ours-SAO 0.6 Ours-SAO Straw-man Straw-man Straw-man Straw-man
0.5 100 101 102 0.5 100 101 102 0.5 100 101 102 0.5 100 101 102 Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) Memory Usage (KB) (a) CAIDA (b) MAWI (c) MACCDC (d) IMC
1230 1 w/ SIMD 120505 11 w/ SIMD 0102030Throughput (MOPS) w/ SIMD 0102030Throughput (MOPS) 1 w/ SIMD w/o SIMD w/o SIMD w/o SIMD w/o SIMD straw SO8 SO16 AO8 AO16 straw SO8 SO16 AO8 AO16 straw SO8 SO16 AO8 AO16 straw SO8 SO16 AO8 AO16 Method Method Method Method (a) CAIDA (b) MAWI (c) MACCDC (d) IMC
the sequence number for detecting flow gaps. While GapFilterSO only needs to access the sequence number to accomplish both matching and detecting flow gaps.
# 6.5 Experiments on optimizations
In this section, we show the improvement of accuracy provided by sequence number randomizing and fingerprint.
1.0 0.6 0.8 E MACCDC w/ fp E CAIDA w/ rand MACCDC w/o fp 0.4 CAIDA w/o rand 0.7 IMC w/ofpfp MAWI w/or aranndd 100 101 102 100 101 102 Memory Usage (KB) Memory Usage (KB) (a) fingerprint (b) sequence number randomizing
Effect of fingerprint (Figure 12(a)). The experiment results show that fingerprint significantly enhances the accuracy on dataset MACCDC and IMC. Specifically, GapFilter-AO with fingerprint performs 1.18 times better than that without fingerprint.
Effect of sequence number randomizing (Figure 12(b)). The experiment results show that sequence number randomizing improves the accuracy on dataset CAIDA and MAWI. Specifically, GapFilter-AO with sequence number randomizing performs 1.04 times better than that without sequence number randomizing.
# 6.6 Experiments on Pattern of Flow Gaps
In this section, we alter the ratio $r$ of abnormal flows in a time window and the parameter $b$ determining the probability of major gap taking place in an abnormal flow in the synthetic item loss, as described in Section 6.1.1, in order to test the robustness of GapFilter.
Figure 12: Effect of optimizations on different datasets.
Figure 13: The effect of $r$ and $b$ on IMC dataset.
Effect of $r$ on performance (Figure 13(a)). As shown in the figures, GapFilter-AO and GapFilter-SO perform much better than Straw-man. The $F _ { 1 }$ -Scores of GapFilter-AO and GapFilter-SO are on average 1.07 and 1.03 times higher than that of Straw-man.
Effect of $b$ on performance (Figure 13(b)). The experiment results show that GapFilter-AO and GapFilter-SO outperform the Straw-man solution. The $F _ { 1 }$ -Scores of GapFilter-AO and GapFilterSO are on average 1.08 and 1.05 times higher than that of Strawman.
Analysis. The results above demonstrate that GapFilter can deal with various patterns of flow gap. It has high accuracy and robustness, even in extreme network circumstances. | Data stream monitoring is a crucial task which has a wide range of
applications. The majority of existing research in this area can be broadly
classified into two types, monitoring value sum and monitoring value
cardinality. In this paper, we define a third type, monitoring value variation,
which can help us detect flow gaps in data streams. To realize this function,
we propose GapFilter, leveraging the idea of Sketch for achieving speed and
accuracy. To the best of our knowledge, this is the first work to detect flow
gaps in data streams. Two key ideas of our work are the similarity absorption
technique and the civilian-suspect mechanism. The similarity absorption
technique helps in reducing memory usage and enhancing speed, while the
civilian-suspect mechanism further boosts accuracy by organically integrating
broad monitoring of overall flows with meticulous monitoring of suspicious
flows.We have developed two versions of GapFilter. Speed-Oriented GapFilter
(GapFilter-SO) emphasizes speed while maintaining satisfactory accuracy.
Accuracy-Oriented GapFilter (GapFilter-AO) prioritizes accuracy while ensuring
considerable speed. We provide a theoretical proof demonstrating that GapFilter
secures high accuracy with minimal memory usage. Further, extensive experiments
were conducted to assess the accuracy and speed of our algorithms. The results
reveal that GapFilter-AO requires, on average, 1/32 of the memory to match the
accuracy of the Straw-man solution. GapFilter-SO operates at a speed 3 times
faster than the Straw-man solution. All associated source code has been
open-sourced and is available on GitHub. | [
"cs.DB"
] |
# I. INTRODUCTION
Mining app reviews from mobile app repositories has gained significant attention in requirements engineering (RE) research over the past decade [1]. Relevant descriptors from mobile app reviews include numerical rating [2], review type [3] (e.g., bug report, feature request, praise), topic [4] (e.g., usability, design, security), and polarity [5] (e.g., positive, neutral, negative). The combined use of these descriptors has led to advanced methods for requirements elicitation [6] and validation [7] tasks, such as aspect-based sentiment analysis [8] and feature-based opinion mining [9].
Among these, polarity has emerged as one of the most widely used descriptors in app review analysis [9], [3], [10], [11], [12], and it continues to attract significant attention in recent studies [13], [14], [15]. Polarity is defined as the overall sentiment expressed in user feedback, leading to the classification of textual content into predefined sentiment categories, typically positive, neutral, or negative. Despite its popularity, automatic polarity measurement still presents significant cognitive challenges, such as subtle sentiments, sarcasm, and domain-specific language. These lead to limited precision and low recall in negative feedback [2], [8].
In addition to these challenges, polarity-based opinion mining lacks the granularity to capture nuanced emotions in user feedback. Polarity labels fail to convey the depth of emotions tied to feature-based opinions, limiting their usefulness for fine-grained analysis. For instance, consider the following positive reviews:
$\mathbf { [ R _ { 1 } ] }$ Useful app which follows Material Design $[ { \bf R } _ { 2 } ]$ Awesome team work but this application does need updating now $[ { \bf R } _ { 3 } ]$ I run it on my Dropbox cloud storage with Android, Mac and Linux, and I have had no issues In addition to its inherent positivity, $\mathbf { [ R _ { 1 } ] }$ highlights the user’s excitement about a specific characteristic. In contrast, $[ { \bf R } _ { 2 } ]$ conveys a user request or suggestion for a change or update of the app. Finally, $[ { \bf R } _ { 3 } ]$ reflects the user’s acceptance and personal experience with a particular feature.
Likewise, consider the following negative reviews:
$\mathbf { [ R _ { 4 } ] }$ My only complaint is that I sometimes have
sync issues with shared notebooks.
$\mathbf { [ R _ { 5 } ] } \ I$ really didn’t want to make this app my
default SMS messaging app on my phone.
$\mathbf { [ R _ { 6 } ] }$ Very invasive, [...] was forcing to update from
a third party store and it want to access everything
on your phone including you sim card data
In addition to its inherent negativity, $\mathbf { [ R _ { 4 } ] }$ conveys minor user disappointment due to issues or bugs with a specific feature. In contrast, $[ { \bf R } _ { 5 } ]$ reflects the user’s outright rejection and decision to stop using the app, which the user considers not suited for purpose. Finally, $\mathbf { [ R _ { 6 } ] }$ highlights the user’s lack of trust stemming from critical safety and privacy concerns.
This limitation in expressiveness motivates our research. We propose to use emotion labels to better capture the nuanced emotional states conveyed in user feedback, allowing more informed and targeted decision-making. Fine-grained emotion analysis has been explored in opinion mining tasks in various review types, such as products [16], movies [17] and books [18]. However, emotion analysis research in app reviews remains scarce, particularly in the context of mobile app reviews. This entails several limitations in the state of the art: lack of guidelines for human annotators, ignorance on the challenges they face during the annotation process, and lack of public datasets for multiclass emotion extraction, among others. Moreover, the opportunities large language models (LLMs) offer to perform this task remain also unknown.
In this research, we address all these aspects and as a result, we present the following contributions:
$\mathbf { C _ { 1 . } }$ The adaptation of an 8-emotion taxonomy, already used in software engineering, to the context of app reviews.
$\mathbf { C } _ { 2 } .$ A set of guidelines and instructions to support the manual annotation of emotions in the context of app reviews.
$\mathbf { C } _ { 3 . }$ A dataset of 1,112 sentences from app reviews annotated with human emotions, belonging to 257 mobile apps.
$\mathbf { C } _ { 4 }$ A set of challenges and design suggestions for the development of automated emotion extraction methods.
$\mathbf { C _ { 5 } } .$ A cost-efficiency and agreement analysis of LLM-based annotations with respect to human agreement.
All datasets and source code are openly shared (See Data Availability Statement at the end of the paper).
Our work lays the groundwork for fine-grained emotion extraction from mobile app reviews, providing a structured dataset, annotation guidelines, and design recommendations for automated methods. By leveraging human annotations alongside LLMs, we assess the feasibility of reducing manual effort in emotion classification while maintaining annotation quality. We expect our findings to guide future research on automating emotion extraction in software reviews more broadly, facilitating its integration into RE processes for improved user feedback analysis.
# II. BACKGROUND
# A. Emotion Analysis
Recent advances in natural language processing have sparked growing research interest in opinion mining to support software engineering processes [19]. Moreover, emotional aspects have been extensively explored in the context of RE tasks [20], including elicitation [21], [22], [23], specification [24] and validation [25]. Specifically, analysing user feedback from software-related reviews has also emerged as a key approach to integrating emotional aspects into RE [26], [17], [18]. These studies vary in scope and purpose, ranging from analysing app usage experience [27] to classifying emotional states from user feedback on social media [28].
Studies on emotion analysis rely on well-established emotion taxonomies – structured classifications of emotions based on psychological theories. These taxonomies vary in scope, granularity, and theoretical foundations across disciplines, making their selection crucial to capturing the necessary level of detail for a given task. While some frameworks define a small set of basic emotions [29], [30], [31], others introduce finer distinctions with broader emotional states [32], reflecting diverse perspectives on how emotions are structured and expressed. In the context of app reviews, opinion mining has mainly focused on polarity-based analysis [9], [33], while finegrained classifications are more common in other domains such as clinical and psychological studies [34]. Bridging this gap requires adopting emotion taxonomies that balance granularity and applicability to the analysis of app reviews, as explored by prior work (see Section VI).
# B. Annotation Strategies in User Feedback
The reliability of annotations in user feedback depends on the annotation strategy, which can be categorized into expertbased, crowdsourced, and automated methods.
Expert-based annotation, performed by domain specialists or trained annotators, provides high-quality labels through structured guidelines and domain knowledge. Studies on app review analysis, using Cohen’s and Fleiss Kappa metrics [35], report moderate $( 0 . 4 1 - 0 . 6 0 )$ to substantial $( 0 . 6 1 - 0 . 8 0 )$ inter-annotator agreement, typically with Kappa values ranging from 0.60 to 0.70 for tasks such as feature extraction [2] and sentiment analysis [2], [36]. However, expert annotation is costly and time-intensive, limiting scalability. Additionally, cognitive challenges in app review interpretation can limit the usefulness of human-annotated datasets to support automatic extraction, especially for descriptors such as review helpfulness, leading to slight $( 0 . 0 0 - 0 . 2 0 )$ agreement [37].
Crowdsourced annotation utilizes non-expert contributors to label data at scale. Prior work has used crowdsourcing to validate unsupervised features and other descriptors in mobile app reviews [38], [8]. Beyond app reviews, crowdsourcing is widely used in the context of RE [39], [40], [41]. While efficient, it often leads to moderate agreement due to annotator subjectivity and domain unfamiliarity [42], [43], [44].
Automated methods leverage machine learning models or LLMs to generate annotations without human intervention [45]. While scalable, their reliability depends on model performance and training data quality. Some studies show comparable agreement levels between model-generated annotations and expert labels, particularly when fine-tuning encoder-based models on domain-specific data [46]. However, automated approaches lack human intuition, struggle with ambiguity, and may propagate biases from training data [47]. Consequently, expert annotation remains the gold standard.
In this study, we focus on expert-based labelling, guided by structured annotation guidelines, to ensure high-quality annotations. Due to the cognitive complexity of adapting emotions to app reviews, we exclude crowdsourced annotation. Finally, as LLMs demonstrate increasing effectiveness in text classification and annotation tasks [45], we explore their potential as automated annotators in this context.
# III. METHOD
# A. Design
The goal of this research is to identify and address the challenges and limitations of fine-grained emotion analysis in mobile app reviews. Our study focuses on the adaptation of an emotion taxonomy, the analysis of the complexities of human annotation, and the reliability of LLMs for automated annotation. To achieve this goal, we issue the following research questions (RQ):
RQ1 - Literature review RQ2 - Iterative human annotation RQ3 - Human agreement
sericg studyon 88 88888 888.88 r Agreement Disagreement discussion xeaure Iclec OTaxonomy 6 Guidelines Guielins antaset aman vs.LLysis 漫 Anaotatins Forward inspection
I Scopus snowballing RQ4 - LLM-based annotation ? LLM-based Agreement 14cost-efficiency Emotion Anotein Annotated 1 annotation analysis analysis Taxonomy reviews OpenAl MISTRAL Gemini
$\mathbf { R } \mathbf { Q _ { 1 } } .$ . Which taxonomy of emotions is most suitable for annotating mobile app reviews?
$\mathbf { R } \mathbf { Q } _ { 2 }$ . How can the selected taxonomy be effectively adapted to the specific context of app reviews?
${ \bf R } { \bf Q } _ { 3 }$ . What challenges arise when humans manually annotate app reviews with emotion labels?
$\mathbf { R } \mathbf { Q } _ { 4 } .$ How does LLM-based annotation compare to human annotation in emotion classification for app reviews?
Figure 1 illustrates our research design, including the steps involved in the resolution of each RQ. The figure shows how our method has produced two actionable assets, ready to be used by the RE community: (1) annotation guidelines to drive the manual process of annotating a set of app reviews using an emotion taxonomy; (b) an annotated dataset containing 1,112 app reviews annotated with the emotion taxonomy following the annotation guidelines.
To address $\mathbf { R Q _ { 1 } }$ , we conducted a literature review on emotion and sentiment analysis in user reviews. We extended the scope beyond mobile app reviews to software-related reviews for broader generalizability. This led to the selection of a suitable emotion taxonomy as the foundation for annotation.
To address $\mathbf { R } \mathbf { Q } _ { 2 }$ , we implemented an iterative human annotation process based upon annotation guidelines using a subset of a publicly available app reviews dataset [38]. Feedback from each iteration served to refine the annotation guidelines, including definitions, instructions, and examples for identifying emotions in app reviews. These guidelines were used to generate the final annotated dataset.
To address $\mathbf { R } \mathbf { Q } _ { 3 }$ , we analysed human agreement during the annotation process. For each iteration, we computed the pairwise Kappa agreement among annotators and examined confusion matrices to detect label-specific interpretation conflicts. We analysed also the discussions held to resolve these conflicts, which served for clarifying biases or refining guidelines. This process identified key annotation challenges, guiding the design of automatic emotion extraction tasks.
To address $\mathbf { R Q } _ { 4 }$ , we designed an LLM-based annotation process, selecting and comparing state-of-the-art LLMs with advanced analytical capabilities. Our analysis focused on two dimensions: (1) cost-efficiency and (2) agreement, assessing inter-rater reliability between human and LLM annotations, as well as LLM prediction correctness against human ground truth. This resulted in a human vs. LLM annotation analysis, laying the groundwork for LLM-based emotion extraction from app reviews and AI-based annotation in related tasks.
Details of the methodology used in the four RQs follow.
# B. Literature Review
We constructed our search string by integrating two key areas: emotion analysis and software reviews. For emotion analysis, following Lin et al.’s approach [19], we adopted the generic term emotion to maximize potential matches. For software reviews, we included multiple synonyms such as app, application, and API, along with alternative phrases like user reviews. This resulted in the following search string:
("emotion\*") AND ("app\* review\*" OR "software\* review\*" OR "user\* review\*" OR "application\* review $\star$ " OR "api $\star$ review\*")
We selected Scopus for its broad coverage of high-quality, peer-reviewed research across disciplines [48], structuring the literature review into the following steps:
Step $\textcircled{1}$ – Study collection. We executed the defined search string in Scopus, exporting full references, metadata, and abstracts into a spreadsheet for further analysis. Step $\textcircled { \bullet }$ – Inclusion and exclusion criteria (IC/EC). Papers were included if they (i) proposed, used, or analysed a multi-class emotion taxonomy, and (ii) focused on software-related user reviews. Papers were excluded if they: $( i )$ lacked full-text access, (ii) were not in English or Spanish, (iii) were unrelated to opinion mining, or $( { i \nu } )$ focused solely on sentiment polarity. • Step $\bullet$ – Forward snowballing. Backward snowballing was excluded as it prioritizes older studies, which may not align with the latest advancements in emotion analysis. Instead, we applied forward snowballing through Google Scholar to identify recent developments until saturation was reached, with no new relevant studies emerging. • Step $\bullet$ – Feature extraction. For each selected study, we extracted key features, including taxonomy details (e.g., name, size, list of emotions), datasets (e.g., size, type, availability), emotion extraction methods (e.g., manual, machine learning, deep learning), and evaluation metrics (e.g., accuracy, precision, recall).
• Step $\bullet$ – Taxonomy inspection. We analysed the identified emotion taxonomies both quantitatively and qualitatively. This assessment helped identify their potential benefits and limitations, ultimately guiding the selection of the most suitable taxonomy for this study.
# C. Iterative Human Annotation
Human annotation takes two key artifacts as input: the emotion taxonomy (derived from $\mathbf { R Q } _ { 1 }$ ) and the dataset of reviews for annotation. We leveraged a dataset of user reviews from a multi-domain catalogue of popular mobile apps [38], spanning 10 Google Play categories. Additionally, it provides annotations for 198 distinct app features (e.g., instant messaging, video sharing, note-taking, GPS navigation), enabling future research in feature-based emotion analysis. We randomly selected up to 10 sentences from reviews mentioning each feature1, resulting in a total of 1,412 reviews. These were shuffled and divided into 15 subsets, each used in a sequential annotation iteration, labelled from iteration0 to iteration14. Annotations were performed by five human annotators $( A n n _ { 1 }$ to $A n n _ { 5 }$ ), all co-authors of this paper, as follows:
Step $\bullet$ – Guidelines elaboration. Based on the literature review $( \mathsf { R Q } _ { 1 } )$ , $A n n _ { 1 }$ drafted the initial version of the annotation guidelines. These guidelines include: $( i )$ formal definitions of each emotion adapted to the context of mobile app reviews, (ii) real examples of app review sentences for each emotion, and (iii) detailed instructions on the annotation process. Annotation was defined at the sentence level, with emotions assigned atomically to individual sentences. Annotators could refer to full reviews for context and ambiguity resolution. The guidelines were refined during the first two iterations. In iteration0, $\mathbf { \nabla } _ { A n n _ { 1 } }$ and $A n n _ { 2 }$ independently labelled a shared subset of sentences without prior discussion, allowing ambiguities and gaps in the guidelines to surface. • Step $\textcircled{4}$ – Guidelines refinement. Following the initial guidelines draft, all five annotators participated in a joint iteration (iteration2). Involving multiple annotators ensured diverse perspectives, helping to uncover defects, inconsistencies, ambiguities, and potential threats to validity in the annotation process. After annotation, we conducted a dedicated meeting to systematically review disagreements, linking each to a concrete action point for improving the annotation guidelines (e.g., adding examples, refining ambiguous vocabulary, clarifying criteria for distinguishing emotions).
Step $\bullet$ – Dataset annotation. Once the guidelines were stabilized, we iterated over the remaining 12 subsets of reviews $( i t e r a t i o n _ { 3 } \ \ i t e r a t i o n _ { 1 4 } )$ to create the final dataset. Three annotators $( A n n _ { 1 } , A n n _ { 2 } , A n n _ { 3 } )$ were designated as main annotators and two $( A n n _ { 4 } , A n n _ { 5 } )$ as secondary annotators. Each iteration included two main annotators and one secondary annotator, rotating across all possible pairings to systematically identify and address inconsistencies. The 12 iterations were grouped into four batches (2, 4, 4, and 2 iterations, respectively). Each annotator recorded the time spent reading the guidelines and completing each iteration. Each emotion was assessed independently, meaning that a given sentence can be annotated with multiple emotions. This possibility was used with caution, i.e. only when the sentence contained sub-sentences expressing different emotions. The final annotation retained those emotions agreed upon by at least two of the three annotators involved in the iteration.
# D. Human Agreement
After each iteration, we measured annotator agreement, analysed disagreements, and refined the guidelines to improve annotation consistency. This process was structured as follows:
• Step $\bullet$ – Pairwise Kappa analysis. We used Cohen’s Kappa to assess pairwise agreement between annotators in each iteration. Unlike Fleiss’ Kappa, pairwise Cohen’s Kappa allows us to monitor individual interpretation conflicts and cognitive biases, helping to identify challenges in emotion classification from user reviews. To further analyse disagreement patterns, we also generated confusion matrices for each pair of annotators. • Step $\textcircled{1}$ – Agreement analysis. As a quality control measure, we set a minimum agreement threshold of Cohen’s Kappa $\geq 0 . 6 0$ (i.e., substantial agreement) for each pair of annotators per iteration. Additionally, we qualitatively analysed the confusion matrices from the previous step to identify recurring disagreement patterns (e.g., frequently confused emotions, systematic biases, or inconsistencies in annotator tendencies). • Step $\textcircled{1}$ – Disagreement discussion. Using Cohen’s Kappa and confusion matrices as input, we held dedicated meetings at the end of each batch to analyse major disagreement patterns. Each pattern was linked to a specific action point, akin to those proposed in Step $\textcircled{4}$ . At this stage, we aimed to minimize guideline modifications to maintain consistency with previous annotations. Changes were kept limited and comparable in scope to ensure that the guidelines remained generalizable across different datasets and app domains while avoiding overstatements.
# E. LLM-based Annotation
After human annotation, we evaluated the performance of LLM-based agents instructed with the generated annotation guidelines. This process was designed as follows:
• Step $\textcircled { 1 6 }$ – LLM-based annotation. We evaluated the performance of three advanced LLMs with API access: GPT- $4 0 ^ { 2 }$ , Mistral Large $2 ^ { 3 }$ , and Gemini 2.0 Flash4. For each, we created an LLM assistant with a system prompt embedding the annotation guidelines with additional input-output formatting instructions. We tested the models under three temperature settings: high (1), mid (0.5), and low (0). This experimental setup enables a more comprehensive evaluation of LLM performance while aligning LLM challenges with those encountered in human annotation $( \mathsf { R Q } _ { 3 } )$ . To balance efficiency and avoid performance degradation from excessively long prompts, the dataset was processed in batches of 10 reviews. Each assistant was run three times on the full dataset.
• Step $\textcircled{6}$ – Agreement analysis. We assessed the average pairwise agreement between humans and LLM assistants using Cohen’s Kappa. Additionally, treating the human agreement as ground truth, we evaluated LLM performance in terms of precision, recall, and F-measure.
Step $\textcircled{4}$ – Cost-efficiency analysis. Using average results from three annotation runs for each LLM, we measured total token usage, including input and completion output tokens, along with the associated API cost $( \overleftarrow { \mathbf { \epsilon } } )$ . We also recorded execution times for the automated annotation process. These results were then compared to a human cost-efficiency analysis, factoring in personnel costs $( \overleftarrow { \mathbf { \epsilon } } )$ and the time required for annotation iterations.
# IV. RESULTS
# A. Emotion Taxonomy $( R Q _ { I } )$
Figure 2 reports the results of the literature review5, which identified 11 studies employing multi-class emotion taxonomies in the context of software reviews.
Fig. 2. Results from the literature review.
No single taxonomy has been universally adopted for emotion analysis in software reviews. Three studies [49], [50], [51] utilize or adapt Plutchik’s wheel of emotions [30], which defines eight basic emotions – Joy, Trust, Fear, Surprise, Sadness, Disgust, Anger, and Anticipation – while also modeling relationships between emotions and their arousal levels (i.e., intensity of emotional experience). Two studies [52], [36] employ variations of Parrott’s taxonomy [31], which identifies six basic emotions. Five of these emotions (Joy, Fear, Surprise, Sadness, and Anger) align with Plutchik’s, while Love is introduced as a distinct category. One study [53] applies Ekman’s taxonomy [29], which includes Happy, Sadness, Surprise, Fear, and Anger, overlapping with both Plutchik’s and Parrott’s models. Additionally, it incorporates Disgust, consistent with Plutchik’s classification. Another study [54] utilizes Liew and Turtle’s taxonomy, which defines 28 fine-grained emotions, including Admiration, Doubt, Pride, and Jealousy, providing a more granular categorization of emotional expressions in software reviews. Finally, four studies [55], [16], [56], [57] employed custom emotion taxonomies. These approaches included reusing emotions from existing taxonomies [56], adapting taxonomies to align with the syntax required by a third-party emotion extraction tool [16], [57], or defining a domain-specific set of emotions [55].
Fig. 3. Distribution of emotions in the literature review (including only those that appear in more than one study).
We collected all identified emotions and applied minimal normalization, which involved standardizing syntax by converting words to their root forms. Using normalized data, we generated Figure 3 to illustrate the frequency of emotions studied in software reviews. Based on this analysis, we chose Plutchik’s Wheel of Emotions not only because it is the most used in the papers found, but also for the following reasons. First, its eight basic emotions are among the nine most studied, the only exception being Love. Second, Plutchik’s model defines opposite emotion pairs and incorporates contiguous emotions, making it easier to compare emotions while allowing the identification and discussion of reviews that may fall between two emotions. Finally, emotions such as Anticipation – unique to Plutchik’s model – can provide insights into user expectations, particularly in exploring new features, a common focus in mobile app feedback analysis [3], [9].
# B. Annotation Guidelines and Annotated Dataset $( R Q _ { 2 } )$
The outcome of $\mathbf { R Q } _ { 2 }$ consists of two artifacts: (1) the annotation guidelines, and (2) the annotated dataset.
1) Annotation guidelines: A 10-page document including formal definitions of each of the Plutchnik’s eight primary emotions adapted to the mobile app review domain, alongside 48 annotated review sentence examples. Table I summarizes (partially) the content of these guidelines, which are fully detailed in our replication package. In addition to Plutchnik’s emotions, during this process we elicited two additional labels for the annotation task: Neutral , restricted to sentences reflecting purely objective content without expressing any particular emotion linked to the current state of the app; and Reject , restricted to sentences that cannot be interpreted from a linguistic standpoint.
2) Annotated dataset: A dataset of 1,272 annotated labels, including emotions, neutral, and rejected cases, assigned to 1,112 sentences from distinct reviews. Table I includes the distribution for each emotion in the final dataset, in addition to 88 neutral sentences and 22 rejected sentences.
TABLE ISUMMARY OF EMOTION ANNOTATION GUIDELINES. INCLUDES NUMBER OF ANNOTATED SENTENCES PER LABEL (#).
# C. Annotation Challenges $( R Q _ { 3 } )$ 1
Figure 4 summarizes the evolution of the average Cohen’s Kappa agreement for each annotator across iterations following the process described in Sections III-C and III-D. After the initial version of the guidelines produced by $A n n _ { 1 }$ and $A n n _ { 2 }$ in iteration1, the team inter-rater agreement grew progressively from moderate at the start of the process (average Cohen’s Kappa of 0.52 in iteration2) to substantial at the end (average Cohen’s Kappa of 0.69 for all iterations involving the dataset annotation). Notably, all pairwise agreements from iteration3 onward remained above the substantial agreement threshold (dotted red line). Fluctuations in early iterations suggest ongoing adjustments into the guidelines, while later stability reflects the effectiveness of iterative refinement. The final iteration showed higher agreement but involved only 12 reviews, making it less representative. Complementarily, we also analysed average Cohen’s Kappa scores for individual emotion labels. Results show that Joy, Anticipation, and Reject labels achieved the highest agreement, while Surprise, Anger, and Disgust exhibited the lowest consistency. Expanded label-specific agreement is available in the replication package.
During this process, we experienced a number of challenges in adapting Plutchik’s taxonomy to mobile app reviews and developed mitigation strategies, either incorporated into the guidelines or applied during annotation. We summarize the key outcomes below:
Fig. 4. Evolution of the average Cohen’s Kappa agreement across iterations.
Challenge 1. Defining boundaries between contiguous emotions. Contiguous emotions in Plutchik’s taxonomy share overlapping attributes, leading to consistent disagreements among annotators. For instance, in the sentence ${ } ^ { \ast } I$ just love this notebook”, $A n n _ { 2 }$ and $A n n _ { 3 }$ labelled it as Joy, reflecting app appraisal, while $A n n _ { 5 }$ assigned Trust, interpreting it as a personal involvement. Similarly, in “[...] the app is saying I need to keep signing in and my notebooks aren’t retrievable”, all annotators marked it as Sadness, emphasizing disappointment, while $A n n _ { 4 }$ labelled it as Disgust, focusing on rejection. To address this, we refined the guidelines with explicit disambiguation criteria for conflicting emotion pairs reporting the lowest label-specific agreement.
Challenge 2. Addressing mixed or conflicting emotions in a single sentence. A sentence may express multiple, potentially conflicting emotions, such as Joy and Sadness (e.g., “I really appreciate this app but lately I’ve been having issues with the cloud sync.”) or Sadness and Anger (e.g., “It wasn’t what I needed, and I absolutely HATE that they don’t even tell you the timers are for premium memberships only.”). This complexity challenges the assumption that emotions are mutually exclusive. To address this, we allowed multiple emotions to be assigned to a single sentence, ensuring that mixed - or even conflicting - emotions are properly captured without enforcing artificial exclusivity.
Challenge 3. Establishing thresholds for subtle emotional intensity (arousal). Implicit emotional expressions may be overlooked or exaggerated due to subjective interpretation. For instance, Sadness or disappointment can be difficult to assess (e.g., “No calendar sync”), and Surprise may depend on whether an event was truly unexpected (e.g., “The app decided to add a folder to my gallery $I { \ldots } I ^ { \prime \prime } )$ . To address this, we refined our guidelines with additional examples to clarify when an emotion’s intensity meets the labeling threshold, reducing both underreporting and overinterpretation.
Challenge 4. Handling lack of context and linguistic ambiguities in sentence-level emotion extraction. Eliciting emotions at the sentence level is challenging due to missing contextual information. This could potentially lead to inconsistencies, as identical sentences yield different emotions depending on their context, particularly when linguistic ambiguities (e.g., pronouns, acronyms, or elliptical subjects) were present. To address this, annotators were instructed to label individual sentences, consulting the full review only to resolve linguistic ambiguities.
# D. Human vs. LLM Annotation Analysis $( R Q _ { 4 } )$ )
As described in Section III, annotation analysis between human and LLM-based performance focuses on two main dimensions: agreement and cost-efficiency analysis.
Human annotators LLM-based annotators 8 Agreement(GPTA) 黑 品 品 GPT-40 OpenAl Ann1 O ↓ O Ann2 ↑→ Mistral Larne2 国 巴 巴 LMistral U MISTRAL Ann3 (MistralA) Mistral1Mistral2 Mistral3 T 1 8 Ann4 Gemini 2.0Flash 国 巴 中 2.Geminh Gemini Ann5 (GeminiA) Gemini Gemini2Gemini3
1) Agreement Analysis: Figure 5 illustrates the evaluation setup, comparing human agreement $( A n n _ { A } )$ with LLM-based agreement $( L L M _ { A } )$ . For each LLM included in this study, we conducted three annotation runs, deriving an agreement annotation using the same criteria applied to human agreement (see Section III-C). We then computed the agreement among individual LLMs (GPT-4o, Mistral Large 2, Gemini 2.0 Flash) to obtain the overall LLM agreement $( L L M _ { A } )$ , which we compared against human agreement $( A n n _ { A } )$ . Since the lowest temperature setting (0) yielded the best results, we limit the reported results to this setting.
Figure 6 reports the inter-rater pairwise Cohen’s Kappa agreement between all annotators, including five humans6 $( A n n _ { i } )$ and LLMs $G P T _ { A }$ , MistralA, GeminiA). It also includes agreement between each annotator and the overall human $( A n n _ { A } )$ and LLM-based $( L L M _ { A } )$ agreements. Agreement between different runs from the same LLM is excluded from this analysis, as they exhibited near-perfect consistency (pairwise Cohen’s Kappa agreement was always $\ge 0 . 8 7 \$ ). On average, humans show similar agreement among themselves (0.69) as LLMs (0.68). However, LLM annotations $( G P T _ { A }$ ,
Fig. 5. Evaluation of human agreement vs. LLM-based agreement
Fig. 6. Inter-rater pairwise agreement (human and LLM-based)
Mistra $l _ { A }$ , GeminiA) exhibit moderate agreement with human annotations $( 0 . 5 6 ~ - ~ 0 . 6 2 )$ , indicating some deviation from human judgment. Among individual models, Gemini 2.0 Flash $( G e m i n i _ { A } )$ aligns more closely with human agreement $( A n n _ { A } )$ , although LLM agreement $( L L M _ { A } )$ is slightly higher. This suggests that combining multiple LLM instances helps mitigate biases, similar to how human annotators vary in judgment. GPT-4o $( G P T _ { A } )$ shows the highest consistency with overall LLM agreement $( L L M _ { A } )$ , while Mistral and Gemini introduce more variability across different runs.
Using human agreement $( A n n _ { A } )$ as ground-truth, Table II reports the average precision, recall and F1 metrics of: individual LLM runs $G P T _ { i }$ , M istrali, Geminii); agreement between LLM runs $G P T _ { A }$ , MistralA, GeminiA); and agreement between all LLMs $( L L M _ { A } )$ . These results reinforce the idea that agreement between multiple LLM instances $( L L M _ { A } )$ is the best setting, reporting the highest F1 measure. However, the highest recall is reported by $G e m i n i _ { A }$ . This also opens the need to analyse alternative quality metrics, properly weighting precision and recall based on task criticality and the cost of missing relevant emotion labels [58].
TABLE II LLM CORRECTNESS VS. HUMAN $( A n n _ { A } )$ )
Table III presents correctness metrics for each annotation label defined in Table I, revealing substantial differences in annotation performance across labels. The best-performing classes —- Joy, Sadness, Anticipation, and Reject — - exhibit high recall and balanced precision, suggesting that these are easier to distinguish based on the guidelines used by the LLM annotator. Notably, these labels also show the highest human agreement $( \mathsf { R Q } _ { 3 } )$ , reinforcing their clearer annotation boundaries. Conversely, Surprise, Disgust, Anger, and Fear show the lowest performance, both in terms of correctness and labelspecific human agreement, with frequent misclassifications. Additionally, Anger has relatively high recall but very low precision, indicating it is often over-predicted, whereas Surprise suffers from poor recall, suggesting the model struggles to identify it. These discrepancies highlight the challenges LLMs face in distinguishing subtle or less frequently expressed emotions (Challenge 3 in ${ \bf R } { \bf Q } _ { 3 }$ ). Overall, these findings emphasize the need for refined annotation guidelines and model adaptations, such as class-specific confidence thresholds or multi-label classification approaches.
Additionally, when interpreting correctness, it is important to note that our LLM annotation process was designed to rely on independent LLM instances, without mechanisms for discussion or disagreement resolution, unlike human annotators in our study. As a result, 39 sentences $( 3 . 5 \% )$ in LLM agreement $( L L M _ { A } )$ were left without an assigned label, since interLLM discussions, akin to human deliberation, were beyond the scope of this study. Similarly, before discussion, human annotations led to 34 sentences $( 3 . 1 \% )$ without an assigned label, which were later resolved through deliberation. Integrating discussion mechanisms into LLM annotation workflows could enhance performance [59], particularly for ambiguous emotions where disagreement is more prevalent.
TABLE IIILLM AGREEMENT $\cdot L L M _ { A }$ ) CORRECTNESS PER ANNOTATION LABEL
2) Cost-efficiency Analysis: Table IV compares the costefficiency of human and LLM-based annotators7. For better comparability, the table reports average time and cost per annotator. For human annotators $( A n n _ { i } )$ , we compute the average across the three annotators participating in each iteration. For LLM annotators ${ G P T } _ { i }$ , M istrali, Geminii), we take the average across the three annotation runs. On average, a human annotator takes more than $7 \times$ longer to annotate the whole dataset $\approx 7$ hours) than the slowest LLM-based annotator8, Gemini $\approx 1$ hour). Cost disparity is even more pronounced – on average, one human annotator is $2 3 3 \times$ more expensive9 than the most costly LLM annotator, GPT-4o. These findings highlight the scalability of LLM-based annotation, offering a cost-effective alternative to manual annotation, particularly for iterative and large-scale annotation tasks.
TABLE IV COST-EFFICIENCY ANALYSIS (AVERAGE PER ANNOTATOR)
# V. DISCUSSION
# A. Research Findings
1) Which taxonomy of emotions is most suitable for annotating mobile app reviews? $( R Q _ { I } )$ : We establish Plutchik’s emotion taxonomy as the most effective framework for annotating mobile app reviews due to its structured categorization and relevance to key app-related emotions. Our selection is grounded by a systematic literature review (see Section IV-A), focusing on emotion popularity, frequency, and relevance to our domain, particularly for emotions such as Anticipation, which helps identify feature requests; Trust, which aligns with user goals and experiences; and Disgust, which signals critical issues leading to user rejection of an app or feature. While our study is based on this taxonomy, the proposed framework is fully adaptable. Researchers can refine annotation guidelines $( \mathsf { R Q } _ { 2 } )$ and modify input/output formats for LLMbased annotation $\left( \mathrm { R Q } _ { 4 } \right)$ . Additionally, while our focus is on emotions in app reviews, the literature review was conducted within the broader scope of software analysis. While our guidelines and taxonomy adaptations are tailored to the mobile app context $( \mathsf { R Q } _ { 2 } )$ , the same approach could be extended to other software-related user feedback domains, such as software product reviews or issue tracking systems.
2) How can the selected taxonomy be effectively adapted to the specific context of app reviews? $( R Q _ { 2 } )$ : Our adaptation of Plutchik’s taxonomy addresses the unique characteristics of mobile app reviews by combining structured guidelines with practical examples (Table I, Section IV-B). The dataset illustrates how these guidelines can be applied effectively, supporting both replication of our study and reuse of our guidelines and dataset in future research.
Developing clear, practical and unambiguous annotation criteria proved essential. To enhance practical usability, we iteratively refined our guidelines based on feedback from human discussions $( \mathsf { R Q } _ { 3 } )$ , incorporating disambiguations, additional examples, and specific criteria to distinguish overlapping emotions. For instance, we established clear distinctions between Sadness and Surprise (disappointment vs. unexpectedness) and Joy and Trust (appraisal vs. user engagement), helping annotators make more consistent decisions. Finally, our findings suggest that merging closely related emotions into broader categories could enhance classification performance while preserving interpretability.
3) What challenges arise when annotating app reviews with emotional labels? $( R Q _ { 3 } )$ : We identified four key challenges in emotion annotation (see Section IV-C). While these challenges primarily pertain to human annotation, they are also assessed in the context of LLM-based automated annotation (see discussion on $\mathrm { R Q } _ { 4 }$ ). These challenges serve as groundwork for identifying design strategies to support automated emotion extraction approaches. Mainly, we propose the following:
1) Incorporating confidence scores and human-in-theloop mechanisms to prioritize high-certainty emotions while flagging low-confidence predictions. This mitigates the difficulty of defining boundaries between contiguous emotions (Challenge 1) and helps establish thresholds for subtle emotional intensity by reducing overinterpretation and underreporting (Challenge 3).
2) Leveraging attention-based architectures (e.g., BERT [60]) with explainability techniques (e.g., SHAP [61]) to improve transparency and traceability of mixed or conflicting emotions. This ensures better interpretability of overlapping emotional signals within a single sentence (Challenge 2).
3) Employing context-enhanced input pipelines that integrate full reviews and external metadata (e.g., app name, category, or user rating) to improve emotion prediction accuracy. This helps address the lack of context and linguistic ambiguities that hinder sentence-level emotion extraction (Challenge 4).
4) Using multi-label classifiers or ensembles of binary classifiers to better capture complex emotional expressions. This ensures that non-mutually exclusive emotions can be effectively modeled, particularly when multiple emotions co-occur in the same sentence (Challenge 2).
4) How does LLM-based annotation compare to human annotation in emotion classification for app reviews? $( R Q _ { 4 } )$ : While LLMs provide a cost-effective alternative, human annotation remains the most reliable reference point, reinforcing their role as ground truth. Although LLMs exhibit internal consistency, they do not perfectly align with human interpretations, needing calibration or fine-tuning for domain-specific tasks. Using LLM annotations as a substitute for human annotation requires validation, as their agreement with human labels is moderate but not exact.
LLM-based agreement and correctness assessment demonstrates that the challenges affecting human annotation $( \mathsf { R Q } _ { 3 } )$
also apply to LLMs. Poor performance in Anger and Disgust correlates with the difficulty of delineating overlapping negative emotions (Challenge 1). Similarly, frequent misclassification of Surprise suggests ambiguity in emotional intensity and context, reflecting human annotators’ struggles in defining arousal thresholds (Challenge 3). Furthermore, the instability in low-precision classes highlights the limitations of sentencelevel annotation without broader contextual cues, emphasizing the need for context-enhanced pipelines (Challenge 4).
Although we evaluate LLM correctness relative to a humanannotated ground truth $( A n n _ { A } )$ , our study does not assess the performance of LLMs in fully automated emotion extraction within user feedback analysis pipelines. Instead, we focus on the accuracy of LLMs as an alternative to human annotators, examining the cognitive challenges and limitations humans face when constructing foundational resources (e.g., an annotated dataset) for emotion classification.
Future research can reuse our dataset $( A n n _ { A } )$ to support automatic emotion extraction in various LLM-based settings. Encoder-only LLMs, such as BERT, RoBERTa, or XLNet, could be applied in a supervised text classification setting. Alternatively, decoder-only – generative – LLMs could be employed via fine-tuning or few-shot prompt engineering using partial subsets of our dataset. Further investigations should explore and compare multiple LLM-based annotation strategies, capitalizing on the ground truth, challenges, and design recommendations derived from this study.
# B. Threats to Validity
Concerning internal validity [62], annotators relied on the full review to resolve linguistic ambiguities, which may have introduced unintended dependencies on contextual information. While this approach improved consistency, it also introduced the risk that certain emotional cues may have been interpreted differently when considering broader review contexts. Additionally, personal bias during human annotation posed a threat, particularly when annotators encountered ambiguous or overlapping emotional expressions. While involving five annotators and the generation, discussion, and refinement of the guidelines reduced this risk, differences in annotator interpretations may still have led to slight variations in the annotated dataset. Furthermore, annotation challenges such as distinguishing subtle variations in intensity and resolving conflicting emotions (e.g., Joy vs. Trust, Sadness vs. Disgust) may have affected annotation reliability. To address this, we specifically addressed these challenges through dedicated discussions and mitigation actions to improve the guidelines. Furthermore, we report these challenges to assist further research in considering the limitations of our dataset properly.
For construct validity, the selection of Plutchik’s taxonomy might not fully capture all possible emotional states present in app reviews. We mitigated this risk by conducting a thorough literature review before selecting the taxonomy. In addition, as our research design remains agnostic to a specific taxonomy, our replication package enables adaptation to alternative emotion taxonomies and annotation schemes. Another limitation stemming from the annotation process (both human and LLM) is the assessment of emotional intensity, where subtle variations in arousal remain difficult to measure consistently.
Finally, concerning external validity, our dataset may not generalize to all app review domains. Although reviews span diverse categories and features, further validation is needed to assess applicability across other domains, languages, and cultural contexts. Linguistic differences and user demographics may affect how emotions are conveyed in app reviews, suggesting the need for cross-domain evaluation. Regarding LLM selection, our results may not fully generalize to other architectures or newer models. Exploring alternatives like DeepSeek10 (with restricted API access during this research) or OpenAI reasoning models (e.g., o1), now available via API, could yield different outcomes, potentially improving agreement or introducing new annotation biases.
# VI. RELATED WORK
# A. Emotion Annotation of App Reviews
The use of multi-class, fine-grained emotion taxonomies in app reviews remains limited, though some related work exists. Riccosan published an Indonesian dataset of app reviews annotated with Parrott’s taxonomy [36]. While relatively large (20K reviews), it is not available in English. Moreover, their annotation relied on two annotators, reporting a Cohen’s Kappa of 0.61. They lack details on disagreement resolution and insights into how specific emotions were adapted to the app review domain. Moreover, emotions like Anticipation and Trust, relevant to app reviews, are not considered.
Several studies have explored methods for inferring emotions from app reviews using proprietary datasets. Malgaonkar et al. developed a tool integrating a WordNet-based lexicon method to identify Ekman’s emotions from a dataset of $5 3 \mathrm { K }$ reviews [53]. Similarly, Keertipati et al. applied a lexicon-based method using the LIWC dictionary for three negative emotions to analyse their correlation with app features [52]. Singh et al. manually annotated 2K mobile learning app reviews using also a lexicon-based approach aligned with Plutchik’s taxonomy [51], linking emotions to review descriptors like ratings, technical quality and usefulness. Beyond lexiconbased methods, Savarimuthu et al. employed IBM Watson’s Tone Analyzer to extract emotions as descriptors for assessing data waste in mobile app reviews [57]. Lastly, Cabellos et al. [54] manually analysed video game reviews using Liew & Turtle’s taxonomy to align emotions with moral aspects. However, these datasets are unavailable, lack emotion annotations, and do not evaluate extraction methods empirically. This underscores the relevance of emotion analysis but highlights the scarcity of annotated datasets.
# B. LLM-based Annotation
The potential of LLMs for human-like reasoning tasks, combined with the need for large domain-specific datasets to reduce hallucinations and errors, has driven research into their use as data annotators [45]. Heseltine et al. analysed the performance of multiple annotation runs using OpenAI’s GPT-4 for political text annotation [63]. Their findings suggest that while LLM-assisted tagging achieves high accuracy for simple tasks in cost-efficient settings, it struggles with complex and subjective analyses, such as sentiment annotation, where interpretation often varies between annotators [64]. Similarly, Sayeed et al. evaluated Gemini for text classification in materials science, reaching comparable conclusions [65]. Research has further explored LLM annotation across various fields, including mathematics [66], finance [67], and linguistics [68]. To address these limitations, Kim et al. proposed MEGAnno+ [59], a human-LLM collaborative framework designed to enhance the reliability and robustness of LLMgenerated labels. Their approach integrates a human-in-theloop mechanism to verify LLM annotations, concluding that fully autonomous annotation remains prone to errors, requiring human oversight for reliability. Similar studies investigate additional dimensions, such as the explainability [69] and costeffectiveness [70] of human-LLM collaboration in annotation tasks. While several domain-specific studies have been conducted, further research is needed to assess the reliability of these agents and explore improvements through alternative annotation mechanisms. To this end, and in line with our findings, hybrid approaches that combine expert validation with automated annotations may provide a balanced solution for generating datasets to support supervised extraction methods. | Opinion mining plays a vital role in analysing user feedback and extracting
insights from textual data. While most research focuses on sentiment polarity
(e.g., positive, negative, neutral), fine-grained emotion classification in app
reviews remains underexplored. This paper addresses this gap by identifying and
addressing the challenges and limitations in fine-grained emotion analysis in
the context of app reviews. Our study adapts Plutchik's emotion taxonomy to app
reviews by developing a structured annotation framework and dataset. Through an
iterative human annotation process, we define clear annotation guidelines and
document key challenges in emotion classification. Additionally, we evaluate
the feasibility of automating emotion annotation using large language models,
assessing their cost-effectiveness and agreement with human-labelled data. Our
findings reveal that while large language models significantly reduce manual
effort and maintain substantial agreement with human annotators, full
automation remains challenging due to the complexity of emotional
interpretation. This work contributes to opinion mining by providing structured
guidelines, an annotated dataset, and insights for developing automated
pipelines to capture the complexity of emotions in app reviews. | [
"cs.IR",
"cs.SE"
] |
# 1 Introduction
Aquaculture plays a crucial role in satisfying the growing global demand for fish and providing a sustainable food source (Boyd et al., 2022). According to the Food and Agriculture Organization of the United Nations, global fisheries and aquaculture production has reached 223.2 million tons, with aquaculture surpassed capture fisheries in aquatic animal production for the first time (FAO, 2024). However, as the scale of aquaculture continues to expand, the problems of greater management difficulties, serious feed wastage and frequent occurrence of diseases have become more and more prominent (Garlock et al., 2020; Naylor et al., 2023). Studies have shown that the behavioral changes of fish during feeding reflect their desire to feed (MacGregor et al., 2020; Assan et al., 2021; Syafalni et al., 2024), and further quantification of their feeding intensity can determine whether the baits being fed are excessive or insufficient. However, the quantification of fish feeding intensity in actual production depends on the observation and recording of farmers. Although this method is intuitive and easy to operate, it is time-consuming and labor-intensive and has large errors. In addition, feed feeding also depends on the experience and habits of the breeders, which is highly subjective and difficult to be standardized. Although machine-controlled feeding can save labor costs, it lacks the ability to dynamically adjust according to the real-time feeding needs of fish, which is easy to cause a waste of feed resources. Therefore, realizing real-time accurate quantification of fish feeding intensity has become the key to solving the problem of precise feeding and promoting the high-quality development of aquaculture industry.
In recent years, the booming development of new-generation information technology has brought new opportunities for aquaculture. Researchers have gradually used artificial intelligence and advanced instruments to identify and analyze the fish feeding behavior, making the interpretation of fish behavior more accurate and objective. Currently, intelligent analysis technologies for fish behavior have made great progress, including computer vision (Ubina et al., 2021; Wang, Yu, et al., 2023; Wu et al., 2024), acoustic (Zeng et al., 2023; Du, Xu, et al., 2023; Iqbal et al., 2024) and sensor (Adegboye et al., 2020; Ma et al., 2024). Among them, computer vision has become the mainstream method for the current quantitative research of fish feeding intensity due to its advantages of low cost, non-invasiveness and reliability. However, this technology is easily affected by the target environment when collecting optical images of fish. The image quality of fish bodies varies under different backgrounds, which in turn affects the recognition of key features such as color, texture and shape of the image. Therefore, the analysis effect of image and spectral data depends largely on the optimization of the algorithm, and the anti-interference ability is insufficient in complex and diverse environments. Compared with computer vision, acoustic technology is not limited by light and water turbidity, and shows great application potential in the field of fish feeding behavior analysis (S. Zhang et al., 2025). In acoustic monitoring, hydrophones are often used as signal acquisition devices, which can display the monitored frequency, energy and waveform data in real time, but they are also susceptible to the interference of non-feeding sounds. In high-density aquaculture environments, it is necessary to pay attention to the impact of the fish body on the hydrophone to affect the monitoring results. In addition, the fish feeding behavior can be effectively monitored by implanting accelerometers and other motion information acquisition devices, but the results of individualized behavioral tests are difficult to be used as a true reflection of group behavior. With the popularization of the concept of fish welfare farming, this invasive monitoring method is increasingly incompatible with the requirements of modern intensive farming.
Multimodal fusion technology can usually achieve significantly better generalization performance than single-modal models, and this fusion process also greatly improves model applicability (W. Li et al., 2024). Although existing studies have achieved effective quantification of fish feeding intensity through multimodal data fusion (Gu et al., 2025; Yang et al., 2024; Z. Zhang et al., 2025), they still face multiple challenges. First, the existing framework's over-reliance on audio-visual channels amplifies system vulnerability. The visual modality is susceptible to light attenuation, water scattering changes, and target occlusion effects, while the acoustic signal is extremely sensitive to water flow noise, device self-interference, and multipath propagation effects, so that the superposition of the two may increase failure probability of the existing system. Although changes in water quality parameters can indirectly reflect feeding conditions (K. Zhang et al., 2025), local water quality changes have limited impact on overall environmental parameters. Moreover, in recirculating aquaculture systems, water quality parameters usually remain relatively stable, which further weakens their practical value in quantifying fish feeding intensity. Second, existing feature fusion paradigms have obvious limitations. Mainstream methods generally adopt strategies such as channel cascade splicing, static weight allocation mechanism and late fusion (Du et al., 2024; Zheng et al., 2024). These methods essentially treat multimodal features as independent information units and perform linear combinations, ignoring the higher-order semantic associations and dynamic complementarities between modalities. This discretization method makes the fusion process unable to capture the nonlinear interaction relationship between cross-modal features, such as the spatiotemporal coupling pattern of visual kinematics and acoustic energy spectrum, which seriously restricts the robustness of feature expression. Finally, existing methods generally ignore the multimodal joint reasoning mechanisms at the decision level. The feature fusion is limited to the data layer or feature layer, lacking cross-modal decision-level co-optimization. Relevant studies have shown that the cognitive ambiguity between modalities can be effectively eliminated by constructing a cross-modal confidence allocation mechanism, thereby improving quantization accuracy (Z. Zhao et al., 2025).
Therefore, aiming at the applicability limitations of single-modality models in complex scenarios and the deficiencies of multimodal models in the information interaction mechanism in existing studies on the fish feeding intensity quantification, and considering the practical needs of factory-based recirculating water aquaculture systems, this study innovatively proposes a MAINet model to further improve the accuracy and reliability of the quantification results of fish feeding intensity. The main contributions of this study are as follows:
1) Novel multimodal dataset: This study innovatively integrates visual images, acoustic signals and water wave data, and constructs a multimodal dataset containing 7089 sets of spatiotemporal synchronous annotations, which provides a richer information dimension for the analysis of fish feeding behavior.
2) General feature extraction network: This study proposes a multimodal feature extraction framework based on a unified architecture. The framework uses the large-scale convolutional kernel model UniRepLKNet as the feature extractor for image, audio and water wave time-series data of feeding, and achieves the co-optimization of the feature space through the architecture consistency design.
3) Multimodal feature interaction module: A novel Auxiliary-modality Reinforcement Primary-modality Mechanism (ARPM) is designed to capture the correlation between modalities. By quantifying the influence of the auxiliary modality on the primary modality, more refined intermodal information interaction is achieved. Additionally, the downsampling layer is used for intramodal feature fusion to obtain the intra-modal long-range spatial dependence, so as to generate highquality fused feature vector.
4) Decision fusion strategy: A decision fusion method based on Evidential Reasoning rule (ER) is introduced, which achieves more accurate and robust fusion decision by weighing the conflict and consistency between the output results of each modality.
5) The experimental results show that the performance of MAINet is significantly higher than that of the comparison models, effectively improving the accuracy and stability of the quantification results of fish feeding intensity.
# 2 Related work
In quantifying fish feeding intensity using computer vision technology, Hu et al.(2015) analyzed the aggregation degree and the splash area produced by the fish during feeding, and used the area ratio of the both as a characteristic parameter to characterize the hunger level of fish. Zhou et al.(2017) took the average perimeter of the Delaunay triangle as the aggregation index of the fish school to quantify the feeding intensity. Although the method resulted in a correlation coefficient of 0.945 with the expert scores, it was affected by the interference of fish overlap. To this end, W. Hu et al.(2022) develop a computer vision-based intelligent fish farming system that determines whether to continue or stop feeding by recognizing the size of waves caused by fish eating feed. Wu et al.(2024) proposed a new method for assessing the feeding intensity using the fish feeding splash thumbnails, effectively eliminating the influence of water surface reflections, light spots and ripples on the quantification results. But it's not suitable for fish fry farming or low-density farming environments due to the inconspicuous splashing phenomenon produced by fish. L. Zhang et al. (2024) proposed a quantification method based on dual-label and MobileViT-SENet by considering the dynamic changes of biomass, density and feeding intensity of fish, which showed excellent performance in quantifying the feeding intensity of fish under different density conditions. In addition, to address the issue of limited accuracy in lightweight models, Xu et al. (2024) improved the lightweight neural network MobileViT by introducing convolutional block attention module and Bi-directional long short-term memory, achieving an accuracy of $9 8 . 6 1 \%$ in recognizing the fish feeding intensity. H. Zhao et al. (2024) proposed a new method for assessing appetite based on individual fish behavior, which utilized ByteTrack model and spatiotemporal graph convolutional neural network for tracking and motion feature extraction of individual fish, avoiding data loss caused by fish school stacking.
Audio information is an important carrier for the fish feeding behavior research, and its characteristic differences in different satiation states provide a scientific basis for the quantification of feeding intensity. Cao et al.(2021) obtained the feeding acoustic signals of largemouth bass in circulating aquaculture using passive acoustic techniques, and successfully filtered out the characteristic parameters that could measure the feeding activity from the mixed signals. Cui et al. (2022) further converted the acoustic signals into Mel Spectrogram (MS) features, and used a Convolutional Neural Network (CNN) model to classify the feeding intensity of fish with a mean average precision of 0.74. Although the CNN model has advantages in the partial field of vision, it has limitations in dealing with global features. Therefore, Zeng et al.(2023) proposed an audio spectrum Swin Transformer model based on the attention mechanism, reaching an accuracy of $9 6 . 1 6 \%$ in the task of quantifying the fish feeding behavior. Du, Cui, et al.(2023) extracted MS feature maps using multiple steps including preprocessing, fast Fourier transform and Mel filter bank, and input them into the lightweight network MobileNetV3-SBSC to complete the quantification of fish feeding intensity. This method has a fast recognition speed, but is not applicable to low breeding density scenarios. Further, Du, Xu, et al.(2023) proposed a novel fish feeding intensity detection method fusing MS, short-time Fourier transform and constant Qtransform feature maps, which had significantly better accuracy than the scheme using a single feature, but the combination of multiple strategies makes the model complexity higher. To address this problem, Iqbal et al.(2024) introduced a novel involutional neural network that can automatically capture label relationships and self-attention in the acquired feature space, resulting in a lighter architecture and faster inference time.
In addition to computer vision and acoustic, sensors have also been applied to the fish feeding intensity quantification. Biosensors are surgically inserted into the abdominal cavity or immobilized on the body surface to continuously monitor the fish behaviors and physiological parameters such as heart rate, temperature, orientation and acceleration over time (Makiguchi et al., 2012; Clark et al., 2013; Brijs et al., 2021). However, the invasiveness of implantable sensors poses a potential hazard to fish, limiting their practical application. Subakti et al.(2017) utilized sensors suspended on the water surface to sense the acceleration caused by the surface wave as a way to monitor the feeding activities of fish near surface water. Ma et al.(2024) introduced a six-axis inertial sensor to increase the data of angular velocity and angle, and proposed a Time-domain and requency-domain fusion model for quantifying the fish feeding intensity. The method can avoid the interference of equipment vibration noise, fish overlap, water turbidity and complex lighting. In addition, water quality parameters such as water temperature, dissolved oxygen and ammonia nitrogen compounds interact with the feeding behavior of fish (D. Li et al., 2020; K. Zhang et al., 2025). For example, the feeding activity of fish will lead to the localized decrease of dissolved oxygen concentration, and changes in dissolved oxygen concentration will directly affect fish appetite and food intake (D. Li et al., 2017). S. Zhao et al.(2019) took water temperature and dissolved oxygen concentration as input parameters of the adaptive neuro-fuzzy inference system model to determine fish feeding, and used a hybrid learning approach to optimize the parameters and fuzzy rule base. The Nash-Sutcliffe efficiency coefficient and root mean squared error of the model outperformed traditional fuzzy logic control and artificial feeding methods. Chen et al.(2020) proposed a fish intake prediction model based on back propagation neural network and mind evolutionary algorithm, which successfully established the mapping relationship between fish intake and environmental factors and biomass by using temperature, dissolved oxygen, weight and number of fish as input variables, avoiding the subjectivity of traditional methods.
The rapid development of multimodal fusion technology has also provided new ideas for quantifying fish feeding intensity. Syafalni et al.(2024) proposed a multimodal sensor-based method for fish appetite detection, which used residual $( 2 + 1 )$ -dimensional CNN and dense network to process video and accelerometer data, with an accuracy rate of up to $9 9 . 0 9 \%$ on the validation set. Du et al.(2024) developed a multi-modal fusion framework called multimodal fusion of fish feeding intensity, which combines deep features from audio, video and acoustic data and outperforms mainstream single-modality methods. X. Hu et al.(2023) added a multimodal transfer module and adaptive weights to the MulT algorithm to achieve effective fusion of feature vectors and dynamic adjustment of modal contributions, and further optimized the number of cross-modal transformers. $\boldsymbol { \mathrm { J . X u } }$ et al.(2023) proposed a multi-level fusion model based on sound and visual features to identify fish swimming and feeding behaviors under complex conditions, which fuses modal features from different stages through a designed jump connection module. Yang et al.(2024) designed a U-shaped bilinear fusion structure to achieve more interaction between sound and visual features, and introduced a time aggregation and pooling layer to retain the optimal feature information of fish. In addition, Zheng et al.(2024) used near-infrared images and depth maps to characterize fish feeding behavior, combining the feature information of feeding dynamics, water level fluctuation and feeding audio by weighted fusion. Gu et al.(2025) developed an audio-video aggregation module consisting of self-attention and cross-attention mechanisms and introduced a lightweight separable convolutional feedforward module to reduce model complexity, achieving a balance between the speed and accuracy of quantifying fish feeding intensity.
# 3 Proposed method
# 3.1 An overview of the architecture
This study proposes a novel multimodal fusion model MAINet, which aims to strengthen the interaction and fusion process between modalities with different information density characteristics, thereby improving the accuracy of quantifying fish feeding intensity. The overall architecture of MAINet is shown in Figure 1, which mainly consists of a general feature extraction module, a multimodal feature progressive interaction module, and a Decision Fusion Module (DFM). As the basic component of the model, the general feature extraction module focuses on extracting core information elements that can fully reflect the original data from each modal data, providing highquality feature input for subsequent multimodal interactions. The multimodal feature progressive interaction module consists of ARPM and downsampling layers to ensure that the low-level and high-level features of each modality can effectively integrate the complementary information of other modalities, thereby achieving further enrichment and enhancement of features. DFM adopts a new evidence fusion strategy, which uses the potential conflict between the output results of different modalities to improve the consistency of the results of each modality, thereby providing more reliable analysis results for the quantification of fish feeding intensity.
Figure 1. Overall architecture of the MAINet. $F _ { 0 } - F _ { 3 }$ represents the outputs of the four feature extraction stages.
# 3.2 Multimodal feature extraction
Faced with the need for multimodal data processing in the task of quantifying fish feeding intensity in complex scenarios, existing studies mostly use modality-specific heterogeneous models for independent feature extraction. However, the differences between heterogeneous models limit the optimization space of feature fusion strategies, resulting in cross-modal information interaction that can only be achieved through simple feature concatenation or late decision fusion, which restricts the performance improvement of quantification models. To this end, this study proposes a multimodal feature extraction framework based on a unified architecture, which uses a large-scale convolution kernel model UniRepLKNet as the feature extractor for feeding images, feeding audio and water wave time series data, and achieves collaborative optimization of feature space through architectural consistency design. The model structure is shown in Figure 2.
Figure 2. The structure of UniRepLKNet.
Although UniRepLKNet is originally designed for image tasks, it has demonstrated excellent performance in multi-task scenarios such as audio, point cloud and time series data (Ding et al., 2024). Its success lies in the guiding principles followed in designing the large-core CNN architecture. First, some efficient components like Squeeze-and-Excitation (SE) or bottleneck are used to increase the model depth in the local structure design, enabling it to better learn and represent the complex features of the input data while maintaining computational efficiency. Second, a module called Dilated Re-param Block is proposed. Using the idea of structural Re-parameterization, the module is equivalently converted to large-kernel convolution, which enables the model to more effectively capture sparsely distributed features in space and significantly enhance its ability to perceive complex patterns. In addition, the choice of kernel size should fully consider the downstream tasks and the specific framework used. Although low-level features obtaining excessively large receptive fields too early may have negative effects, this does not mean that large kernels will reduce the representation ability of the model or the quality of the final features. The conclusion proposed by RepLKNet that “increasing the kernel size will not worsen performance” has been revised to some extent, but for the fish feeding intensity quantification task in this paper, a kernel size of $1 3 { \times } 1 3$ is enough to meet the requirements. Finally, when expanding the model depth, depthwise $3 { \times } 3$ convolution blocks are used instead of more large convolution kernel layers. Although the receptive field is already large enough, using efficient $3 { \times } 3$ operations can still improve the abstraction level of features. This strategy ensures that the model can understand and express input data at a higher level while maintaining computational efficiency.
In terms of multimodal data processing, UniRepLKNet demonstrates high simplicity and versatility. For non-image data, it only needs to be processed into a $\mathrm { C } { \times } \mathrm { H } { \times } \mathrm { W }$ embedding map format without modifying the main model architecture. In this paper, image data is represented as a $3 \times 2 2 4 \times 2 2 4$ tensor. Audio data is converted into a Mel-spectrogram (Kong et al., 2020), and the dimension size is adjusted using the adaptive average pooling. The processed two-channel audio data is represented as a $2 \times 2 2 4 \times 2 2 4$ tensor. Additionally, following the minimalist processing method of UniRepLKNet, the water wave data is converted into a tensor in the latent space and then directly reshaped into a single-channel image format. The same method is used for dimensionality adjustment to ensure consistency with the image and audio data for the fusion of multimodal features. After the processed multimodal data is input into UniRepLKNet, the output of each feature extraction stage will be fusion, ultimately outputting three feature vectors with a size of 512.
# 3.3 Multimodal feature interaction
Based on a multimodal feature extraction framework with a unified architecture, this study proposes an innovative ARPM module, which is mainly composed of a Channel Attention Fusion Network (CAFN) and a Dual-mode Attention Fusion Network (DAFN), as shown in Figure 3. The CAFN is used for adaptive fusion of input features. The DAFN contains two functional variants: DAFN-1 focuses on the initial fusion of original input features, and adopts a serial structure that combines self-attention and cross-modality attention; DAFN-2 adopts a parallel cross-modality attention structure to achieve deep integration of mixed features.
Figure 3. The structure of ARPM.
ARPM adopts a two-stage progressive fusion architecture. In the first stage, the modality $F a$ is selected as the primary modality feature, and the remaining two modalities $F b$ and $F c$ are used as the auxiliary modality features. In DAFN-1, the internal correlation of $F b$ (or $F \overset { \cdot } { c }$ ) is first modeled through a multi-head self-attention mechanism to generate self-enhancement feature to retain useful information and reduce redundancy (Vaswani et al., 2017). Then, the self-enhancement feature is used as query, and $F a$ is used as key and value for multi-head cross-modal attention calculation to generate cross-modal fusion features. Subsequently, the self-enhancement feature is fused with the cross-modal fusion feature, and the information is refined by the multi-head self-attention mechanism to form the attention fusion feature Aab (or $A a c \mathrm { ~ \ r ~ { ~ R ~ a ~ c ~ } ~ }$ . Where, for the feature $F b$ , the formula for the multi-head self-attention mechanism is:
$$
M u l t i H e a d S A ( F _ { b } ) = C o n c a t ( h e a d _ { 1 } , \dotsb \qquad ) W ^ { O }
$$
$$
h e a d _ { i } = S e l f A t t e n t i o n ( { F _ { b } } ) = s o f t m a x ( { \frac { { Q _ { b i } } { K _ { b i } } ^ { T } } { \sqrt { d _ { k } } } } ) V _ { b i }
$$
$$
Q _ { b i } = F _ { b } W _ { i } ^ { Q } , K _ { b i } = F _ { b } W _ { i } ^ { K } , V _ { b i } = F _ { b } W _ { i } ^ { V } \ ( W _ { i } ^ { Q } , W _ { i } ^ { K } , W _ { i } ^ { V } \in \mathbb { R }
$$
where: $\scriptstyle Q , K$ and $V$ denote the query matrix, key matrix and value matrix, respectively; $\iiint _ { i } ^ { Q } , W _ { i } ^ { K } , W _ { i } ^ { V }$ are the projection matrices of the $i$ -th head; $d _ { k }$ is the dimension of the key vector, $d _ { k } = d _ { m } / h$ . In this study, $d _ { m } = 2 5 6$ ; $h = 4$ .
Meanwhile, in order to solve the problem of modal weight solidification caused by manually specifying the primary modality(Wang, Li, et al., 2023), CAFN is introduced to dynamically calibrate the input modalities, and generate the reconciliation fusion feature Gab (or Gac) to eliminate the inconsistency between the primary and auxiliary modalities. Finally, Aab (or Aac) and Gab (or Gac) are merged to form the shallow interaction feature Fab (or Fac), realizing the preliminary integration of multimodal information dominated by the primary modality. CAFN is an extension of SENet (J. Hu et al., 2020). It first concatenates two features in the channel dimension, then uses Squeeze and Excitation to model the interdependence between channels, and finally multiplies the output weights by channel-by-channel to obtain the fused features.
In the second stage, the hierarchical distinction between primary and auxiliary modalities is eliminated, and DAFN-2 with a symmetric dual cross-modal attention structure is adopted. Specifically, Fab and Fac are alternately used as query and key-value pairs for multi-head crossmodal attention interactions, which can deeply mine complementary information in mixed features from different perspectives. For example, for features $F a$ and $F b$ , the multi-head cross-attention mechanism is formulated as:
$$
M u l t i H e a d S A ( F _ { a } , F _ { b } ) = C o n c a t ( h e a d _ { 1 } , \dotsb \qquad ) W ^ { O }
$$
$$
h e a d _ { i } = C r o s s { A t t e n t i o n ( F _ { a } , F _ { b } ) } = s o f t m a x ( { \frac { Q _ { b i } K _ { a i } ^ { \textit { \texttt { T } } } } { \sqrt { d _ { k } } } } ) V _ { a i }
$$
$$
Q _ { b i } = F _ { b } W _ { i } ^ { Q } , K _ { a i } = F _ { a } W _ { i } ^ { K } , V _ { a i } = F _ { a } W _ { i } ^ { V } ( W _ { i } ^ { Q } , W _ { i } ^ { K } , W _ { i } ^ { V } \in \mathbb { R }
$$
Considering the inconsistency between modalities lurking in cross-modal attention(Wang, Li, et al., 2023), CAFN is used to adaptively fuse the interacted features, and then the depth-enhanced feature Aabc is generated by the multi-head self-attention mechanism. Finally, Aabc and $F a$ are residually fused to form the primary modality enhancement feature $F a ^ { * }$ with both modal specificity and complementarity. This stage overcomes the limitation of unidirectional information flow through the bidirectional symmetric interaction mechanism and realizes the deep synergistic expression of multimodal features.
# 3.4 Decision fusion module
In the decision fusion stage, this study proposes a decision enhancement strategy based on multimodal confidence evidence synthesis (as shown in Figure 4), which aims to address the potential semantic representation limitations in the process of cross-modal heterogeneous data fusion. The strategy also achieves the deep integration of multimodal decision information through a two-stage reasoning mechanism.
Figure 4. The structure of DFM.
Specifically, the fused enhanced features $( F a ^ { * } , F b ^ { * } , F c ^ { * } )$ generated after multimodal feature interaction of each modality data are independently input into the corresponding decision network to produce modality-specific classification results $( R a , R b , R c )$ . This process ensures that the different modal data can complete primary decision inference while maintaining their own semantic integrity. The ER is introduced after obtaining the multimodal independent decision results(Yang et al., 2018; X. Xu et al., 2020) , which converts the classification confidence of each modality into basic probability distribution and performs higher-order evidence synthesis through the orthogonal sum rule. This fusion mechanism can quantify the conflict degree and increase the consistency of inter-modal decisions, thus obtaining a more robust global decision output $R$ . The evidential reasoning process is as follows:
1) Considering the independent classification results of each modality as evidence in the ER, there are $M$ independent evidence ( $M$ is the number of modalities).
2) Convert each piece of evidence into the form of confidence distribution. The $i$ -th evidence is represented as:
$$
e _ { m } = \left\{ \left( \theta _ { n } , p _ { n , m } \right) , n = 1 , 2 , \cdots \qquad p _ { \Theta , m } \right\}
$$
where: $\theta _ { n }$ is the quantitative level of feeding intensity; $p _ { n , m }$ denotes the confidence level that the $m$ -th evidence is assessed as level $\theta _ { n }$ , which satisfies ${ \sum } _ { n = 1 } ^ { N } p _ { n , m } = 1 ; \Theta = \{ \theta _ { 1 } , \theta _ { 2 } , \cdots , \theta _ { N } \}$ is the discriminative framework; $p _ { \Theta , m }$ denotes global ignorance.
3) In the ER, the evidence weight $w _ { i }$ is regarded as the preference degree of the decision maker for an evidence item, and the evidence reliability $r _ { i }$ is regarded as the reliability degree of the source of the evidence item. The two correspond to the subjective and objective attributes of the evidence, respectively, and are generally set by simple calculation or subjectively. In this study, $w _ { i }$ and $r _ { i }$ are set as learnable parameters, which are obtained by adaptive optimization of the model during the training process. The weighted confidence distribution of the evidence after increasing the reliability is denoted as:
$$
\begin{array} { r l } & { \quad \left( 0 \qquad , \theta = \emptyset \right. } \\ { \tilde { i } \quad } & { \quad \left. \sum _ { \theta \in \Theta } p _ { \theta , m } \right. , \theta \subseteq \Theta , \theta = \emptyset } \\ & { \quad \left. \left\lfloor c _ { r \nu , m } ( 1 - r _ { m } ) \ , \ \theta = P ( \Theta ) \right. \right. } \\ & { \quad \left. \sum _ { \theta \subseteq \Theta } \tilde { i } ^ { \prime } \quad \sim \quad \tilde { \mathbf { \Gamma } } \right. } \end{array}
$$
where: $\varnothing$ denotes the empty set; $P ( \Theta )$ denotes the power set; $p _ { \theta , m } = \psi _ { m } p _ { \theta , m }$ ; $c _ { r w , m } = 1 / 1 + w _ { m } - r _ { m }$ is the normalization factor.
4) The joint confidence of the intensity quantification class $\theta _ { n }$ of the $M$ pieces of evidence for the feeding sample $x$ is obtained through equation (11):
$$
P _ { \theta _ { n } } ( x ) = \frac { L \Bigg [ \displaystyle \prod _ { m = 1 } ^ { M } c _ { r w , m } ( 1 - r _ { m } + \alpha _ { \theta _ { n } , m } ) - \displaystyle \prod _ { m = 1 } ^ { M } c _ { r w , m } ( 1 - r _ { m } ) \Bigg ] } { 1 - L \displaystyle \prod _ { m = 1 } ^ { K } c _ { r w , m } ( 1 - r _ { m } ) }
$$
Where: $L$ denotes the normalization factor, which is calculated as follows:
$$
L = \left[ \sum _ { n = 1 } ^ { N } \Biggl ( \prod _ { m = 1 } ^ { M } c _ { r w , m } ( 1 - r _ { m } + \alpha _ { \theta _ { n } , m } ) \Biggr ) - ( N - 1 ) \Biggl ( \prod _ { m = 1 } ^ { M } c _ { r w , m } ( 1 - r _ { m } ) \Biggr ) \right] ^ { - 1 }
$$
5) The joint confidence $( P _ { \theta _ { 1 } } ( x ) , P _ { \theta _ { 2 } } ( x ) , \cdots , P _ { \theta _ { N } } ( x ) )$ for each intensity of the fish feeding sample $x$ can be obtained by the above calculation process, and the intensity level $\theta _ { n }$ corresponding to the maximum value of the confidence will be used as the intensity quantification result for this sample.
# 4 Experiment
# 4.1 Datasets
# 4.1.1 Data collection
To verify the effectiveness of MAINet in quantifying fish feeding behavior, a real multimodal dataset is constructed. The data collection was conducted at the Guoyu Green Smart Aquaculture Factory in Shangluo City, Shaanxi Province, China. The experimental platform configuration is shown in Figure 5.
Figure 5. Data collection platform
The experimental system consists of four core modules: a recirculating aquaculture unit, an optical imaging unit, an acoustic monitoring unit and a water wave detection unit. The aquaculture unit employs a standardized recirculating water system, equipped with circular aquaculture pond with a diameter of $4 \mathrm { m }$ (water depth gradient $1 \pm 0 . 2 \mathrm { m }$ ). The experimental samples are adult rainbow trout (Oncorhynchus mykiss) with an average body length of $3 5 \pm 5$ cm and a weight of $1 . 3 5 \pm 0 . 1 5$ kg. The optical imaging system uses an industrial-grade 4K camera ( $3 8 4 0 \times 2 1 6 0$ resolution, 30 fps frame rate), mounted vertically on an adjustable telescoping tripod at a height of $3 ~ \mathrm { m }$ above the aquaculture pond. The acoustic monitoring system uses a high-frequency hydrophone (bandwidth $2 0 \mathrm { H z } { - } 5 0 \mathrm { k H z }$ ) fixed at the geometric center of the pond. The water surface fluctuation detection unit uses a six-axis accelerometer (WT9011DCL-BT50, sampling rate $2 0 0 ~ \mathrm { H z } )$ , which is waterproof-sealed and installed on a floating platform on the water surface.
During the collection period, water quality management strictly adhered to environmental control standards. Water temperature was maintained at $1 7 { \pm } 2 ^ { \circ } \mathrm { C }$ , dissolved oxygen concentration at was $1 2 { \pm } 2 ~ \mathrm { m g / L }$ , and $\mathsf { p H }$ was $6 . 7 { \pm } 0 . 2 \$ . Feeding strictly followed the farm's standardized operating procedures, with scheduled feedings at 8:00 and 16:00 daily. The single feeding amount is precisely calculated as $1 . 5 \%$ of the total fish weight (Azim & Little, 2008). The feeding process is divided into three rounds, with each round lasting 3 minutes. A 1-minute buffer interval is set between rounds, and the feeding amount decreases gradually in a 4:3:3 ratio. The feeding process is manually observed throughout, and the distribution of the feeding area is adjusted in real time to ensure the uniformity of feed diffusion. Concurrently, multi-modal data collection is strictly synchronized with the feeding operations.
# 4.1.2 Data processing
First, strictly adhere to the principle of spatiotemporal consistency, aligning video, audio, and water wave data at the millisecond level. The water wave acceleration sensor array can capture the surface wave characteristics triggered by fish feeding in real time, forming a spatiotemporal dual verification mechanism with the behavioral observation data from the optical system, effectively eliminating the limitations of single-modality observation. Next, a sliding window with a fixed width of 1 second is used for segmentation sampling, with a $50 \%$ overlap rate to preserve the temporal correlation of feeding behavior. Each sample group consists of synchronously collected three-modal data: a single frame of RGB images, 1 second of audio, and time-series data recorded by the water wave sensor along the X/Y/Z axes (including acceleration, angular velocity and angle).
Figure 6. Multimodal data visualization of different feeding intensities.
Finally, combined with the actual farming scene, fish feeding habits, the experience of aquaculture experts and existing feeding intensity assessment standards (Øverli et al., 2006), the feeding intensity is categorized into three levels: strong, weak and none. Strong feeding intensity corresponds to high-density fish aggregation, significant splashing and turbulence caused by intense feeding competition. Weak feeding intensity is manifested by scattered foraging behavior and localized water splash disturbance. The none feeding state is characterized by regular patrolling of fish schools and a calm water surface. This classification standard achieves objective quantification through multi-modal data features. The multimodal data features for different categories are shown in Figure 6. Strong feeding samples show high-density fish body overlapping features in the image, high-amplitude impact sounds in the audio spectrum, and high-frequency large-amplitude vibrations in the water wave data. Weak feeding samples correspond to lower amplitude values in the modality features. After strict screening and annotation, a fish feeding behavior dataset containing 7089 sets of synchronous multimodal data is finally constructed and divided into training, validation and test sets in a ratio of 8:1:1. The distribution statistics of the dataset are shown in Table 1.
Table 1. Distribution of fish feeding intensity quantification datasets.
# 4.2 Experiment setup
# 4.2.1 Experimental environment and parameter settings
The hardware environment for this experiment includes CPU:13th Gen Intel $\textsuperscript { \textregistered }$ CoreTM i9- $1 3 9 0 0 \mathrm { K } { \times } 3 2$ , RAM: 128GB, and GPU:2×NVIDIA GeForce $\mathrm { R T X ^ { \mathrm { { T M } } } } 4 0 9 0$ . The operating system is Ubuntu 23.04, and the code is implemented using the PyTorch framework. During the experiments, the batch size was set to 32 and the number of iterations was set to 100. The learning rate is 0.001. The model is trained for 100 epochs using a cross-entropy loss function and the Adam optimizer. A learning rate dynamic adjustment strategy was set to promote better convergence of the model, and when the validation set performance did not improve after 5 rounds, the learning rate was reduced by half.
# 4.2.2 Model evaluation metrics
This study uses the Accuracy, Precision, Recall and F1-Score derived from the confusion matrix to evaluate the performance of the model. As the base performance metric for the multiclassification task, accuracy reflects the overall correctness of the model in all category predictions. Precision represents the credibility of the model's positive predictions. Recall reflects the model's ability to identify positive samples. F1-Score is the reconciled average of precision and recall, which evaluates the comprehensive performance of the model by balancing the two. The specific formulas are as follows.
$$
A c c u r a c y = \frac { T P + T N } { T P + T N + F P + F N } \times 1 0 0 \%
$$
$$
P r e c i s i o n = \frac { T P } { T P + F P } { \times } 1 0 0 \%
$$
$$
R e c a l l = \frac { T P } { T P + F N } { \times 1 0 0 \% }
$$
$$
F 1 - S c o r e = \frac { 2 \times P r e c i s i o n \times R e c l l } { P r e c i s i o n + R e c l l } \times 1 0 0 \%
$$
Where: $T P$ and $F N$ denote the number of samples in which the actual positive category is predicted to be positive and negative, respectively; $T N$ and $F P$ denote the number of samples in which the actual negative category is predicted to be negative and positive, respectively.
# 4.3 Results and discussion
# 4.3.1 Analysis of experimental results
The accuracy and loss curves of the MAINet on the training and validation sets are shown in Figure 7. It can be clearly seen that the training process of the model can be divided into three stages. In the initial stage of training, the model loss decreases rapidly, while the accuracy increases rapidly. This indicates that during the initial learning process, the key features and patterns in the data can be captured quickly, and the parameter update direction is correct and effective, which significantly improves the model performance in a short period of time. In the second stage, the loss gradually decreases with slight oscillations. This is because the slight fluctuation of the parameter update direction when the model is close to the local optimal solution, causing the loss to fluctuate within a certain range. However, the overall loss still shows a decreasing trend and stabilizes at around 0.017 after 50 epochs of training. Meanwhile, on the validation set, the model performance also goes through three phases of rapid increase, slow increase and stabilization. This shows that the MAINet can not only effectively learn data features on the training set, but also maintain good generalization ability on the validation set, proving the effectiveness and stability of the model.
Figure 7. Accuracy and loss curves of MAINet on training and validation sets.
This study visualizes the quantitative results of the MAINet on the test set based on the confusion matrix, as shown in Figure 8(a). It can be seen that only a few samples of the "None" class are misclassified as “Week”, and there is no case where "Strong" and "Week" are identified as “None”, which indicates that the model performs well in identifying feeding and non-feeding behaviors. Meanwhile, in the more fine-grained feeding intensity quantification process, the model also shows strong stability. Only a small number of samples are misjudged, and these misjudgments are mainly concentrated between adjacent feeding intensity categories. In addition, Figure 8(b) shows the performance metrics of the model at different feeding intensity. It can be seen that the MAINet has excellent classification performance at different feeding intensity, and the classification accuracy of each level are more than $96 \%$ .
Confusion Matrix 105 Accuracy Precision Recall F1-Score
n 232 1 0 -200 99.86 99.5710099.79 95 96.7695.76 96.9 150 95.8295.42
0 226 10 100 1 Tm
A 0 12 229 -50 -0 85 None Week Strong None Weak Strong Predicted Label Feeding intensity (a) Confusion matrix (b) Quantitative results for different feeding intensity.
# 4.3.2 Comparison of different models
To verify the performance of the MAINet in the task of fish feeding intensity quantification, it is compared with several state-of-the-art models, including homomorphic models using the same feature extractor for all three modalities and heterogeneous models using different feature extractors, such as MobileNet V4 (Qin et al., 2025), ConvNeXt V2 (Woo et al., 2023), RepLKNet (Ding et al., 2022) and UniRepLKNet (Ding et al., 2024). The results are shown in Table 2.
Table 2. Performance comparison of different models.
Note: MBNet represents MobileNet V4; CNext represents ConvNeXt V2; RCNet represents RepConvNet; URNet represents UniRepLKNet.
As shown in Table 2, MAINet achieves $9 6 . 7 6 \%$ , $9 6 . 7 8 \%$ , $9 6 . 7 9 \%$ and $9 6 . 7 9 \%$ in Accuracy, Precision, Recall and F1-Score respectively, which is better than other comparison models, indicating that the MAINet has good comprehensive performance in the task of quantifying fish feeding intensity. It is worth noting that the RepConvNet in the homogeneous model also achieves relatively outstanding performance, only slightly inferior to the MAINet. In the task of quantifying fish feeding intensity studied in this paper, large convolution kernels can play a significant advantage and help the model better capture feature information in the data. The performance of heterogeneous models varies significantly. For example, the accuracy of the heterogeneous models using RCNet, URNet and CNext on image, audio and water wave modalities can reach $9 5 . 9 2 \%$ , while the accuracy of the heterogeneous models using CNext, MBNet and URNet is only $7 6 . 3 4 \%$ . This demonstrates that different combinations of feature extractors have a significant impact on model performance. A reasonable combination of feature extractors give full play to the advantages of each modality, achieve efficient fusion and utilization of information, and thereby improve the classification accuracy of the model. Conversely, an inappropriate combination may lead to a decline in model performance. In addition, the number of parameters of the MAINet is 41.48M, which is higher than some of the comparison models. A higher number of parameters usually means that the model has stronger learning ability, but it may also cause the model to require more computing resources and time during training and inference. From the perspective of performance metrics, the impact of increasing the number of parameters is worthwhile because its performance improvement is more significant. In summary, the MAINet has good application prospects in solving the problem of quantifying fish feeding intensity.
# 4.3.3 Comparison of different modalities
To validate the effectiveness of multimodal fusion in quantifying fish feeding intensity, this study compares the performance of single-modal and multimodal fusion models, and the results are shown in Table 3. Among them, the features of each modality are extracted using UniRepLKNet, and the feature fusion method for dual-modality uses the DAFN-2 module proposed in this paper.
Table 3. Performance comparison between single-modal and multi-modal fusion models.
Note: $\surd$ indicates that this modality is used, $\times$ indicates that it is not used.
As shown in Table 3, the multimodal models significantly outperform single-modal models in overall performance. This result shows that there is complementarity between different modalities, and integrating multiple modal information can effectively improve the accuracy of feeding intensity quantification. Compared with the image modality, the performance of audio and water wave modalities is poor when used alone, with accuracy of only $5 3 . 6 6 \%$ and $5 0 . 2 8 \%$ , respectively. Although combining audio with water wave can increase the amount of information to a certain extent, the effect on improving model performance is limited due to the relatively weak correlation between the two. However, when the image is fused with audio and water waves respectively, the model performance is significantly improved, with accuracies reaching $9 4 . 6 5 \%$ and $9 5 . 2 1 \%$ respectively. This indicates that the image modality provides rich and intuitive feature information, and has strong complementarity with the audio and water wave modalities. Furthermore, the combination of the image, audio and water wave modalities achieves the best performance, making full use of the advantages of different modalities. Image information can make up for the deficiencies of audio and water wave modalities in feature expression, while audio and water wave modalities provide dynamic or specific environmental information that image modalities lack. The three complement each other and jointly enhance the model's ability to quantify fish feeding intensity.
# 4.3.4 Ablation experiment results
To explore the impact of the proposed feature and decision fusion modules on the performance of the multimodal fish feeding intensity quantification model, this paper conducted detailed ablation experiments, including no improvement (modal features using Concat fusion (Du et al., 2024)), independent improvement strategies (using only ARPM or ER), and combined improvement strategies (using both ARPM and ER simultaneously). The specific results are shown in Table 4.
Table 4. Ablation experiment results of the MAINet.
Note: √ indicates that the corresponding improvement strategy was adopted, while $\times$ indicates that it was not adopted.
As can be seen from Table 4, both proposed improvement strategies play a positive role in improving the performance of the MAINet. Among them, ARPM has a particularly significant effect on improving the model performance. Compared with the traditional concatenated fusion, the method of extracting and fusing features simultaneously can mine the effective information between different modalities more efficiently. Specifically, after adopting ARPM, the accuracy of models using image, audio and water wave modalities as the primary modality increase by $7 . 1 9 \%$ , $2 . 4 \%$ and $7 . 3 2 \%$ , respectively, which are more significant performance improvements compared to using a single modality. Among all the evaluation indicators, Recall has the largest improvement, which shows that ARPM plays a crucial role in enhancing the model's recognition ability for positive samples. The experimental results also reveal that there are significant differences in model performance when different modalities are used as the primary modality, indicating that the selection of the primary modality has a high sensitivity to the results. Additionally, the model using only ER reaches an accuracy of up to $8 9 . 4 4 \%$ . ER effectively reduces conflicts between modalities through dynamic weight allocation and rule optimization, thereby improving model performance, which further validates the necessity of decision fusion. Ultimately, with the synergistic effect of ARPM and ER, the MAINet achieves excellent performance in Accuracy, Precision, Recall and F1-Score, reaching $9 6 . 7 6 \%$ , $9 6 . 7 9 \%$ , $9 6 . 7 9 \%$ and $9 6 . 7 9 \%$ respectively, significantly enhancing the stability and robustness of the model.
# 4.3.5 Decision fusion method
This study further compares different decision fusion methods to verify the effectiveness of the ER in the fusion stage of fish feeding intensity quantification results, including Majority Voting (MV), Probability Averaging (PA), Learning-based Fusion (LF) (this study employs a fully connected layer), Dempster–Shafer evidence theory (DST) (K. Zhao et al., 2022), and ER. The comparison results are shown in Figure 9.
As shown in Figure 9, the ER performs best in the fusion stage of the quantitative results of fish feeding intensity. Compared with MV, PA, LF and DST, the ER improves Accuracy by $2 . 5 3 \%$ , $0 . 7 \%$ , $0 . 7 \%$ and $1 . 9 7 \%$ , respectively; Precision by $2 . 5 \%$ , $0 . 7 \%$ , $0 . 6 9 \%$ and $1 . 9 6 \%$ , respectively; Recall by $2 . 2 9 \%$ , $0 . 6 9 \%$ , $0 . 6 8 \%$ and $1 . 9 7 \%$ , respectively; and F1-Score by $2 . 4 \%$ , $0 . 7 \%$ , $0 . 6 9 \%$ and
$1 . 9 7 \%$ , respectively. Among these, MV performs relatively poorly on all evaluation metrics, which may be due to its reliance on majority opinion and is only applicable to simple situations or situations with high consistency between information sources. The limitation is more obvious in the application scenario of this paper. In contrast, PA averages the probabilities from various sources to better integrate information and improve the accuracy of judgments. Although LF can achieve good accuracy by performing secondary learning on the results, the improvement is not significant. DST achieves good results on multiple evaluation metrics. Although it is slightly lower than LF, it is still an effective fusion method, especially when dealing with uncertain information. Additionally, the overall performance of ER is better than that of DST, which shows that the introduction of weight and reliability has a positive effect on decision fusion and effectively solves the evidence conflict phenomenon that DST cannot solve. Although ER achieves the best performance, PA is a good alternative if computational complexity or implementation difficulty is considered.
Figure 9. Performance comparison of different decision fusion methods. | In recirculating aquaculture systems, accurate and effective assessment of
fish feeding intensity is crucial for reducing feed costs and calculating
optimal feeding times. However, current studies have limitations in modality
selection, feature extraction and fusion, and co-inference for decision making,
which restrict further improvement in the accuracy, applicability and
reliability of multimodal fusion models. To address this problem, this study
proposes a Multi-stage Augmented Multimodal Interaction Network (MAINet) for
quantifying fish feeding intensity. Firstly, a general feature extraction
framework is proposed to efficiently extract feature information from input
image, audio and water wave datas. Second, an Auxiliary-modality Reinforcement
Primary-modality Mechanism (ARPM) is designed for inter-modal interaction and
generate enhanced features, which consists of a Channel Attention Fusion
Network (CAFN) and a Dual-mode Attention Fusion Network (DAFN). Finally, an
Evidence Reasoning (ER) rule is introduced to fuse the output results of each
modality and make decisions, thereby completing the quantification of fish
feeding intensity. The experimental results show that the constructed MAINet
reaches 96.76%, 96.78%, 96.79% and 96.79% in accuracy, precision, recall and
F1-Score respectively, and its performance is significantly higher than the
comparison models. Compared with models that adopt single-modality,
dual-modality fusion and different decision-making fusion methods, it also has
obvious advantages. Meanwhile, the ablation experiments further verified the
key role of the proposed improvement strategy in improving the robustness and
feature utilization efficiency of model, which can effectively improve the
accuracy of the quantitative results of fish feeding intensity. | [
"cs.CV",
"cs.AI",
"cs.ET"
] |
# 1 Introduction
As Large Language Models (LLMs) continues to scale, LLMs exhibit emergent In-Context Learning (ICL) capabilities (Brown et al., 2020), enabling them to perform target tasks by conditioning on a few exemplars without any additional parameter updates. Furthermore, the use of Chain-of-Thought (CoT) exemplars (Wei et al., 2022) in ICL guides models to reason step-by-step. This approach is commonly referred to as Few-shot CoT. Kojima
GSM8K 100 Accuracy 80 60 MATH 670 1 50 Qwen2.5-7B Qwen2.5-72B 8shot Fast-votek Qwen-8shot Topk R1-4shot Zero-shot
et al. (2022) further showed that simply appending the instruction “Let’s think step by step” can trigger multi-step reasoning even without exemplars, giving rise to the Zero-shot CoT paradigm, an overview of them is shown in Figure 2.
Existing research primarily focuses on how the quality, order, and number of exemplars influence ICL performance, proposing various strategies for exemplar construction and selection to enhance model performance across different task settings (Lu et al., 2022; Chen et al., 2023; Kim et al., 2022; Purohit et al., 2024). In addition, several studies have investigated the underlying mechanisms and influencing factors of ICL from either theoretical or empirical perspectives (Ren and Liu, 2024; Xie et al., 2022; Min et al., 2022; Wei et al., 2023; Wang et al., 2023). However, most of these strategies and experimental conclusions are based on earlier, weaker models. As foundation models become increasingly powerful, it is necessary to revisit a central question: In mathematical reasoning tasks, can CoT exemplars still improve the reasoning performance of recent strong models?
Demonstration
Question: $^ +$ “Please reasoning step by step ...” $^ +$ Answer: {𝑨𝟏} CoT-Answer
Question: $\cdot$ + “Please reasoning step by step ...” $^ +$ Answer: $\left\{ A _ { 2 } \right\}$ Step 1 ... Step 2 ...
Question: $\cdot$ + “Please reasoning step by step ...” $^ +$ Answer: $\bigstar$ ⋮ Step n ... So the final
Test Input answer is xxx
Question: {𝑸𝒕𝒆𝒔𝒕} + “Please reasoning step by step ...” + Answer: LLM
In this paper, we aim to investigate the actual role of CoT exemplars in mathematical reasoning tasks. We conduct systematic experiments on two representative math reasoning datasets, GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), using several recent open-source LLMs. We first identify a common evaluation bias in open-source evaluation frameworks (Contributors, 2023; Lambert et al., 2024) in GSM8K, which significantly underestimates the performance of Zero-shot CoT, as discussed in Section 4. After correcting for this issue, we compare Few-shot CoT with Zero-shot CoT prompting. Our results show that recent strong models already exhibit strong reasoning capabilities under the Zero-shot CoT setting, and the primary role of Few-shot CoT exemplars is to align the output format with human expectations. Subsequent analysis confirms that adding traditional CoT exemplars does not improve reasoning performance (See Section 5.1). Inspired by recent advances in reasoning models with more sophisticated capabilities (Guo et al., 2025; Jaech et al., 2024), we then examine the effectiveness of enhanced CoT demonstrations constructed using answers generated by advanced models such as Qwen2.5-Max and DeepSeek-R1. Experimental results indicate that, regardless of enhancement, models tend to ignore the content of exemplars in mathematical reasoning tasks and fail to acquire advanced capabilities such as self-reflection (See Section 5.3). As a result in figure 1, CoT exemplars do not lead to improved reasoning performance in recent models.
To summarize, our main empirical findings in mathematical reasoning tasks are as follows:
1. The primary function of CoT exemplars is to align the output format, and this effect persists regardless of the model’s reasoning ability.
2. Traditional CoT exemplars do not enhance the reasoning performance of strong models, although they may benefit weaker models.
3. Enhanced CoT exemplars also fail to improve reasoning ability in strong models, as these models tend to ignore the CoT content.
# 2 Related Work
CoT Prompting ICL enables LLMs to perform tasks without fine-tuning (Brown et al., 2020), but it often falls short in complex reasoning scenarios.
To address this, CoT prompting (Wei et al., 2022) introduces intermediate reasoning steps to guide model outputs. Building on CoT, researchers have proposed various extensions to enhance reasoning capabilities. For instance, Tree-of-Thought (Yao et al., 2023) generalizes CoT to tree-structured reasoning, while Graph-of-Thought (Besta et al., 2024) further expands it to graph-based structures. The Least-to-Most framework (Zhou et al., 2023) decomposes complex problems into simpler subproblems and solves them sequentially.
Exemplar Selection In addition to improving CoT itself, numerous studies have explored how exemplar quality, quantity, diversity, and ordering affect ICL performance (Lu et al., 2022; Li et al., 2023; Ma et al., 2023; Zhang et al., 2022). A variety of exemplar selection strategies have been proposed. Fu et al. (2023) recommend selecting exemplars with higher reasoning complexity (i.e., involving more intermediate steps), while Hongjin et al. (2022) emphasize diversity and introduce the VoteK algorithm. Other representative methods include DPP (Ye et al., 2023a), which formulates selection as a subset optimization problem; MMR (Ye et al., 2023b), which balances relevance and diversity via marginal relevance scoring; and EXPLORA (Purohit et al., 2024), which evaluates exemplar subsets without relying on confidence score from models.
Understanding CoT Prompting Beyond methodology, a growing body of research has sought to understand the mechanisms behind ICL and CoT prompting. Theoretical investigations (Dai et al., 2023; Li et al., 2024; Ren and Liu, 2024; Mahankali et al., 2023) offer insights into the learning dynamics of ICL, while empirical studies probe the effectiveness of CoT. For instance, Min et al. (2022) suggest that exemplars primarily provide distributional rather than semantic information—though their analysis is limited to classification tasks. In the context of reasoning, Levy et al. (2024) report that longer input contexts may hurt performance, and Sprague et al. (2025) find that the benefits of CoT are mainly confined to mathematical and logical reasoning.
Figure 3: Accuracy of different models on the GSM8K dataset under varying numbers of exemplars. Few-shot examples are taken from Wei et al. (2022). Only Zero-shot-fixed applies evaluation bias correction, as described in Section 4; all other settings retain uncorrected results for comparison.
Figure 4: Accuracy of different models on the GSM8K dataset under various ablation settings. Replace_Q denotes replacing the question in each exemplars with “xxx”. Replace_QA replaces both the question and answer with “xxx” but retains the final phrase “So the answer is ...”. Replace_ALL replaces the question, answer, and the final phrase with “xxx”. See figure 16, 17, and 18 for input examples, respectively. Other settings follow those in Figure 3.
Our work complements these lines of research through a systematic empirical study on mathematical reasoning. While prior studies have provided important insights, they are mostly based on earlier and weaker models, whose conclusions may not fully extend to recent, stronger models. We find that, for recent strong models, CoT exemplars primarily function to align output format rather than enhance reasoning ability. This challenges the prevailing assumption that CoT-based ICL reliably improves performance in math reasoning tasks.
# 3 Experimental Setup
Models To thoroughly validate our conclusions, we evaluate a variety of open-source language models, including the Qwen2.5 series (ranging from 0.5B to 72B parameters) (Yang et al., 2024), the LLaMA3 series (1B to 70B) (Grattafiori et al., 2024), the Gemma2 series (2B and 9B) (Team et al., 2024), and Ministral-8B (Mistral AI, 2024). In addition, to examine the effectiveness of CoT prompting on earlier and weaker models, we include
LLaMA2-7B (Touvron et al., 2023) and Qwen7B (Bai et al., 2023) for comparative analysis. All models used in our experiments are instructiontuned variants. More details can be found in Appendix A.1.
Datasets We focus on mathematical reasoning tasks and conduct experiments on two datasets of varying difficulty: GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). To ensure accuracy, we perform inference and evaluation on the full test sets of both datasets and report the complete results. More details can be found in Appendix A.2.
Environment and Hyperparameters We utilize the open-source inference framework OpenCompass (Contributors, 2023) and vLLM (Kwon et al., 2023) as the backend to run all experiments. Notably, all experiments incorporate a CoT instruction in the prompt: "Please reason step by step, and put your final answer within \boxed{}." For reproducibility, all experiments are conducted using a fixed random seed of 42. Notably, since greedy decoding is deterministic, the fixed seed does not influence the inference results under a fixed hardware setup. Hence, we do not report the mean or standard deviation of the results. More details can be found in Appendix A.3.
Figure 5: Accuracy of different models under various retrieval methods with a fixed number of 8 retrieved exemplars. The top figure shows results on the MATH dataset, and the bottom figure shows results on the GSM8K dataset.
# 4 Exemplars Help Mitigate Evaluation Bias
Evaluation Bias in GSM8K Existing evaluation frameworks for GSM8K (e.g., OpenCompass (Contributors, 2023), Open-Instruct (Lambert et al., 2024)) typically extract the last number from model outputs as the predicted answer. However, in Zero-shot CoT prompting, answers are often enclosed in “\boxed{}" expressions. This mismatch leads to misjudgments during evaluation, as illustrated in Figure 20. To address this, we modify the evaluation script to extract the number inside \boxed{}, reducing artificially low accuracy caused by output-format misalignment. We consider this a form of evaluation bias that affects fair assessment, either due to oversight or simplification.
Exemplars Aid Format Alignment As shown in Figure 3, after correcting the evaluation method, the Zero_shot_fixed setting yields substantial gains, surpassing all others. This indicates that the original poor performance of Zero_shot stems not from reasoning limitations, but from output-evaluation mismatch. Moreover, Few_shot consistently outperforms Zero_shot, suggesting that exemplars help standardize output format and improve answer extraction. Thus, in math reasoning tasks, the primary benefit of exemplars lies in aligning the model’s output format. Interestingly, for Mistral-8B, exemplars can induce overfitting to simplified reasoning paths, diminishing their effectiveness.
Complete Answer Structure is Key Factor Ablation results in Figure 4 show a consistent performance drop as more content is masked—from Replace_Q to Replace_QA to Replace_All. This highlights the importance of preserving the full answer structure for effective format alignment. Even partial cues (e.g., “So the answer is . . . ”) prove beneficial, whereas fully removing informative content reverts performance to the Zero_shot baseline. This confirms that exemplars primarily guide answer formatting rather than reasoning itself.
# 5 CoT Exemplars can’t improve reasoning ability of strong models
The preceding sections have shown that the primary contribution of exemplars lies in aligning the output format rather than enhancing reasoning ability. However, since we previously used a fixed set of 8 exemplars, an open question remains: Can exemplars improve the reasoning ability of recent LLMs if we consider different impact factors such as retrieval method, model’s intrinsic ability and the quality of exemplars?
# 5.1 The Impact of the retrieval method
In this section, we revisit the classical CoT prompting paradigm, where in-context exemplars are retrieved from the training set of the original dataset. This setup aligns with prior work and allows us to evaluate whether recent LLMs still benefit from exemplars under this traditional configuration. To ensure consistency, we apply our corrected evaluation method across a variety of models and compare their performance on GSM8K and MATH using several established exemplar selection strategies, including Complexity-based (Fu et al., 2023), Fast-Votek (Hongjin et al., 2022), DPP (Ye et al., 2023a), MMR (Ye et al., 2023b), and EXPLORA (Purohit et al., 2024), along with simple TopK and Random baselines.
Figure 6: Accuracy of different weaker models under various retrieval methods with a fixed number of 8 retrieved exemplars. The top figure shows results on the MATH dataset, and the bottom figure shows results on the GSM8K dataset.
Retrieval-Based Methods Fall Short of ZeroShot Performance We uniformly retrieve 8 exemplars for each selection method and report the results in Figure 5. Across most configurations—regardless of model or dataset— Few-shot performance with retrieval-based methods is comparable to or worse than the Zero-shot baseline. This observation suggests that for advanced language models, in-context exemplars do not enhance reasoning ability, but primarily function is align output formats. Notably, there are a few exceptions. For example, LLaMA3.1-8B exhibits marginal improvements under the 8-shot setting. However, we attribute this to inherent experimental variance rather than genuine reasoning gains. A detailed analysis is provided in Appendix A.4.
Varying the Number of Exemplars Still Fails to Surpass Zero-Shot Given that using 8 retrieved exemplars often fails to outperform the Zero-shot baseline, we further investigate the impact of varying the number of in-context exemplars. As shown in Figure 7, Zero-shot prompting achieves the highest accuracy in most settings. Nevertheless, certain retrieval methods occasionally yield slightly better performance, particularly on GSM8K. For example, the Complexity-based retrieval method marginally outperforms Zero-shot when retrieving 4 or 6 exemplars on two different models. However, the improvements are minimal—around $0 . 2 \%$ in accuracy. It can be reasonably attributed to inherent evaluation variance. Such small fluctuations are more likely to occur on relatively simpler datasets like GSM8K. In contrast, on the more challenging MATH dataset, nearly all retrieval-based configurations consistently underperform relative to the Zero-shot baseline.
Figure 7: Accuracy variation with different numbers of retrieved exemplars under various retrieval methods, evaluated using Qwen2.5-7B and $\mathrm { Q w e n } 2 . 5 { \cdot } 7 2 \mathrm { B }$ . The top figure shows results on the MATH dataset, and the bottom figure shows results on the GSM8K dataset.
Overall, these results reinforce the conclusion that Zero-shot prompting remains the most effective approach in the vast majority of cases. This supports the emerging perspective that traditional CoT prompting paradigms no longer significantly enhance the reasoning capabilities of recent LLMs.
# 5.2 The Impact of exemplars Is Determined by the Model’s Intrinsic Capability
In the previous experiments, we observed that in-context exemplars do not enhance the reasoning ability of recent models such as Qwen2.5 series. Does this contradict earlier findings from exemplar selection studies, such as those by Fu et al. (Fu et al., 2023)? To further investigate the role of exemplars, we conducted experiments on relatively weaker models. Specifically, we evaluated a set of smaller but recent models (LLaMA3.2-1B, LLaMA3.2-3B, Qwen2.5-0.5B, Qwen2.5-1.5B), as well as several older models (LLaMA3-8B, LLaMA2-7B, Qwen-7B). The same prompt templates were used as in previous experiments, and all model responses were post-processed to eliminate evaluation artifacts and isolate the true effect of exemplars.
Figure 8: Accuracy under different numbers of exemplars when using DeepSeek R1 responses(marked as R1-nshot) and Qwen2.5-max responses(marked as Qwen-nshot) as exemplars. The left figure shows results on the MATH dataset, and the right figure shows results on the GSM8K dataset.
Since all outputs were corrected prior to evaluation, the only potential benefit of in-context exemplars in this experiment lies in improving reasoning ability, not output alignment. As shown in Figure 6, model performance varies significantly. For relatively strong models such as LLaMA3.2-3B and Qwen2.5-1.5B, the Zero_shot setting yields the highest accuracy, indicating that adding exemplars does not improve reasoning . This is consistent with our findings on stronger models, reaffirming that for capable models, exemplars primarily serve as output format guides rather than improve reasoning.
However, for weaker models (e.g., LLaMA3.2- 1B) and older models with larger parameter counts (e.g., LLaMA2-7B and Qwen-7B), we observe a significant improvement in accuracy when exemplars are provided. This suggests that for such models, in-context exemplars indeed help augment reasoning by supplying intermediate steps that the model struggles to generate on its own. We hypothesize that these weaker or older models lack the complex reasoning patterns that more recent models have acquired through pretraining and instruction tuning, and thus rely more heavily on external exemplars.
Therefore, we conclude that the effectiveness of CoT exemplars depends on the model’s inherent capabilities. Traditional CoT exemplars do not improve the reasoning ability of already-strong models but can play a supportive role for weaker models. Hence, our findings are not in conflict with previous work; rather, they offer a complementary perspective by showing that the utility of exemplars is model-dependent.
# 5.3 Is traditional CoT exemplars too easy for strong models?
Previous experiments suggest that traditional CoT prompting strategies are largely ineffective for current open-source LLMs. A natural intuition is that the implicit reasoning paths embedded in standard CoT exemplars may be less sophisticated than the models’ own Zero-shot reasoning capabilities. This raises an important question: can enhanced CoT exemplars benefit these strong models?
With the emergence of high-performing Reasoning Large Language Models (RLLMs) such as OpenAI o1 (Jaech et al., 2024) and DeepSeek R1 (Guo et al., 2025), Long Chain of Thought have shown potential in guiding model reasoning. Motivated by this, we consider two enhanced settings: (1) using responses from DeepSeek-R1 as exemplars, and (2) using responses from a stronger LLM, Qwen2.5- Max, as exemplars. We conduct experiments across the Qwen2.5 family of models (7B, 14B, and 72B). Detailed examples of the input formats are provided in Appendix A.6.
Quality Helps, but Zero-Shot Still Dominates For each enhanced configuration, we further vary the number of exemplars. Due to the relatively long responses generated by DeepSeek-R1, we accordingly limit the number of exemplars to a maximum of four shots to ensure comparability in input length. The corresponding results are shown in
Figure 9: Ablation study on noise injection for three types of exemplars, shown from left to right: exemplars answered by Qwen-max, exemplars answered by R1, and traditional CoT (Chain-of-Thought) exemplars. The top figure shows results on the MATH dataset, and the bottom figure shows results on the GSM8K dataset. Base denotes the original exemplars without noise. Noise50 randomly replaces $50 \%$ of the tokens with “XXX”. Shuffle completely shuffles the words. Replace-xxx replaces all words with “XXX”.
Figure 8. We observe that enhanced exemplars generally outperform the standard 8-shot CoT setting. In certain configurations, performance may even exceed that of the Zero-shot baseline, such as Qwen2.5-72B on the MATH dataset with the Qwen6shot setting. Nevertheless, Zero-shot prompting consistently achieves strong accuracy across both datasets without introducing additional context overhead. These findings indicate that while improving exemplar quality is indeed helpful, the reasoning capability of modern large language models is already sufficiently strong that changes in exemplar formatting yield only limited or no improvement over Zero-shot prompting.
# 6 Why CoT exemplars is not useful for strong models?
In this section, we further investigate the reasons behind the ineffectiveness of CoT exemplars. We begin with ablation studies, followed by an analysis of attention visualization results.
# 6.1 Ablation Study on Noisy Exemplars
To further investigate why exemplars fail to improve performance, we conduct ablation experiments across three types of CoT exemplars: Traditional CoT, R1-enhanced CoT (from DeepSeekR1), and Qwen2.5-Max-enhanced CoT. Specifically, for the R1-enhanced configuration, we use 4-shot exemplars, while 8-shot is used for the other settings. We introduce varying levels of noise into the exemplars and evaluate their impact on model performance. Experiments are conducted on the Qwen2.5 series (7B, 14B, and 72B) across both the GSM8K and MATH datasets.
Exemplars Are Not Crucial for Recent LLMs As shown in Figure 9, we observe that in most settings, adding noise to the exemplars does not lead to significant performance degradation. This is especially evident for the larger Qwen2.5-72B model, where even the Noise50 configuration can match or slightly outperform the Base setting. These findings suggest that the models may selectively ignore the exemplars and instead rely on their intrinsic reasoning ability. Thus, the performance observed under Few-shot settings may not arise from the informative content of the exemplars, but rather from the model’s inherent Zero-shot capabilities.
# 6.2 Attention Visualization
The previous results suggest that neither standard CoT prompts nor enhanced exemplars substantially improve model reasoning, and that models may not actively attend to these exemplars during inference. To investigate this further, we analyze the attention distribution of the Qwen2.5-7B model on GSM8K under Few-shot settings. Transformer-based models (Vaswani et al., 2017) rely on multi-head selfattention, where each head in each layer computes a separate attention matrix. We randomly select a test instance and visualize head 0 in the final (27th) layer. Full visualizations are provided in Appendix A.5.
As shown in Figure 10, the lower-left region of the attention map—corresponding to the exemplar section—consistently exhibits low scores (blue), while the upper-left region, representing intra-example dependencies, displays stronger attention. The red and green lines mark the ends of the exemplar section and input sequence, respectively; generation begins after the green line.Each attention row reflects how a generated token attends to prior tokens. The weak attention to the exemplars (before the red line) and strong focus on the prompt and test question (between the red and green line) indicate that the model largely ignores exemplars during inference, relying more on the prompt template.
Figure 10: Attention visualizations under various settings. The red line indicates the end of the exemplar section, and the green line marks the end of the entire input. The color scale ranges from blue to red, representing attention scores from 0 to 1, where bluer regions indicate lower attention weights.
Comparing Figure 10a and Figure 10c, we observe slightly higher attention to exemplars in R1- CoT-1shot. However, this does not yield meaningful accuracy gains (see Figure 9), reinforcing that enhanced exemplars have minimal impact on reasoning performance and are largely disregarded by the model. | In-Context Learning (ICL) is an essential emergent ability of Large Language
Models (LLMs), and recent studies introduce Chain-of-Thought (CoT) to exemplars
of ICL to enhance the reasoning capability, especially in mathematics tasks.
However, given the continuous advancement of model capabilities, it remains
unclear whether CoT exemplars still benefit recent, stronger models in such
tasks. Through systematic experiments, we find that for recent strong models
such as the Qwen2.5 series, adding traditional CoT exemplars does not improve
reasoning performance compared to Zero-Shot CoT. Instead, their primary
function is to align the output format with human expectations. We further
investigate the effectiveness of enhanced CoT exemplars, constructed using
answers from advanced models such as \texttt{Qwen2.5-Max} and
\texttt{DeepSeek-R1}. Experimental results indicate that these enhanced
exemplars still fail to improve the model's reasoning performance. Further
analysis reveals that models tend to ignore the exemplars and focus primarily
on the instructions, leading to no observable gain in reasoning ability.
Overall, our findings highlight the limitations of the current ICL+CoT
framework in mathematical reasoning, calling for a re-examination of the ICL
paradigm and the definition of exemplars. | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
# 1 Introduction
In recent years GPUs have emerged as mainstream processing units, more than just accelerators [73,67,66,29]. Modern GPUs provide support for more finegrained shared memory access patterns, allowing programmers to optimize performance beyond the traditional lock-step execution model typically associated with SIMT architectures. To this end, GPU programming languages such as CUDA and OpenCL [2,5], as well as libraries [4,3], have adopted C/C++ shared memory concurrency primitives.
Writing correct and highly efficient shared-memory concurrent programs is already a challenging problem, even for CPUs. GPU concurrency poses further challenges. Unlike CPU threads, the threads in a GPU are organized hierarchically and synchronize via barriers during execution. Moreover, shared-memory accesses are scoped, resulting in more fine-grained rules for synchronization, based on the proximity of their threads. Although these primitives and rules play a key role in achieving better performance, they are also complex and prone to errors.
GPU concurrency may result in various types of concurrency bugs – assertion violations, data races, heterogeneous races, and barrier divergence. While assertion violations and data race errors are well-known in CPU concurrency, they manifest in more complicated ways in the context of GPU programs. The other two types of errors, heterogeneous races and barrier divergence, are GPU specific. To catch these errors, it is imperative to explore all possible executions of a program.
The set of possible executions of a GPU concurrent program is determined by its underlying consistency model. State-of-the-art architectures including GPUs follow weak consistency, and as a result a program may exhibit extra behaviors in addition to the interleaving executions or more formally sequential consistency (SC) [49]. However, as the weak memory concurrency models in GPUs differ from the ones in the CPUs, the state-of-the-art analysis and verification approaches for programs written for CPUs do not suffice in identifying these errors under GPU weak memory concurrency. As a result, automated reasoning of GPU concurrency, particularly under weak consistency models, even though a timely and important problem, has remained largely unexplored.
To address this gap, in this paper we develop the GPUMC model checker for a scoped-C/C++ programming languages [61] for GPUs. Scoped-C/C++ has all the shared memory access primitives provided by PTX and Vulkan, and in addition, provide SC memory accesses. The recent work of [61] formalizes the scoped C/C++ concurrency in scoped-RC11 memory model (SRC11), similarly to the formalization of C/C++ concurrency in RC11 [48]. Consequently, GPUMC is developed for the SRC11 model. The consistency properties defined by SRC11, scoped C/C++ programming language follows catch fire semantics similar to traditional C/C++, that is, a program having a SRC11 consistent execution with a data race has undefined behavior. In addition, scoped C/C++ defines heterogeneous race [61,78,30,35] based on the scopes of the accesses, and a program having a SRC11-consistent execution with heterogeneous race also has undefined behavior.
Stateless Model Checking (SMC) is a prominent automated verification technique [23] that explores all possible executions of a program in a systematic manner. However, the number of executions can grow exponentially larger in the number of concurrent threads, which poses a key challenge to a model checker. To address this challenge, partial order reduction (POR) [24,32,68] and subsequently dynamic partial order reduction (DPOR) techniques have been proposed [28]. More recently, several DPOR algorithms are proposed for different weak memory consistency models to explore executions in a time and space-efficient manner [43,64,10,7,6,83]. For instance, GenMC-Trust [44] and POP [8] are recently proposed polynomialspace DPOR algorithms. While these techniques are widely applied for programs written for CPUs (weak memory) concurrency models [43,64,10,7,45,44], to our knowledge, DPOR-based model checking has not been explored for GPU weak memory concurrency.
GPUMC extends the GenMC-TruSt [44] approach to handle the GPU-specific features that the original GenMC lacks. More specifically, GPUMC implements an exploration-optimal, sound, and complete DPOR algorithm with linear memory requirements that is also parallelizable. Besides efficient exploration, GPUMC detects all the errors discussed above and automatically repairs certain errors such as heterogenous races. Thus GPUMC progressively transforms a heterogeneousracy program to generate a heterogeneous-race-free version. We empirically evaluate GPUMC on several benchmarks to demonstrate its effectiveness. The benchmarks range from small litmus tests to real applications, used in GPU testing [77,51], bounded model checking [52], and verification under sequential consistency [40,39]. GPUMC explores the executions of these benchmarks in a scalable manner and identifies the errors. We compare GPUMC with Dartagnan [78], a bounded model checker for GPU weak memory concurrency [52]. GPUMC identifies races which are missed by Dartagnan in its benchmarks and also outperforms Dartagnan significantly in terms of memory and time requirements in identifying concurrency errors.
Contributions $\&$ outline To summarize, the paper makes the following contributions. §2 and §3 provide an overview of GPU weak memory concurrency and its formal semantics. Next, §4 and §5 discuss the proposed DPOR algorithm and its experimental evaluation. Finally, we discuss the related work in §6 and conclude in §7.
# 2 Overview of GPU Concurrency
A shared memory GPU program consists of a fixed set of threads with a set of shared memory locations and thread-local variables. Unlike in the CPU, the GPU threads are structured in hierarchies at multiple levels: cooperative thread array(CTA) (cta), GPU (gpu), and system (sys), where cta is a collection of threads and gpu is a group of cta, and finally sys consists of a set of gpus and threads of other devices such as CPUs. Thus, a thread can be identified by its (cta, gpu) identifiers and its thread identifier. The system (sys) is the same for all threads.
Shared memory operations are one of read, write, atomic read-modify-write (RMW), fence (fnc) or barrier (bar). Similar to the $\mathrm { C / C + + }$ concurrency [37,36], these accesses are non-atomic read or write, or atomic accesses with memory orders. Thus accesses are classified as: non-atomic (na), relaxed (rlx), acquire (acq), release (rel), acquire-release (acq-rel), or sequentially consistent (sc). In increasing strength, na ⊏ rlx ⊏ {rel, acq} ⊏ acq-rel ⊏ sc.
The shared memory accesses of the GPU are further parameterized with a scope $\mathsf { s c o } \in \{ \mathsf { c t a } , \mathsf { g p u } , \mathsf { s y s } \}$ . The scope of an operation determines its role in synchronizing with other operations in other threads based on proximity. Thus, shared memory accesses are of the following form where $o _ { r }$ , $o _ { w }$ , $o _ { u }$ , $o _ { f }$ denote the memory orders of the read, write, RMW, and fence accesses respectively.
Fig. 1: Example of GPU concurrency errors. In (a), we have two threads $T _ { 1 } , T _ { 2 }$ from the CTAs $\mathsf { c t a } _ { 1 } , \mathsf { c t a } _ { 2 }$ . In (b) all threads are in the same CTA.
$$
r = X _ { o _ { r } } ^ { \mathrm { s c o } } \mid X _ { o _ { w } } ^ { \mathrm { s c o } } = E \mid r = \mathsf { R M W } _ { o _ { u } } ^ { \mathrm { s c o } } ( X , E _ { r } , E _ { w } ) \mid \mathsf { f n c } _ { o _ { f } } ^ { \mathrm { s c o } } \mid \mathsf { b a r } ^ { \mathrm { s c o } } ( \mathsf { i d } )
$$
A read access $r = X _ { o _ { r } } ^ { \mathsf { s c o } }$ returns the value of shared memory location/variable $X$ to thread-local variable $r$ with memory order $o _ { r }$ selected from $\{ \mathrm { N A , R L X , A C Q , S C } \}$ . A write access $X _ { o _ { w } } ^ { \mathsf { s c o } } = E$ writes the value of expression $E$ to the location $X$ with memory order $o _ { w }$ selected from $\{ \mathrm { N A , R L X , R E L , S C } \}$ . The superscript sco refers to the scope. An RMW access $r = \mathsf { R M W } _ { o _ { u } } ^ { \mathsf { s c o } } ( X , E _ { r } , E _ { w } )$ , atomically updates the value of location $X$ with the value of $E _ { w }$ if the read value of $X$ is $E _ { r }$ . On failure, it performs only the read operation. The memory order of an RMW is $o _ { u }$ selected from rlx, rel, acq, acq-rel, sc $\}$ . A fence access fnc is performed with a memory order $o _ { f }$ selected from rel, acq, acq-rel, sc . GPUs also provide barrier operations where a set of threads synchronize and therefore affect the behaviors of a program. For a barrier operation barsco(id), sco refers to the scope of the barrier and $i d$ denotes the barrier identifier. We model barriers as acquirerelease RMWs $( \mathsf { R M W } _ { \mathrm { A C Q - R E L } } ^ { \mathsf { s c o } } )$ parameterized with scope sco on a special auxiliary variable (similar to [46]).
# 2.1 GPU Concurrency Errors
Traditionally, two key errors in shared memory concurrency are assertion violations and data races. In addition, concurrent programs for GPUs may contain heterogeneous races and barrier divergence errors. The behavior of a program with data race or heterogeneous race is undefined, while divergence errors may lead to deadlocks [2, Section 16.6.2], [61], [78].
Assertion violation: In our benchmarks assertion violations imply weak memory bugs. Assertions verify the values of the variables and memory locations in a program. If the intended values do not match, it results in an assertion violation. Consider the program in Figure 1a having the assertion forall $b = 0$ ? which checks whether, for all executions, $b$ is 0. If the value of $X$ read into $a$ in ${ \sf T } _ { 2 }$ is $^ { 1 }$ , then $b$ cannot read a stale value $0$ from $X$ and the assertion fails.
Data race: Two operations $a$ and $b$ in an execution are said to be in a data race [61] [78] if (i) $a$ and $b$ are concurrent, that is, not related by happens-before, (ii) they access the same memory location, (iii) at least one of the accesses is a write operation, and (iv) at least one of the accesses is a non-atomic operation. In Figure 1a, if $\mathsf { c t a } _ { 1 } = \mathsf { c t a } _ { 2 }$ , the threads are in the same cta. In that case, if the acquire-read of $X$ in the second thread reads from the release-write in the first thread, then it establishes synchronization. Hence, the release-write of $X$ happens-before the non-atomic read of $X$ , and the program has no data race.
Heterogeneous race: Two operations $a$ and $b$ in an execution are in a heterogeneous race if (i) $a$ and $b$ are concurrent, (ii) they access the same memory location, (iii) at least one of the accesses is a write operation, and (iv) both accesses are atomic with non-inclusive scope, that is, the scopes of each access includes the thread executing the other access. Note that a heterogeneous race may take place between atomic accesses. In Figure 1a, if $\mathsf { c t a } _ { 1 } \neq \mathsf { c t a } _ { 2 }$ then the acquire-read and release-write do not synchronize and consequently are in a heterogeneous race. Then the program also has a data race between the non-atomic read of $X$ and release-write of $X$ .
Barrier divergence: Given a barrier, the threads within the given scope of the barrier synchronize. During execution, while a thread reaches the barrier, it waits for all the other threads to reach the barrier before progressing the execution further. Consider the program in Figure 1b, where all threads execute the function ${ \mathsf { f } } ( )$ . The threads with even thread identifiers synchronize to bar(1) and the thread with odd thread identifiers synchronize to bar(2). Hence the threads are diverging and not synchronizing to a single barrier. Modern GPUs consider it as a divergence error as the non-synchronizing threads may result in a deadlock. Following the definition from [2, Section 16.6.2], we report barrier divergence if at least one of the threads participating in the barrier is blocked at the barrier at the end of execution (no next instruction to execute).
# 3 Formal Semantics
In this section, we elaborate on the formal semantics of GPU concurrency. A program’s semantics is formally represented by a set of consistent executions. An execution consists of a set of events and various relations between the events.
Events An event corresponds to the effect of executing a shared memory or fence access in the program. An event $e = \langle i d , \mathrm { t i d } , e v , \mathsf { l o c } , \mathsf { o r d } , \mathsf { s c o } , \mathsf { V a l } \rangle$ is represented by a tuple where $i d$ , tid, $_ { e v }$ , loc, ord, sco, Val denote the event identifier, thread identifier, memory operation, memory location accessed, memory order, scope, read or written value. A read, write, or fence access generates a read, write, or fence event. A successful RMW generates a pair of read and write events and a failed RMW generates a read event. A read event $\mathsf { R } _ { o } ^ { \mathsf { s c o } } ( X , v )$ reads from location $X$ and returns value $v$ with memory order $o$ and scope sco. A write event $\mathsf { W } _ { o } ^ { \mathsf { s c o } } ( X , v )$ writes value $v$ to location $X$ with memory order $o$ and scope sco. A fence event $\mathsf { F } _ { o } ^ { \mathsf { s c o } }$ has memory order $o$ and scope sco. Note that for a fence event, $\mathsf { l o c } = \mathsf { V a l } = \perp$ . The set of read, write, and fence events are denoted by R, $\mathsf { W }$ , and $\sf { F }$ respectively.
Relations The events of an execution are associated with various relations. The relation program-order (po) denotes the syntactic order among the events. In each thread po is a total order. The relation reads-from (rf) relates a pair of same-location write and read events $w$ and $r$ having the same values to denote that $r$ has read from $w$ . Each read has a unique write to read from ( $\mathsf { r f } ^ { - 1 }$ is a function). The relation coherence order (co) is a total order on the same-location write events. The relation rmw denotes a successful RMW operation that relates a pair of same-location read and write events $r$ and $w$ which are in immediate-po relation, that is, no other event $a$ exists such that $( r , a )$ and $( a , w )$ are in po relations. We derive new relations following the notations below.
Notation on relations Given a binary relation $B$ , we write $B ^ { - 1 }$ , $B ^ { ? }$ , $B ^ { + }$ , $B ^ { * }$ to denote its inverse, reflexive, transitive, reflexive-transitive closures respectively. We compose two relations $B _ { 1 }$ and $B _ { 2 }$ by $B _ { 1 } ; B _ { 2 }$ . Given a set $A$ , $\mathrm { \lfloor A \rfloor }$ denotes the identity relation on the set $A$ . Given a relation $B$ , we write $B = 1 0 0$ and $B \neq 1 0 0$ to denote relation $B$ on same-location and different-location events respectively. For example, ${ \mathsf { p o } } _ { = \mathsf { l o c } }$ relates a pair of same-location events that are po-related. Similarly, $p o _ { \neq | \infty }$ relates po-related events that access different locations. Relation from-read (fr) relates a pair of same-location read and write events $r$ and $w ^ { \prime }$ . If $r$ reads from $w$ and $w ^ { \prime }$ is co-after $w$ then $r$ and $w ^ { \prime }$ are in fr relation: fr $\triangleq { \mathsf { r f } } ^ { - 1 }$ ; co.
Execution $\&$ consistency An execution is a tuple $\mathcal { G } = \langle \mathsf { E } , \mathsf { p o } , \mathsf { r f } , \mathsf { c o } , \mathsf { r m w } \rangle$ consisting of a set of events $\mathsf { E }$ , and the sets of po, $\mathsf { r f }$ , co, and rmw relations. We represent an execution as a graph where the nodes represent events and different types of edges represent respective relations. A concurrency model defines a set of axioms or constraints based on the events and relations. If an execution satisfies all the axioms of a memory model then the execution is consistent in that memory model.
SRC11 consistency model We first explain the relations of the RC11 model [48] which is extended to SRC11 [61] for GPUs, defined in Figure 3.
RC11 relations Relation extended-coherence-order (eco) is a transitive closure of the read-from $( \boldsymbol { \mathsf { r f } } )$ , coherence order (co), and from read (fr) relations, that is, ${ \mathsf { e c o } } \triangleq ( { \mathsf { r f } } \cup { \mathsf { c o } } \cup { \mathsf { f r } } ) ^ { + }$ . Note that the eco related events always access the same memory location.
Relation synchronizes-with (sw) relates a release event to an acquire event. For example, when an acquire read reads from a release write then the pair establishes an sw relation. In general, sw uses release-sequence rseq that starts at
$\begin{array} { r l } { \mathsf { T } _ { 1 } \langle \mathsf { c t a } _ { i } , \mathbf { \Pi } _ { - } \rangle } & { \left| \mathsf { T } _ { 2 } \langle \mathsf { c t a } _ { j } , \mathbf { \Pi } _ { - } \rangle \right. \qquad [ X = Y = 0 ] \qquad } \\ { \left. X _ { \mathrm { R L X } } ^ { \mathrm { c t a } } = 1 ; \qquad \right| a = Y _ { \mathrm { a c Q } } ^ { \mathrm { c t a } } ; \qquad } & { \underset { \mathrm { W L X } } { \overset { \mathrm { C t a } } { \sim } } ( X , 1 ) \qquad \mathsf { \hat { R } } _ { \mathrm { A c Q } } ^ { \mathrm { c t a } } ( Y , 0 ) } \\ { \mathsf { Y } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } = 1 ; \qquad } & { \left| \mathsf { I f } ( a = = 1 ) \qquad \mathsf { p o } \right. \qquad } \\ { \qquad b = X _ { \mathrm { R L X } } ^ { \mathrm { c t a } } ; \quad } & { \mathsf { W } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } ( Y , 1 ) } \end{array}$ [X = Y = 0] Wcrtlax(X, 1) Rcatcaq(Y, 1) rf rf Wcrteal(Y, 1) Rcrtlax(X, 1) (a) SMP (b) (c)
i : [X = Y = 0] [X = Y = 0] i : [X = Y = 0]
co co
Wcrtlax(X, 1) Rcatcaq(Y, 1) Wcrtlax(X, 1) Rcatcaq(Y, 1) Wcrtlax(X, 1) Rcatcaq(Y, 1) rf rf swrf
Wcrteal(Y, 1) fr Rcrtlax(X, 0) Wcrteal(Y, 1) Rcrtlax(X, 1) Wcrteal(Y, 0) frRcrtlax(X, 0) (d) (e) (f)
a release store or fence event and ends at an acquire load or fence event with an intermediate chain of $\mathsf { r f }$ -related rmw relations. Finally, relation happens-before (hb) is the transitive closure of the po and sw relations.
To relate the SC memory accesses and fences, the RC11 model defines the scb relation. A pair of events $a$ and $b$ is in scb relation in one of these cases: (1) $( a , b )$ is in po, co, or fr relation. (2) $a$ and $b$ access the same memory location and are in hb relation, that is $\scriptstyle \mathtt { h b \_ l o c } ( a , b )$ holds. (3) $a$ has a different-location po-successor $c$ , and event $b$ has a different-location po-predecessor $d$ , and $( c , d )$ is in happens-before relation.
Based on the scb relation, RC11 defines $\mathsf { p s c } _ { \mathsf { b a s e } }$ and pscF. Relation $\mathsf { p s c } _ { \mathsf { b a s e } }$ relates a pair of SC (memory access or fence) events and pscF relates a pair of SC fence events. Finally, RC11 defines psc relation by combining $\mathsf { p s c } _ { \mathsf { b a s e } }$ and pscF relations.
RC11 to SRC11 The SRC11 model refines the RC11 relations with inclusion (incl). Relation $\operatorname { i n c l } ( a , b )$ holds when (i) $a$ and $b$ are atomic events, (ii) if the scope of $a$ or $b$ includes the thread of $b$ or $a$ respectively, and (iii) if both $a$ and $b$ access memory then they access the same memory location. Note that the incl relations are non-transitive, that is, $\operatorname { i n c l } ( a , b )$ and ${ \mathrm { i n c l } } ( b , c )$ does not imply an
$$
\begin{array} { r l } & { \mathrm { r e g e } \frac { \partial } { \partial \tau } \mathbf { W } ; \mathrm { p o } _ { \alpha = \alpha } \frac { \partial } { \partial \tau } \mathbf { W } ; ( \mathrm { f i n e d } \mathrm { \cap } \mathrm { r e f } ) \cdot \mathrm { r m } ^ { \alpha } } \\ & { \mathrm { p e t } \overset { \Delta } { = } \mathbb { E } ( \mathbb { F } _ { \leq : \tau } \mathrm { b o } ) ^ { 2 } } \\ & { \mathrm { p a c } \frac { \partial } { \partial \tau } \mathrm { P o } \mathrm { E } ; \mathrm { E } \mathrm { a } _ { \alpha = \alpha } } \\ & { \mathrm { S i v e } = \mathrm { p e r } \mathrm { i n e g } _ { \mathrm { i n } } / \mathrm { \partial r } \mathrm { p a c q } } \\ & { \mathrm { h p } \triangleq \mathrm { p o } \mathrm { i n c } { i n c } \mathrm { p t } \mathrm { p a c q } } \\ & { \mathrm { h p a c k } ( \mathrm { p o } \cup \mathrm { i n c } \mathrm { s t o } } \\ & { \mathrm { s c b } \frac { \partial } { \partial \tau } \mathrm { \partial \tau } \mathrm { b o } \mathrm { b a } _ { \mathrm { e p t } } \mathrm { \partial \cdot } \mathrm { b u } _ { \mathrm { p } } ^ { \prime } \mathrm { \partial \varphi } _ { \partial \varphi \omega } \mathrm { U h b } \mathrm { b } _ { \mathrm { - } \mathrm { b e } } \cup \mathrm { c o } \cup \mathrm { f f } \mathrm { \quad } } \\ & { \mathrm { p s c } \frac { \partial } { \partial \tau } \mathrm { E u c } \mathrm { i n f } \mathrm { F o r e } \mathrm { s t h } \mathrm { b } ^ { \prime } ; \mathrm { F a c } \mathrm { i n h } \mathrm { b } ^ { \prime } ; \mathrm { F a c } } \\ & { \mathrm { p s c } \frac { \partial } { \partial \tau } \mathrm { S e c } \mathrm { B a } _ { \mathrm { i n } } / \mathrm { s t } \mathrm { p c e } } \\ & { \mathrm { p s c } \frac { \partial } { \partial \tau } \mathrm { B a } _ { \mathrm { i n } } / \mathrm { s t } \mathrm { B e c } \mathrm { b } ^ { \prime } \mathrm { F o } \mathrm { i n } \mathrm { a v e r f i c } \mathrm { s t } \frac { \partial } { \partial \tau } \mathrm { \rbrace } \mathrm { s t } } \end{array}
$$
Fig. 3: SRC11 relations and axioms with some violation patterns.
$\mathrm { i n c l } ( a , c )$ relation. To see this, consider events $a , b , c$ having scopes $\mathsf { c t a } _ { 1 } , \mathsf { g p u } _ { 1 }$ and $\mathsf { c t a } _ { 2 }$ respectively where $\mathsf { c t a } _ { 1 } , \mathsf { c t a } _ { 2 }$ belong to GPU $\mathsf { g p u } _ { 1 }$ . Then we have $\operatorname { i n c l } ( a , b )$ and $\mathfrak { i n c l } ( b , c )$ but not $\mathsf { i n c l } ( a , c )$ .
Based on the incl relation, the rseq, sw, and hb relations are extended in the SRC11 model. In SRC11, the rf relation in the rseq and sw relations must also be in the incl relation. Note that, even then, the sw related events may not be in the incl relation. Finally, hb in SRC11 is the transitive closure of the po and incl-related sw relations.
SRC11 axioms An execution in SRC11 is consistent when it satisfies the axioms in Figure 3. The (Coherence) axiom ensures that the hb relation or the combination of hb and eco relations is irreflexive and does not create any cycle in the execution graph. The (Atomicity) axiom ensures that there is no intermediate event on the same memory location between a pair of events that are rmw-related. The $S C$ axiom forbids any cycle between the SC events which are both in the psc relation and the incl relation. Finally, the (No-Thin-Air) axiom forbids any cycle composed of po and rf relations. These axioms essentially forbid the patterns shown in Figure 3 in an execution graph. Among these scoped-RC11 axioms, (Atomicity) and (No-Thin-Air) are the same as those of RC11. The (Coherence) and (SC) axioms differ as they use more fine-grained incl relations for the scoped accesses.
Example Consider the program and its execution graphs in Figure 2. If $i \neq j$ , then the accesses on $Y$ do not synchronize, resulting in Figure 2d. If $i = j$ then the accesses on $Y$ synchronize which results in Figure 2c. The execution in Figure 2f is forbidden as it violates the (Coherence) axiom.
# 4 GPUMC : Model Checking under SRC11
In this section we discuss the GPUMC approach in §4.1 followed by a running example in §4.2. Finally, in §4.3 we discuss the soundness, completeness, and optimality of the proposed exploration algorithm.
# 4.1 DPOR Algorithm
GPUMC extends GenMC-TruSt and is in the same spirit as other well known dynamic partial order reduction (DPOR) algorithms [28,43,64,10,7,45,44].
It verifies a program by exploring all its executions in a systematic manner, ensuring that no execution is visited more than once. Like [44], our algorithm also takes only polynomial space.
Outline Algorithm 1 invokes the Explore procedure to explore the executions of input program under SRC11. The Explore procedure uses Algorithm 2 to enable a read operation to read-from possible writes and thereby explore multiple executions, Algorithm 3 to ensure no execution is explored more than once, and Algorithm 4 to identify and fix errors.
Explore procedure The Explore procedure explores executions $\mathcal { G }$ , starting from an empty execution $\mathcal { G } _ { \emptyset }$ where $E = \emptyset$ , as long as they are consistent for a given memory model, in this case SRC11 (see Lines 3 to 6 of Algorithm 1). Next, if some of the threads are waiting at a barrier, while all other threads have finished execution, then we observe a barrier divergence, and the execution is said to be Blocked. In a blocked execution,
# Algorithm 1: DPOR( )
Input: program 1 Explore( , ) 2 Procedure Explore( , ) 3 if PoRfAcyclic() then return 4 if Coherent() then return 5 if ViolateAtomicity() then return 6 if InclPscAcyclic() then return 7 if Blocked $( { \mathcal { G } } )$ then output “divergence in $\mathcal { G } "$ 8 switch $e \gets N e x t E v e n t ( \mathcal { P } , \mathcal { G } ) \mathrm { ~ } \dot { \mathbf { c } }$ o 9 case assertion violation do 10 output “Error in $\vec { \mathcal { G } } ^ { \mathfrak { N } }$ 11 case do // no next event 12 output “ ” 13 case $e = W ( x , v )$ do //add W(x, v) to $\mathcal { G }$ 14 CheckAndRepairRace $( \mathcal { G } , e )$ 15 ′ = addco( , , e) 16 $\mathrm { E x p z o R E } ( \mathcal { P } , \mathcal { G } ^ { \prime } )$ 17 $\mid \mathrm { D E L A Y E D R F S } ( \mathcal { G } , e )$ 18 case $e = R ( x , \underline { { \ O } } _ { - } )$ do 19 reversible(e) = true //Wx is set of writes on $x$ 20 for $w \in \mathsf { W } ^ { x }$ do //add rf from $w \in \mathsf { W } ^ { x }$ 21 G′ = addRF(G, w, e) 22 CheckAndRepairRace( ′, e) 23 Explore(P, G′)
different threads may be waiting at different barriers. In this case (line 7), we report the divergence and terminate. Otherwise, we continue exploration by picking the next event (line 8). This schedules a thread and the next enabled event of that thread. We use the total order $< _ { e x e }$ to denote the order in which events are added to the execution.
The exploration stops if an assertion is violated (line 10), or when all events from all threads are explored (line 12). The algorithm reports an error in the first case and in the second case outputs the graph $\mathcal { G }$ .
If the exploration is not complete and the current event $e$ is a write (line 13), then the procedure CheckAndRepairRace detects races due to events conflicting with $e$ (line 14), and also offers to repair them. On detecting a race, the algorithm chooses one of the following based on user choice – (i) announce the race and stop exploration, or (ii) announce the race and continue exploration, or (iii) announce the race and repair the race.
Apart from calling Explore recursively (Line 16) after adding the necessary co edges (line 15) to $\mathcal { G }$ , we check if $e$ can be paired with any existing read in $\mathcal { G }$ (line 17). These reads are called “reversible” as we can reverse their order in the execution by placing them after the writes they read from. On a read event $r$ , we consider all possible rfs for $r$ and extend the execution $\mathcal { G }$ to a new execution $\mathcal { G } ^ { \prime }$ (addRF, Line 21).
Algorithm 3: CheckOptimal( , Deleted, w, r)
DelayedRFs procedure The procedure pairs all reversible reads $r$ in $\mathcal { G }$ with all same-location write events $w$ (line 1) provided $r$ is not in the $\mathsf { p o U r f }$ prefix of $w$ in $\mathcal { G }$ (line 2), to preserve the (No-Thin-Air) axiom. Moreover, a new execution $\mathcal { G } ^ { \prime }$ is obtained from $\mathcal { G }$ where $r$ reads from $w$ (line 5), and all events between $r$ and $w$ which are not $\mathsf { p o U r f }$ before $w$ are deleted (line 3).
CheckOptimal procedure To ensure that no execution is explored twice, the CheckOptimal procedure ensures that all writes in the deleted set are comaximal wrt their location, and all reads in the deleted set read from co-maximal writes. This is done by lines 2 to 5 in CheckOptimal.
CheckAndRepairRace procedure We check for races while adding each write $w$ to the execution. For instance, assume that all the reads and writes have been explored (Line 1). For each event $e ^ { \prime }$ in this set which is not related to $w$ by $\mathsf { h b }$ , we check if any one of them is non-atomic to expose a data race. If both have atomic accesses, we check if they are not scope-inclusive to report a heterogeneous race (Line 3). Likewise, for each read event added, we consider all explored writes (line 2), and repeat the same check to expose a data race or a heterogeneous race.
In addition, we also have an option of repair. In Repair (line 6, CheckAndRepairRace), we either skip and return to Explore, or do the following repairs and terminate. First, if $e$ and $e ^ { \prime }$ respectively have atomic and non-atomic accesses with non-inclusive scopes, then we update their scope to make them inclusive : for instance, if $_ { e , e ^ { \prime } }$ are in different CTAs, we update their scopes to GPU-level. Second, if at least one of ${ \boldsymbol { e } } , { \boldsymbol { e } } ^ { \prime }$ is a non-atomic access, then we update the non-atomic access to relaxed atomic, and update the scopes so that $e$ , $e ^ { \prime }$ have the same scope to prevent a heterogeneous race between them later. However, currently, we do not repair on non-atomic location data types.
Comparison with State-of-the-Art. We discuss how our algorithm differs from the existing DPOR algorithms. The first departure comes in the Explore procedure where we perform consistency checking: Lines 3 to 6 are specific to the Scoped RC11 model which is not handled by any of the existing algorithms including the most recent [44,8], since none of them handle scoped models. The DelayedRFs procedure is standard in all DPOR algorithms and checks if we can pair reads with eligible writes which have been explored later. Next we have CheckOptimal, which ensures that we are optimal while exploring executions: here, the optimality check of [8] is tailored for sequential consistency; we extend the optimality checking algorithm for RC11 [44] to SRC11. While optimality is achieved by ensuring co-maximality on writes [44], there could be optimal co orderings that are inconsistent in the non-scoped setting, which are consistent in the scoped case which need to be considered to achieve completeness. This needed careful handling to achieve polynomial space just as [44]. Finally, our CheckAndRepairRace algorithm is novel and differs from all existing approaches as it reports and also repairs heterogeneous races.
# 4.2 Exploring the Executions of SEG
We now illustrate the GPUMC algorithm on program SEG as a running example. The assertion violation to check is exists( $a = 1 \land b = 1$ ). This program has 4 consistent executions under SRC11.
The exploration begins with the empty execution, with no events and all relations being empty. As we proceed with the exploration, we use numbers $^ { 1 , 2 , \dots }$ to denote the order in which events are added to the execution. Among the enabled events, we have the read from $Y$ , namely, $a = Y _ { \mathrm { N A } }$ in thread ${ \mathsf { T } } _ { 2 }$ and the write to $X$ in ${ \sf T } _ { 1 }$ . We add two events for these accesses to the execution (lines 18, 21, 13 in Explore). The read on $Y$ has only the initial value 0 to read from; this is depicted by the $\mathsf { r f }$ edge to $^ 1$ , obtaining $\mathcal { G } _ { 1 }$ . On each new call to Explore, the partial execution is checked for consistency (lines 3-6). $\mathcal { G } _ { 1 }$ is consistent.
$$
\begin{array} { r l } { X = Y = 0 ; } & { \qquad \quad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \quad g _ { 1 } \vdots } \\ { \mathsf { T } _ { 1 } \langle \mathsf { c t a } _ { 1 } , \underbrace { \vphantom { \mathrm { T r e a } } } _ { \mathrm { X E L } } , \rangle } \\ { X _ { \mathrm { R E L } } ^ { \mathrm { c t a } } = 1 ; } \\ { Y _ { \mathrm { R E L } } ^ { \mathrm { c t a } } = 1 ; } & { \qquad \quad \downarrow \qquad \mathsf { S E G } } \end{array} \qquad \begin{array} { r l } { \underbrace { \mathsf { S } : \qquad } } & { \underbrace { \mathsf { S } : \qquad } } \\ { \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad } \\ { \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad } \\ { \vdots \qquad \vdots \qquad } \\ { \vdots \qquad } & { \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad } \\ { \vdots \qquad } & { \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad \vdots \qquad } \\ { \vdots \qquad } & { \vdots \ddots \qquad \vdots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots \dots } \\ { \underbrace { \mathsf { S } : \qquad } } & { \vdots \qquad \vdots } \\ { \underbrace { \mathsf { S } : \qquad } } & { \underbrace { \mathsf { S } : \qquad } } & { \underbrace { \mathsf { S } : \qquad } } \end{array}
$$
Next, the read event on $X$ from ${ \mathsf { T } } _ { 2 }$ is added (line 18) having two sources to read from $X$ (line 20): the initial write to $X$ , and the write event 2 . This provides two branches to be explored, with consistent executions $\mathcal { G } _ { 2 }$ and $\mathcal { G } _ { 3 }$ respectively.
Next, we add write on $Y$ from ${ \sf T } _ { 1 }$ to $\mathcal { G } _ { 2 } , \mathcal { G } _ { 3 }$ which results in executions $\scriptstyle { \mathcal { G } } _ { 7 }$ and $\mathcal { G } _ { 4 }$ respectively. Both $\mathcal { G } _ { 4 }$ and $\scriptstyle \mathcal { G } _ { 7 }$ are consistent executions.
$$
\begin{array} { r l r } { \left[ i n i t \right] _ { \ast } \quad } & { { } \mathcal { G } _ { 7 } \quad } & { \left[ i n i t \right] _ { \ast } \quad } & { \mathcal { G } _ { 4 } } \\ { 2 : \mathsf { W } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } \left( X , 1 \right) _ { \ast } ^ { \prime } \quad } & { { } 1 : \mathsf { R } _ { \mathrm { N A } } ^ { \mathrm { c t a } } \left( Y , 0 \right) \quad } & { \quad } & { 2 : \mathsf { W } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } \left( X , 1 \right) \quad } & { 1 : \mathsf { R } _ { \mathrm { N A } } ^ { \mathrm { c t a } } \left( Y , 0 \right) } \\ { \downarrow \quad } & { { } \setminus \quad } & { \downarrow \quad } & { \quad } \\ { 4 : \mathsf { W } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } \left( Y , 1 \right) \quad } & { { } 3 : \mathsf { R } _ { \mathrm { A C Q } } ^ { \mathrm { c t a } } \left( X , 0 \right) \quad } & { \quad } & { 4 : \mathsf { W } _ { \mathrm { R E L } } ^ { \mathrm { c t a } } \left( Y , 1 \right) \quad } & { 3 : \mathsf { R } _ { \mathrm { A C Q } } ^ { \mathrm { c t a } } \left( X , 1 \right) } \end{array}
$$
Reversible reads. In $\dot { \mathcal { G } } _ { 4 }$ , we observe that the read on $Y$ ( 1 ) can also read from the write $^ 4$ which was added to the execution later. Enabling $^ 1$ to read from $^ { 4 }$ involves swapping these two events so that the write happens before the corresponding read. Since $^ 2$ is po-before $^ 4$ , both of these events must take place before the read from $Y$ ( 1 ) for the $\mathsf { r f }$ to be enabled. The read from $X$ ( 3 ) however, has no dependence on the events in the first thread and happens after $^ 1$ . Therefore, we can delete (line 3 in DelayedRFs) $^ 3$ , and add the read from $X$ later, after enabling the $\mathsf { r f }$ from $\overline { { 4 } }$ to $^ 1$ (line 5 in
DelayedRFs). The optimality check (line 4 in DelayedRFs) is passed in this case (see also the paragraph on optimality below) and we obtain execution $\mathcal { G } _ { 5 }$ .
$$
\begin{array} { r l } & { \mathrm { ~ o n ~ a n t m e ~ c a s ~ p o n ~ m o ~ a n ~ g ~ n o m ~ y ~ s ~ a n a m m g ~ s ~ u n c ~ n e a s ~ } } \\ & { \mathrm { ~ \mathcal ~ \mathcal ~ \ell ~ ( L i n e ~ 1 ) ~ s ~ i n ~ i n ~ E S R 2 ~ o R E ) ~ f r o m ~ T _ 2 ~ H e r e , ~ X ~ } } \\ & { \mathrm { ~ r e a d ~ f r o m ~ ( L i n e ~ 2 0 , ~ E x P L O R ) ~ t h e ~ i n i t i a l } } \\ & { \mathrm { ~ \mathcal ~ \ell ~ 2 ~ T _ \ell ~ T h i s ~ r e s u l t s ~ i n ~ e x e c u t i o n s ~ \mathcal ~ \mathcal { G } _ 6 ~ a n d ~ } } \\ & { \mathrm { ~ \mathcal ~ \ell ~ h i c h ~ a r e ~ b o t h ~ c o n s i s t e n t . ~ } } \\ & { \mathrm { ~ \mathcal ~ \ell ~ \to ~ \ell - c o r . ~ \ell - c o r . ~ \ell - c o r . ~ \ell - c o r . ~ \ell - c o r . ~ \ell - c o r . ~ \ell - c o r . \ell - c o r . \ell - c o r . \ell - c o r . \ell } } \\ & { \mathrm { ~ \mathcal ~ \ell ~ [ ~ i n i t ] ~ } } \\ & { \mathrm { ~ \mathcal ~ \geq ~ \mathcal { N } { ~ R e a } ( \mathcal { X } , \ell 1 ) ~ } ^ { \mathrm { t h } } \texttt { 1 } \texttt { 1 } \cdot \mathbb { R e a } _ { \mathrm { \mathcal { X } } } ^ { \mathrm { c a } } ( Y , 1 ) } \\ & { \mathrm { ~ \mathcal ~ \geq ~ \mathcal { \cdot } ~ \mathcal { S } _ \mathrm { ~ \mathcal { R } } ~ \cdot } } \\ & { \mathrm { ~ \mathcal ~ \leq ~ \mathcal { F r } \ ' _ \mathrm { e a s } ~ ( \mathcal { Y } , \ell - \cdot ^ { - \chi } , ~ \ell ) ~ \in ~ \mathcal { S } _ \mathrm { ~ \mathcal { R } } \cap \mathcal { F } _ \mathrm { e a s } ( Y , 1 ) ~ } } \\ & { \mathrm { ~ \mathcal ~ \leq ~ \mathcal { F r } \ ' _ \mathrm { e a s } ~ ( \mathcal { Y } , \ell - \cdot ^ { \mathrm { ~ c o r ~ } } , ~ \ell ) ~ } } \\ & { \mathrm { ~ \mathcal ~ \geq ~ \mathcal ~ \cdot ~ Y \ ' _ \mathrm { e a s } ( \mathcal { Y } , \ell ) ~ } } \end{array}
$$
Optimality. From $\scriptstyle { \mathcal { G } } _ { 7 }$ , we do not consider the possibility of $Y$ reading from $\boldsymbol { \mathsf { 4 } }$ as it would result in an execution identical to $\mathcal { G } _ { 8 }$ , and consequently violate optimality. The CheckOptimal procedure checks it to ensure that no execution is explored more than once. This check enforces a “co-maximality” criterion on the events that are deleted while attempting a swap between a read event and a later write event: this is exactly where $\dot { \mathcal { G } } _ { 4 }$ and $\scriptstyle { \mathcal { G } } _ { 7 }$ differ. In $\scriptstyle { \mathcal { G } } _ { 7 }$ , while considering the later write on $Y$ ( 4 ) to read from for the read event ( 1 ), the deleted (line 3, DelayedRFs) read event on $X$ ( 3 ) reads from the initial write of $X$ which is not co-maximal since it is co-dominated by $^ 2$ (lines 3-5 in CheckOptimal). Hence, the check-in line 5 of CheckOptimal fails. In $\beta _ { 4 }$ however, the deleted read on $X$ ( 3 ) reads from a ${ \mathrm { C O } } _ { x }$ -maximal write, and the test passes. Thus, the algorithm only considers the possibility of the $Y$ reading from $^ 4$ in $\dot { \mathcal { G } } _ { 4 }$ , avoiding redundancy.
Program repair The exploration algorithm detects the assertion violation in $\mathcal { G } _ { 6 }$ (since both $a , b$ read values 1) and detects a data race between $^ 1$ and $^ 4$ .
If GPUMC exploration encounters a heterogeneous race between a pair of accesses then GPUMC automatically repairs the race. To do so, GPUMC changes the scope of the accesses to enforce an inclusion relation. After fixing a heterogeneous race GPUMC terminates its exploration.
Consider a variant of the SEG program where $T _ { 1 }$ and $T _ { 2 }$ are in different CTAs, GPUMC fixes the heterogeneous race by transforming the scope from cta to gpu.
$$
\begin{array} { r l } { X = Y = 0 ; } & { \ X = Y = 0 ; } \\ { \mathsf { T } _ { 1 } \langle \mathsf { c t a } _ { 1 } , \_ \cdot \rangle } \\ { X _ { \mathtt { R E L } } ^ { \mathtt { c t a } } = 1 ; } \\ { Y _ { \mathtt { R E L } } ^ { \mathtt { c t a } } = 1 ; } \end{array} \left\| \begin{array} { l l } { \mathsf { T } _ { 2 } \langle \mathsf { c t a } _ { 2 } , \_ \cdot \rangle } & { \quad \mathsf { T } _ { 1 } \langle \mathsf { c t a } _ { 1 } , \_ \cdot \rangle } \\ { a = Y _ { \mathtt { N A } } ^ { \mathtt { c t a } } ; } & { \quad \mathsf { X } _ { \mathtt { R E L } } ^ { \mathtt { g p u } } = 1 ; } \\ { b = X _ { \mathrm { A C Q } } ^ { \mathtt { c t a } } ; } & { \quad \mathsf { Y } _ { \mathtt { R E L } } ^ { \mathtt { c t a } } = 1 ; } \end{array} \right\| \begin{array} { l } { \mathsf { T } _ { 2 } \langle \mathsf { c t a } _ { 2 } , \_ \cdot \rangle } \\ { a = Y _ { \mathtt { N A } } ^ { \mathtt { c t a } } ; } \\ { b = X _ { \mathrm { A C Q } } ^ { \mathtt { g p u } } ; } \end{array}
$$
# 4.3 Soundness, Completeness and Optimality
Theorem 1. The DPOR algorithm for SRC11 is sound, complete and optimal.
Soundness. The algorithm does not continue exploration from any inconsistent execution as ensured by Lines 3 to 6 in Algorithm 1, and is therefore sound.
Completeness. The DPOR algorithm is complete as it does not miss any consistent and full execution. We prove this in the following steps:
We first show that starting from any consistent execution $\mathcal { G }$ , we can uniquely roll back to obtain the previous execution $\boldsymbol { \mathcal { G } } _ { p }$ (see the supplement for the algorithm to compute $\mathcal { G } _ { p }$ from $\mathcal { G }$ ). This is proved using the fact that we have a fixed order in exploring the threads, along with the conditions that allow a swap between a read and a later write to take place. To allow a swap of a read $r$ on some variable (say $x$ ), all events in Deleted respect “ ${ \mathsf { c o } } _ { x }$ -maximality”. This is enforced by CheckOptimal and allows us to uniquely construct the previous execution $\boldsymbol { \mathcal { G } } _ { p }$ .
Second, we show that $\mathrm { E x p z o R E } ( \mathcal { P } , \mathcal { G } _ { p } )$ leads to the call of Explore $( { \mathcal { P } } , { \mathcal { G } } )$ . This shows that if $\boldsymbol { \mathcal { G } } _ { p }$ is reachable by the DPOR algorithm, then $\mathcal { G }$ is also reachable (see Supplement .2, Lemma 3).
• In the final step, we show that walking backward from any consistent $\mathcal { G }$ we have a unique sequence of executions $\mathcal { G } _ { p } , \mathcal { G } _ { p - 1 } , \mathcal { G } _ { p - 2 } , . . .$ , till we obtain the empty execution $\mathcal { G } _ { \emptyset }$ . Thus, starting from $\mathrm { E x p z o R E } ( \mathcal { P } , \mathcal { G } _ { \emptyset } )$ , we obtain $\mathcal { G } ($ Supplementary .2, Lemma 5, Lemma 6).
Optimality. The algorithm is optimal as each full, consistent execution $\mathcal { G }$ is generated only once. Lines 23 and 15 of the Explore procedure ensure that each recursive call to Explore generates an execution that has a different rf edge or a different co edge. Also, during the DelayedRFs procedure, the swap of a read $r$ with a write $w$ is successful only when the deleted events respect “ ${ \mathsf { C O } } _ { x }$ -maximality”. As argued in completeness, for every (partial) consistent execution $\mathcal { G }$ , there exists a unique previous consistent execution $\boldsymbol { \mathcal { G } } _ { p }$ .
If the algorithm explores $\mathcal { G }$ twice, it means that there are two different exploration sequences with respective previous executions $\boldsymbol { \mathcal { G } } _ { p }$ and $\mathcal { G } _ { q }$ . This is a contradiction as we have a unique previous execution (see Supplementary Appendix .3, Theorem 3).
Polynomial Space The DPOR algorithm explores executions recursively in a depth-first manner, with each branch explored independently. Since the recursion depth is bounded by the size of the program, this approach ensures that the algorithm uses only polynomial space.
# 4.4 Exploring the Reads-From Equivalence
For simplicity, we have focused our presentation on exploring executions that contain the co relation explicitly. However, Algorithm 1 can be easily adapted to explore executions where co is not given explicitly. This corresponds to exploring the reads-from partitioning [22], a setting that is also supported by GenMC [44]. This is often a desirable approach, because it may significantly reduce the search space: there can be exponentially many executions, differing only in their co, all of which collapse to a single class of the reads-from partitioning.
Exploring the reads-from partitioning requires that every time a new execution is explored, the algorithm performs a consistency check to derive a co, so as to guarantee that the execution is consistent. If the program has no SC accesses, this check is known to be efficient for RC11 [11,47], taking essentially linear time [79]. These results easily extend to scoped RC11, by adapting the computation of the happens-before relation so as to take the scope inclusion incl into consideration. On the other hand, the presence of SC accesses makes the problem intractable [31,62], though it remains in polynomial time with a bounded number of threads [31,9].
# 5 Experimental Evaluation
We implement our approach as a tool (GPU Model Checker GPUMC) capable of handling programs with scopes. GPUMC is implemented in GenMC-Trust [44], and takes scoped C/C++ programs as input and works at the LLVM IR level. Similar to existing approaches, we handle programs with loops by unrolling them by a user-specified number of times. We conduct all our experiments on an Ubuntu 22.04.1 LTS with Intel Core i7-1255U $\times$ 12 and 16 GiB RAM.
We experiment with GPUMC on a wide variety of programs starting from litmus tests to larger benchmarks. We mainly compare its performance with Dartagnan [52,50], a state-of-the-art bounded model checker, which also handles programs with scope [78]. Dartagnan has recently integrated the PTX and Vulkan GPU consistency models into its test suite. Even though the consistency model considered by Dartagnan are different from SRC11, which GPUMC considers, Dartagnan is closest available tool to the kind of work we report in this paper. Two other tools that also handle programs with scopes are iGUARD [39] and Scord [40]. However, these tools do not reason about weak memory concurrency in GPUs. which makes their benchmarks not directly usable by GPUMC. In order to still experiment with them, we change their shared accesses to atomics.
# 5.1 Comparison with Dartagnan
We compare the performance of GPUMC with Dartagnan [52] on the implementation of four synchronization primitives (caslock, ticketlock, ttaslock, and XF-Barrier), taken from [52,81]. These benchmarks use relaxed atomics, which is a very important feature of real GPU APIs. All the 1 (caslock1, ticketlock1, ttaslock1, and XF-Barrier1) and 2 (caslock2, ticketlock2, ttaslock2, and XFBarrier2) variants are obtained by transforming the release and acquire accesses to relaxed accesses, respectively. Moreover, the XF-Barrier benchmark uses CTAlevel barriers for synchronization. Table 1 shows the results of the evaluation of these applications. We parameterize these applications by increasing the number of threads in the program, the number of CTAs, and the number of threads in a CTA. For comparing with Dartagnan, we focus on race detection.
In Table 1 the Grid and Threads columns denote the thread organization, and the total number of threads respectively. The Result column shows the observed result – whether a race was detected (R), or whether the program was declared safe and no race was reported (NR). The Time and Memory columns show the time taken in seconds and the memory consumed in MB taken by Dartagnan and GPUMC.
Table 1: Data race detection: Evaluating on parameterized, single kernel code Time Out (TO) =30 minutes. (Time in Seconds and Memory in MB respectively). The number of events per execution is less than 120. In column Result, R denotes race detected and NR denotes no race. The \* on two NR entries shows a wrong result in Dartagnan. In Grid column, X,Y represent X CTAs and Y threads per CTA.
We observe that in all examples except XF-Barrier, GPUMC and Dartagnan produce the same results, and GPUMC outperforms Dartagnan significantly in time and memory requirements. For the benchmarks XF-Barrier1 and XF-Barrier2 with grid structure (6,4) respectively, GPUMC successfully detects the underlying data race within a fraction of a second. The time and memory requirements we have reported for Dartagnan is with loop bound 12 as Dartagnan is unable to find the race even after unrolling to loop bound 12. On increasing the loop bound to 13, Dartagnan kills the process after showing a heap space error. In conclusion, in all the benchmarks in Table 1, GPUMC significantly outperforms Dartagnan.
# 5.2 Verification of GPU Applications
We evaluate GPUMC on medium to large real GPU applications, particularly for heterogeneous race and barrier divergence errors.
Table 2: Heterogenous race detection using GPUMC on GPU Applications . (Time in Seconds and Memory in MB respectively). Events column represents the maximum number of events across all executions.
Heterogeneous Races We experiment with four GPU applications – OneDimensional Convolution (1dconv), Graph Connectivity (GCON), Matrix Multiplication (matmul) and Graph Colouring (GCOL) from [40,39]. Each program has about 250 lines of code. For our experiments, we transform the accesses in these benchmarks with SC memory order and gpu scope. Finally, all these transformed benchmarks have SC accesses except GCON which has only relaxed accesses. We do not execute Dartagnan on these programs, as they are multikernel and involve CPU-side code, which makes it unclear how to encode them in Dartagnan.
Table 2 shows the 4 variants of each program by varying the grid structure. For instance, ldconv12 represents the version having 12 CTAs. The last two columns show the time and memory taken by GPUMC in detecting the first heterogeneous race. The detection of the first heterogeneous races in the ldconv, GCON, GCOL, and matmul benchmarks takes 4, 455, 11, and 18 executions respectively. In all cases, GPUMC detects the first race within 6 minutes.
Barrier Divergence Next, we evaluate GPUMC for detecting barrier divergence, with the results shown in Table 3. We consider four GPU applications – histogram [72], XF-Barrier, arrayfire:select-matches (arrayfire-sm) and arrayfire:warp-reduce(arrayfire-wr) [82,80], as well as GkleeTests1 and GkleeTests2 kernels from the GKLEE tests [55,80]. All these benchmarks except Histogram use SC accesses and have barrier divergence. Histogram has a mix of SC and relaxed accesses. In our experiments, we introduce a barrier divergence bug in the original histogram program [72, Chapter 19]. We vary the grid structures, similar to the benchmarks created for experimenting with the heterogeneous race detection.
Table 3: Barrier Divergence using GPUMC on various grid-structured programs (Time in seconds, Memory in MB). Events column represents the maximum number of events seen across executions.
Table 4: Race Repair using GPUMC on various grid-structured programs. #Race denotes the number of races detected and #Fix represents the number of changes made to fix the race. Events column represents the maximum number of events seen across executions.
# 5.3 Race Repair
Apart from detecting, GPUMC also repairs heterogeneous races as shown in Table 4 on five micro-benchmarks and three GPU applications [40,39]. The #Race column shows the number of races detected and fixed and the #Fix column shows the number of lines of code changes required to fix the detected races. In all cases, GPUMC detects and repairs all races within 3 secs. After repair, we let GPUMC exhaustively explore all executions of corrected programs (bench1, bench2, bench5, matmul finish within 10 minutes and bench3, bench4, GCOL and 1dconv finish within 6 hours). Finally, the Executions column shows the number of executions explored on running the corrected program, and the Events column shows the maximum number of events for all explored executions post-repair.
Fig. 4: 1dconv GCON SB Scalability: GPUMC execution time and the memory consumed to detect the heterogeneous race for 1dconv and GCON and the assertion violation in SB. The x-axis shows the total number of threads for GCON and SB, and CTAs for 1dconv. The y-axis measures the memory in megabytes (MB) and the time in seconds.
Table 5: Scalability of GPUMC on safe benchmark LB (Time in Seconds and Memory in MB). Executions column represents the executions explored.
# 5.4 Scalability
Fig. 4 shows the scalability of GPUMC for increasing number of threads on three benchmarks – SB (store buffer) and two GPU applications 1dconv, GCON. For SB, we create 24 programs with increasing threads from 2 to 25. For 1dconv, we create 30 programs with increasing CTAs from 1 to 30 with four threads per CTA. For GCON, we create 50 programs with increasing threads from 1 to 50. Fig. 4 shows the GPUMC execution time and the memory consumed to detect the heterogeneous race for 1dconv and GCON and the assertion violation in SB; the x-axis shows the total number of threads for GCON, SB and CTAs for 1dconv, and the y-axis measures the memory in megabytes (MB) and the time in seconds. We also experiment on the LB (load buffer) benchmark in Table 5. We create 21 programs with increasing threads (LB-2 to LB-22) and exhaustively explore all consistent executions. We observe that in all benchmarks GPUMC exhaustively explores more than 4 million executions within 5500 seconds.
# 6 Related Work
Semantics Weak memory concurrency is widely explored in programming languages for CPUs and GPUs [17,48,41,21], [16,65,35,1,76], compilers [70,20], and CPU and GPU architectures [14,71,13,61,60,33]. Although GPUMC follows scoped-RC11 semantics [61], it is possible to adapt our approach to several other GPU semantic models. However, developing a DPOR model checker for GPUs with all guarantees that explore executions with po rf cycle is a nontrivial problem, in general [41,21,38,63], which is future work.
GPU testing Testing of GPU litmus programs is used to reason about GPU features [74,13,77], reveal errors [74], weak memory behaviors [13], and various progress properties [77,75,76,42]. Complementarily, our model checker explores all executions to check the correctness of the GPU weak-memory programs.
Verification and testing of weak memory concurrency There are several DPOR algorithms for the verification of shared memory programs under weak memory such as TSO, PSO, release-acquire (RA) and RC11 [19,43,64,10,7,6,83]. DPOR algorithms have also been developed for weak consistency models such as CC, CCv and CM [12]. These are sound, complete, and optimal, although they incur an exponential memory usage. Recently, [45,44] proposed a DPOR algorithm applicable to a range of weak memory models incurring only polynomial space while being also sound, complete and optimal. On the testing front, we have tools such as C11-tester [59], and tsan11rec [58] for several variants of C11 concurrency. However, these tools do not address the verification of programs with scopes.
GPU analysis and verification Several tools propose analysis and verification of GPU programs including GPUVerify [18], G-Klee [55], GPUDrano [15], Scord [40], iGUARD [39], Simulee [80], SESA [56] for checking data races [39,40,27,18,85,69,34,57,84], divergence [26,27,18]. Other relevant GPU tools are PUG [53,54] and Faial [25]. However, these do not handle weak memory models. | GPU computing is embracing weak memory concurrency for performance
improvement. However, compared to CPUs, modern GPUs provide more fine-grained
concurrency features such as scopes, have additional properties like
divergence, and thereby follow different weak memory consistency models. These
features and properties make concurrent programming on GPUs more complex and
error-prone. To this end, we present GPUMC, a stateless model checker to check
the correctness of GPU shared-memory concurrent programs under scoped-RC11 weak
memory concurrency model. GPUMC explores all possible executions in GPU
programs to reveal various errors - races, barrier divergence, and assertion
violations. In addition, GPUMC also automatically repairs these errors in the
appropriate cases.
We evaluate GPUMC with benchmarks and real-life GPU programs. GPUMC is
efficient both in time and memory in verifying large GPU programs where
state-of-the-art tools are timed out. In addition, GPUMC identifies all known
errors in these benchmarks compared to the state-of-the-art tools. | [
"cs.LO",
"cs.PL",
"cs.SE"
] |
# I. INTRODUCTION
Microservices have emerged as a promising architecture for implementing large-scale applications. When using microservices, applications are designed as a set of loosely coupled components that may be easily developed and maintained by independent teams [31], [21]. The adoption of this architectural style has led many companies, including large companies such as Amazon, Netflix, and Uber, to migrate applications that have been previously implemented as monoliths to microservices [19], [5], [27], [20], [10], [37], [15].
Unfortunately, migrating an application to the microservice architecture is not a trivial task [25], [16]. Functionalities that have been designed in the monolith to execute as a single ACID transaction may be required to execute as a sequence of independent transactions after the migration, each implemented by a different microservice. This breaks the isolation among functionalities that are executed concurrently, leading to anomalous application behavior. Handling anomalous behavior is costly as it often requires the implementation of additional code, such as compensating actions [17], to correct the undesired effects of the loss of isolation and atomicity. In some cases, these costs outweigh the advantages of microservices, forcing developers to revert the application to a monolith [34].
The amount of anomalies a migration may foster is heavily dependent on how the monolith is decomposed, namely, on how the domain entities are assigned to the different microservices. Therefore, before deciding on a given decomposition, it is critical to understand how many anomalies a decomposition may generate, as dealing with anomalies typically dominates the migration cost [32]. The importance of estimating the complexity associated with a monolith decomposition has been recognized in the literature [4]. However, previous works in this direction do not identify concrete anomalies. The costs of avoiding or compensating anomalies varies greatly depending on the anomaly’s type [26], [36]. This information not only helps developers to choose the most cost-efficient decomposition, but also raises awareness of the concrete challenges they will face during the decomposition process.
In this paper, we propose the Microservices Anomaly Detector $( M A D )$ , a new framework to analyze the anomalies that result from the decomposition of a monolith into microservices considering bounded executions of the system. $M A D$ takes a monolithic application, a target decomposition, and the application’s SQL schema to automatically generate, from the transaction that implements a functionality in the monolith, the set of independent transactions that execute the same functionality in the microservices. Then, $M A D$ encodes the interleavings of these transactions that correspond to non-serializable executions of the original functionalities as a Satisfiability Module Theories (SMT) formula. Using the Z3 solver [11], MAD finds the satisfiable assignments of this formula that capture the interleaving that may generate anomalous behaviors. To the best of our knowledge $M A D$ is the first system that is able to precisely detect the anomalies that will result from the decomposition of monoliths in microservices.
Because $M A D$ performs an exhaustive search on the space of all transaction interleavings (that respect a predefined bound , as discussed later), the time it takes to serially analyze a decomposition may be large. To circumvent this limitation, MAD implements a novel divide-and-conquer technique capable of parallelizing the search. This makes
$M A D$ suitable to be applied to non-trivial code bases. To illustrate the power of MAD, we have applied it to seven different benchmarks, and we show how it may guide developers to find suitable decompositions of the monolith.
In summary, the contributions of this paper are as follows:
We propose a technique to formulate the problem of finding anomalies in a microservice decomposition of a monolith as an SMT problem.
• We propose a strategy to parallelize the task of finding the satisfiable assignments that capture anomalies.
• We propose a technique to describe the satisfiable assignments to the developer in a meaningful manner, in particular, by classifying the type of anomaly that may occur and the entities and functionalities involved.
• We present an experimental evaluation of the resulting system with 7 different benchmarks.
constraints that result from the monolith decomposition into account, and; iii) it can automatically chop the functionalities from the monolith into multiple sub-transactions, for different combinations of domain entities into microservice aggregates.
Several works have addressed the problem of how to aggregate the domain entities when migrating from a monolith to microservices. Nunes et al. [29] aggregate domain entities based on the transactional contexts of the monolithic application. Brito et al. [9] aggregate domain entities using topic modeling. Mono2Micro [24] and FoME [22] use runtime traces to cluster the domain entities. None of these tools capture the anomalous behaviors that may result from concurrent executions of functionalities. MAD complements these works by identifying the anomalies that result when a given decomposition is used in the migration process.
# III. EXAMPLE
# II. BACKGROUND
The problem of detection anomalies in transactional applications has been addressed in the literature using different approaches, including testing and validation.
Relevant examples of testing tools are systems such as MonkeyDB [8] and Cobra [35], that detect anomalies by using a black box approach. These tools generate sets of test inputs and capture the application output, comparing it against the expected behavior of the application. An anomaly is detected when a mismatch is found between the obtained and the expected outputs. These approaches are generic because they do not require access to the source code. Unfortunately, there is no guarantee that all possible interleavings are tested, hence, some anomalies may pass unnoticed. More importantly, when applied to our goals, these tools require a target decomposition to be implemented before it can be tested. In opposition, we aim at detecting problematic decompositions at design time, such that programmers avoid implementing decompositions that generate an undesirable amount of anomalies.
Early validation work for transactional systems, such as [13], [23], assume a single database and a set of transactions that can be executed in any order. More recent works such as ANODE [28] and CLOTHO [30] consider distributed storage but assume that all storage nodes replicate all entities, while also assuming that transactions can execute independently of each other. When considering the decomposition of monolith into microservices, one must take into account that individual sub-transactions were originally sequential operations of a single functionality in the monolith: this imposes constrains on the order of execution of these sub-transactions and the values read and written by each sub-transaction as well, which need to be taken into account in the analysis. Also, these systems take as input a fixed set of transactions, and are not able to automatically derive the sub-transactions that result from the decomposition of a functionality. MAD does not suffer from these limitations: i) it can model systems where each microservice has its own storage; ii) it takes the ordering
Figure 1 illustrates an example of how a given microservices decomposition originates a previously unexisting interleaving. In this scenario, there are two entities (Account and Wallet), two transactions (Total and Transfer), and a decomposition where Account is managed by microservice $M _ { 1 }$ and Wallet is managed by microservice $M _ { 2 }$ . Total gets the total amount of a client’s money (its account balance plus its wallet balance). Transfer withdraws an amount of funds from the client’s account balance and deposits it in their wallet. Considering that a client’s balance can only be transferred from their account to their wallet, it is expected that their total amount of funds of a client always remains the same. Since entities Account and Wallet are in different microservices, Total and Transfer would be split into sub-transactions, each of which executes in the microservice associated with the entity they are accessing.
Transaction chopping allows for interleavings between sub-transactions, which can lead to anomalies that were not possible in the monolithic version. One example is shown in Figure 1(c). In this case, the execution of Transfer interleaves with the execution of Total. Total sees an older version of a client’s account (Account1), implying that Total is executed before Transfer. However, Total sees the new version of the client’s wallet $\left( \mathtt { W a l l e t _ { 1 } } \right) ,$ ), whose balance was already updated by Tranfer, implying that Total is executed after Transfer. There is no serial order of the two functionalities that may lead to this execution. From this execution, one could incorrectly observe that the total funds of the client have have changed, something impossible in this scenario.
# IV. MICROSERVICES ANOMALY DETECTOR
This section describes the Microservices Anomaly Detector $( M A D )$ , a framework to automatically detect anomalies that would result from implementing a given microservices decomposition of a monolithic application. MAD works by comparing the feasible executions in the monolith and the feasible executions in a target microservice architecture, to detect anomalies under bounded executions that result from a specific decomposition while avoiding false positives.
Fig. 1. Example of how two functionalities can be divided when migrating from monolith to microservices and of an interleaving that leads to a anomaly.
# A. Overview
$M A D$ builds an abstract representation of the decomposed monolith based on the original monolith code and a userprovided decomposition. Using this abstract representation, $M A D$ builds SMT formulas that encapsulate the anomalous interleavings made possible due to the decomposition. Finally, the satisfiable assignments found by the SMT solver are classified based on their anomaly types and are returned to the developer, along with other metrics. In the following, we provide more detail about each of these steps.
Input: MAD takes as input a monolithic implementation of an application (including its SQL schema and source code) and a high-level description of how the monolith is decomposed into multiple microservices. In the current version, the source code must be a Java program written using the JDBC syntax (i.e., that uses SQL queries to access the entities, which are maintained by the application in a database). As an illustration, the SQL schema and the Java code for the scenario from Figure 1 is depicted in Listings 1 and 2, respectively. Although the current prototype only supports Java, the framework has been designed such that it can be extended to support additional programming languages. The decomposition of the monolith is expressed as the clustering of the domain entities into aggregates [29] (entities grouped in the same cluster are assumed to be managed by the same microservice) and is represented by a JSON file (Decomposition File). The JSON file used in the example scenario is depicted in Listing 3. Using this input, MAD executes the pipeline presented in Figure 2, which is composed of the following sequence of steps:
Step 1 (AR Compiler): The source code is compiled to an abstract representation (AR) that captures how the functionalities access the domain entities. In the AR, each functionality is represented by a sequence of read and/or write operations on domain entities (Monolith AR Program). The AR compiler is the only component of MAD that needs to be extended if support for additional programming languages is required.
Step 2 (Transaction Chopping): From the AR of the functionalities, an AR of the microservice decomposition is automatically generated (Microservices AR Program) by chopping the functionalities code into multiple subsequences, where each sub-sequence only accesses domain entities from the same aggregate. Each sub-sequence captures a transaction that is executed in a single microservice. Note that, in the monolith, each functionality would be implemented using a single transaction. In the microservice decomposition, sub-sequences that are part of the same parent functionality are executed as a sequence of independent transactions.
Step 3 (Divide and Conquer): Often, the AR program is too complex to be represented in a single encoding that can be effectively analyzed. Employing a divide and conquer strategy, MAD generates subsets of the functionalities (Functionalities Subsets). Analyzing these subsets independently and in parallel allows the analysis of the whole problem to terminate in reasonable time.
Step 4 (Formula Construction): From the AR and the subsets of functionalities, MAD generates SMT formulas encoding the program and whose satisfiable assignments are the possible interleavings between functionalities that can lead to anomalies in the decomposition. An SMT formula is built for each subset of functionalities, and their satisfiability can be checked by an SMT solver.
Step 5 (SMT Solver): MAD uses Z3 [11] to check the satisfiability of the SMT formulas. Satisfiable assignments correspond to cyclic graphs, with the vertices being operations and the edges the relations between operations, representing unserializable executions of functionalities [2]. These cycles have their length bounded by a system
public class exampleScenario private Connection connect = null; private int _ISOLATION = Connection.TRANSACTION_READ_COMMITTED; private int id; Properties p; public exampleScenario(int id) { this.id = id; p = new Properties(); p.setProperty("id", String.valueOf(this.id)); Object o; try { o = Class.forName("MyDriver").newInstance(); DriverManager.registerDriver((Driver) o); Driver driver = DriverManager.getDriver("jdbc:mydriver://"); connect $\mathbf { \Sigma } = \mathbf { \Sigma }$ driver.connect("", p); } catch (InstantiationException | IllegalAccessException | ClassNotFoundException | SQLException e) { e.printStackTrace(); } } public void Total(int clientId) throws SQLException { PreparedStatement stmt1 $\mathbf { \Sigma } = \mathbf { \Sigma }$ connect.prepareStatement( "SELECT balance FROM Account"+ " WHERE clientId = ?"); stmt1.setInt(1, clientId); ResultSet rs = stmt1.executeQuery(); rs.next(); int account_balance = rs.getInt("balance"); PreparedStatement stmt2 = connect.prepareStatement( "SELECT balance FROM Wallet"+ " WHERE clientId = ?"); stmt2.setInt(1, clientId); ResultSet rs2 = stmt2.executeQuery(); rs2.next(); int wallet_balance = rs2.getInt("balance"); int total_money = account_balance $^ +$ wallet_balance; } public void Transfer(int clientId, int accountBalance, int walletBalance, int amount) throws SQLException { PreparedStatement stmt1 = connect.prepareStatement( "UPDATE Account SET balance = ?"+ " WHERE clientId = ?"); stmt1.setInt(1, accountBalance - amount); stmt1.setInt(2, clientId); stmt1.executeUpdate(); PreparedStatement stmt2 $\mathbf { \Sigma } = \mathbf { \Sigma }$ connect.prepareStatement( "UPDATE Wallet SET balance = ?"+ " WHERE clientId = ?"); stmt2.setInt(1, walletBalance $^ +$ amount); stmt2.setInt(2, clientId); stmt2.executeUpdate(); }
}
Listing 2. Example of MAD’s input Java file.
Listing 3. JSON file with a microservices decomposition.
parameter denoted the Maximum Cycle Length (MCL), defined as the maximum number of edges to be considered when looking for satisfiable assignments. This parameter is required to prevent Z3 from continuously trying to find new satisfiable assignments which are supersets of previously found cyclic graphs.
Step 6 (Metrics Extractor): After all satisfiable assignments are found for all SMT formulas, $M A D$ processes the anomalies found and extracts complexity metrics to report to the user. These metrics include the total number of anomalies, dividing them in core anomalies (represented by cyclic graphs with the minimum cycle length required to represent an anomaly) and their extensions (represented by cyclic graphs, which are supersets of the core anomalies graphs), the anomalies types, and the entities, functionalities, and
sub-transactions involved.
Next, we describe each of these steps in more detail.
# B. Abstract Representation
MAD receives as input the Java source code of the monolith and compiles it to an abstract representation (AR), originating the Monolith AR Program. The AR facilitates the extraction of information, including the transactions, types of parameters, and execution order. In Figure 3, we present the structure of the monolith AR.
To represent the SQL schema, MAD uses two elements, Table and Column. Table is represented by a name and a list of Columns. Column is represented by a name, and a type (int, real, string, boolean).
The application implementation is represented by a set of Original Transactions. Each Original Transaction captures the code of a functionality in the monolith and has a name, a list of Statements, a list of Expressions, and a list of Parameters. Each Statement is represented by a name, an SQL query (select, update, insert or delete), and a path condition, which is an expression that associates the execution of the statement with a condition in the program if it occurs inside a conditional block. The Expressions can be of four types: Unary Operation; Binary Operation; Value; and Variable. The Unary and Binary Operations have one operation and the Expression(s) to which the operation is applied, respectively. The Value expressions represent the static values of the Java program using a type and a value. At last, the variable expressions represent the variables used in the Java program using the variable name. These variables include the ones used for the instructions, to store the rows read, and to hold the values of columns read from the database. To conclude, the Parameters are a specific type of Value expression, which are represented additionally by their name.
# C. Microservices Decomposition AR
From the Monolith AR Program and the JSON Decomposition File, MAD proceeds to create the Microservices AR Program. This step consists of applying a transaction chopping algorithm to the original functionalities of the monolith to transform each of them in a sequence of sub-transactions. The sub-transactions are represented in the AR by the following attributes: name; list of SQL operations; their original transaction name; and their microservice name.
The transaction chopping algorithm works as follows. For each original transaction, $M A D$ iterates over the original sequence of operations and generates sub-transactions based on the entities that are accessed by these operations, assuming that each operation only accesses one entity. In each iteration, MAD evaluates if the currently accessed entity belongs to a different microservice than the previously accessed entity. If so, a new sub-transaction is created, starting with the current operation since it would execute in a different microservice. Otherwise, the current operation is added to the most recent sub-transaction created. After the algorithm finishes, the resulting Microservices AR Program is shown to the programmer. This transaction-chopping algorithm alleviates the programmers from having to divide the transactions by hand. Yet, the programmer may finetune the chopping, for instance, by re-ordering operations in the code of the original transaction (to automate such optimizations is outside the scope of this work).
Fig. 2. MAD’s pipeline.
Fig. 3. Monolith AR structure.
As an example, consider the scenario from Figure 1. By applying the previously described algorithm, from the original transactions depicted in Figure 1(a) we obtain a representation of the microservices version depicted in Figure 1(b). Each sub-transaction only accesses one microservice and is executed as an independent transaction. However, it is still part of the original transaction execution flow.
# D. Divide and Conquer
The first step of the Divide and Conquer strategy is to generate all the combinations of functionalities (original transactions) of size smaller than the system parameter Maximum Cycle Length (MCL). Note that, since an anomalous interleaving requires that at least two operations belong to the same instance of a functionality, at most $M C L - 1$ functionalities can be involved in a cycle without exceeding the MCL.
Let $n$ be the number of functionalities. The number of $\textstyle \sum _ { x = 1 } ^ { M C L - 1 } { } _ { n } C _ { x } . ~ M A D$ starts by creating one thread for each combination of size 1 (i.e., involving a single functionality). Each thread explores the possible interleavings between the operations of each functionality. When all these threads finish their analysis, the process is repeated for size 2 combinations, already excluding the anomalies involving only size 1 combinations. This process continues until all combinations of size $M C L - 1$ are analyzed.
Our approach has resemblances to the well-known Cubeand-Conquer algorithm [18], except that $M A D$ splits the search space explicitly by taking advantage of domain knowledge. Hence, our approach is closer to a classic divide-and-conquer algorithm. Note that each thread analyzes a different combination of functionalities. Moreover, constraints are added such that solutions with $k - 1$ functionalities are excluded when solving a formula involving $k$ functionalities. Although $M A D$ generates one formula per combination, the search is done incrementally. Furthermore, the SMT formula handled in each thread is much smaller than solving the whole problem (i.e., considering all functionalities at once), allowing the parallelization of the search process and providing shorter solving times.
# E. SMT Encoding
MAD encodes the AR program into an SMT formula such that any satisfiable assignment corresponds to an anomaly. We first provide a high level view of the SMT formulas and the information required to be represented in it, and later present the detailed representation.
We base our anomaly detection on Adya et al. definition of anomaly [2]. An execution is represented as a graph, where vertices correspond to operations belonging to transactions and edges represent data dependencies between operations. An anomaly is present if the graph is cyclic and contains at least two dependency edges and at least two operations belonging to the same transaction, also represented as an edge. As such, we must encode the transactions’ operations and the data dependencies that different operations may present.
Different types of dependency edges may generate different types of anomalies. As such, we encode information regarding the type of dependencies between operations, which we use to identify which type of anomaly is represented in the cycle.
Furthermore, two operations might belong to different transactions, executing in different microservices, but belong to the same functionality. Operations in the same functionality must also execute in isolation to provide equivalent execution to the monolith, and interleavings may generate new anomalies. Therefore, we encode information regarding the functionality and microservice associated with each operation.
First, we represent the basic elements of the system (operations, sub-transactions, original transactions, and microservices). Second, the associations between the basic elements (e.g., to which sub-transaction does a given operation belong). Third, the consistency guarantees offered by the environment where the microservices execute. Fourth, the possible types of relations between operations. Lastly, the format of the cycles MAD wants the SMT solver to find.
1) Representation of the Basic Elements: For the representation of the basic elements, $M A D$ uses sorts, which can be considered as types of objects. To represent instances of the operations, sub-transactions, and original transactions, MAD uses three sorts, $\begin{array} { r } { O , T , } \end{array}$ , and $F$ , respectively. Based on the AR of the program, $M A D$ defines a unique name for each operation, sub-transaction, original transaction, and microservice. These unique identifiers are declared using the following sorts: ONames for operations; TNames for sub-transactions; FNames for original transactions; and MNames for microservices. Associated with the basic elements, there are functions and predicates used in the formulas. The most relevant ones are listed in Table I and Table II.
2) Associations between Formula Components: Let FNames $\mathbf { \Omega } = \{ T x n _ { 1 } , \ldots , T x n _ { f } \}$ denote the name set of $f$ original transactions and let $T N a m e s = \{ T x n _ { 1 , 1 } , \dots , T x n _ { f , t } \}$ denote the name set of sub-transactions where $T x n _ { i , j }$ refers to the $j ^ { t h }$ sub-transaction in original transaction $T x n _ { i }$ . Moreover, let $O N a m e s = \{ O p _ { 1 } , . . . , O p _ { k } \}$ and $M N a m e s =$ $\{ M _ { 1 } , \ldots , M _ { m } \}$ denote the name set of $k$ update operations and $m$ microservices, respectively. For any two operations $O p _ { i }$ and $O p _ { j }$ from the same transaction where $O p _ { i }$ occurs before $O p _ { j }$ , then we have $j ~ > ~ i$ . Finally, let $F ( O p _ { i } )$ , $T ( O p _ { i } )$ and $M ( O p _ { i } )$ denote the names of the original transaction, sub-transaction and microservice of execution for operation $O p _ { i }$ . Note that all these name sets and name associations are defined through a simple analysis of the program. Afterwards, these are used to encode the relations between the system’s components as follows (see Figure 4):
C1: Every two instances of operations related by an WR edge have the effects of the first instance visible to the second instance (Equation 1).
C2: Every two instances of operations related by an WW edge have the first instance happening before the second instance (Equation 2).
C3: Every two instances of operations related by an $R W$ edge have the effects of the second instance not visible to the first instance (Equation 3).
C4: Every instance of an update operation must have an OName from the set of update operations names, and viceversa (Equation 4).
C5-C7: Every instance operation with a given OName needs to belong to an instance of sub-transaction (specific TName), original transaction (specific FName) and microservice (specific MName) as specified in Equations 5, 6 and 7.
C8: Every two instances of operations that have a ar relation between them follow an execution order where the first instance occurs before the second instance (Equation 8).
C9: Every two instances of operations that belong to the same original transaction need to follow a sequential execution order according to their order in the Java program (Equation 9).
C10: Every two instances of operations that have a dependency relation between them $( D )$ do not have an $S T$ or SOT relation and do have an WW, WR or $R W$ relation (Equation 10).
C11: Every two instances of operations that have any relation between them $( X )$ do have an ST, SOT, WW, WR or RW relation (Equation 11).
3) Consistency Models: To make an analysis faithful to the environment where the microservice systems will execute, we assume two consistency models: Serializability and Eventual Consistency. Between operations of the same subtransaction and other operations of the same microservice, we assume that they will respect Serializability. Between operations of different sub-transactions or that execute on different microservices, we assume Eventual Consistency, as commonly designed in a microservice architecture 1 By default, $M A D$ ’s implementation enforces Eventual Consistency between the visibility effects of the operations. However, since we want to enforce Serializability between operations of the same sub-transaction and other operations of the same microservice, we add assertions to the SMT formula that model the behavior of consistency models. The constraints for Read Committed, Repeatable Read and Linearizability [6], [1] are presented in Figure 4, and Serializability results from the conjunction of these constraints.
C12 (Read Committed): for every three instances of operations $( o _ { 1 } , o _ { 2 } , o _ { 3 } )$ , if $o _ { 1 }$ and $o _ { 2 }$ belong to the same instance of sub-transaction, the effects of $o _ { 1 }$ are visible to $O _ { 3 }$ , and $o _ { 1 }$ and $O _ { 3 }$ belong to the same microservice, then the effects of $o _ { 2 }$ are also visible to $O _ { 3 }$ (Equation 12).
C13: (Repeatable Read): for every three instances of operations $( o _ { 1 } , o _ { 2 } , o _ { 3 } )$ , if $o _ { 1 }$ and $o _ { 2 }$ belong to the same instance of sub-transaction, the effects of $O _ { 3 }$ are visible to $o _ { 1 }$ , and $o _ { 1 }$ and $O _ { 3 }$ belong to the same microservice, then the effects of $O _ { 3 }$ are also visible to $o _ { 2 }$ (Equation 13).
C14 (Linearizability): for every two instances of operations $( o _ { 1 } , o _ { 2 } )$ , if $o _ { 1 }$ happens before $o _ { 2 }$ , and $o _ { 1 }$ and $o _ { 2 }$ belong to the same microservice, then the effects of $o _ { 1 }$ are visible to $o _ { 2 }$ (Equation 14).
TABLE I KEY FUNCTIONS USED IN FORMULAS.
TABLE II KEY PREDICATES USED IN FORMULAS.
4) Types of Edges: To encode the $S T$ and $S O T$ edges, MAD expresses the following two properties, respectively:
C15: Every two instances of operations that belong to the same instance of sub-transaction are related via an $S T$ edge, and vice-versa (Equation 15).
C16: Every two instances of operations that belong to the same instance of original transaction and do not belong to the same instance of sub-transaction are related via an SOT edge, and vice-versa (Equation 16).
For the dependency edges (RW, WR, WW), constraints are defined for each pair of sub-transactions to establish whether or not it is possible to have a dependency between instances of operations of those sub-transactions. For instance, if two instances of operations $o _ { 1 }$ and $o _ { 2 }$ access different tables, then a WW can never occur. Otherwise, if both $o _ { 1 }$ and $o _ { 2 }$ write in the same table, then a constraint is added to verify if it is possible to have a row where the operations conflict.
5) Cycles Assertions: A cyclic anomalous graph is:
C17: A cycle with at least one $S T$ or SOT edge and at least two dependency edges (RW, WR, WW). This is captured in Equation 17 where $k$ denotes the size of the cycle.
Recalling the example anomaly presented in Figure 1(c). $M A D$ detects that anomaly by finding the cyclic graph that can be seen in Figure 5. The cycle contains two SOT edges and two dependency edges (RW and WR), and represents the interleaving of the original transactions Total and Transfer.
# $F .$ Metrics Extractor
$M A D$ includes a Metrics Extractor component, that gathers information regarding the possible anomalies the microservices application may face when the given decomposition is used. The information collected includes the total number of anomalies, classified in core anomalies and extensions, the number of anomalies per type and, finally, the entities, functionalities, and sub-transactions involved in each anomaly. We decided to capture these indicators as they allow the programmer to have a better understanding of the potential anomalies and of the amount of effort required to prevent each of them. Furthermore, these figures capture the types of anomalous behaviors to be expected and the combinations that originate those anomalous behaviors.
After the SMT solver finishes its analysis for each subset of functionalities, MAD obtains the Satisfiable Assignments, which represent the detected anomalies. Using these assignments, $M A D$ performs two steps. Firstly, it classifies the obtained assignments into anomaly types, following the definitions of Adya et al [2]. Secondly, it groups the anomalies by sets of entities, functionalities, and subtransactions, to understand which combinations can lead to anomalous behaviors. We now dive deeper into the implementation of each step of the Metrics Extractor.
To categorize the obtained assignments as anomalies, $M A D$ matches the dependency cycle obtained by the assignment with the generalized dependency cycles described in Adya et al [2]. Furthermore, MAD categorizes the anomalies as either core anomalies or extensions. A core anomaly is one whose execution cycle has the minimum size required to express its categorized anomaly. Extensions are anomalies whose execution cycle includes a core anomaly with additional operations that do not affect the categorized anomaly. This distinction is important since if
System Component’s Constraints:
$$
\begin{array} { r l } & { \forall o _ { 1 } , o _ { 2 } \in O \ : W R ( o _ { 1 } , o _ { 2 } ) \neq v i s ( o _ { 1 } , o _ { 2 } ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : W W ( o _ { 1 } , o _ { 2 } ) \neq a v ( o _ { 1 } , o _ { 2 } ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : R W ( o _ { 1 } , o _ { 2 } ) \neq v i s ( o _ { 2 } , o _ { 1 } ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : R W ( o _ { 1 } , o _ { 2 } ) \neq ( o n a m e ( o _ { 1 } ) = O p _ { 1 } \lor . . . \lor o n a m e ( o _ { 1 } ) = O p _ { k } ) } \\ & { \forall o _ { 1 } \in O : i s , u \mu d a t e ( o _ { 1 } ) \neq ( o n a m e ( o _ { 1 } ) = O p _ { 2 } ) \neq ( ( t u a m e ( o _ { 2 } v a t ( o _ { 1 } ) ) = T ( O p _ { 2 } ) ) } \\ & { \forall o _ { 1 } \in O , O p _ { 2 } \neq O N a m e s : ( o n a m e ( o _ { 1 } ) = O p _ { 2 } ) \neq ( ( t u a m e ( o p _ { 2 } v a t ( o _ { 1 } ) ) = T ( O p _ { 2 } ) ) } \\ & { \forall o _ { 1 } \in O , O p _ { 2 } \in O N a m e s : ( o n a m e ( o _ { 1 } ) = O p _ { 2 } ) \neq ( ( m a m e ( o r ) g _ { 2 } ) ) } \\ & { \forall o _ { 1 } \in O , O p _ { 2 } \in O N a m e s : ( o n a m e ( o _ { 1 } ) = O p _ { 3 } ) \neq ( m a m e ( o _ { 1 } ) = M ( O p _ { 2 } ) ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : a r ( o _ { 1 } , o _ { 2 } ) \neq ( o t i m e ( o _ { 1 } ) < o t i m e ( o _ { 2 } ) ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : a r ( o _ { 1 } , o _ { 2 } ) \neq ( o t i m e ( o _ { 1 } ) < o t i m e ( o _ { 2 } ) ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : o . r ( e _ { 1 } , o _ { 2 } ) \neq ( o r d e _ { 1 } \lor ( o _ { 1 } ) = F ( O p _ { 2 } ) : } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : ( o _ { 1 } ) = p o r t ( e _ { 2 } ) > ( o r d e _ { 1 } \lor ( o _ { 1 } ) = O r d e _ { 2 } \land ( o n e ( o _ { 2 } ) = O p _ { 2 } ) ) } \end{array}
$$
$$
\begin{array} { r l } & { \forall o _ { 1 } , o _ { 2 } \in O : D ( o _ { 1 } , o _ { 2 } ) \Rightarrow ( \neg ( S T ( o _ { 1 } , o _ { 2 } ) \lor S O T ( o _ { 1 } , o _ { 2 } ) ) \land ( W W ( o _ { 1 } , o _ { 2 } ) \lor W R ( o _ { 1 } , o _ { 2 } ) \lor R W ( o _ { 1 } , o _ { 2 } ) ) ) } \\ & { \forall o _ { 1 } , o _ { 2 } \in O : X ( o _ { 1 } , o _ { 2 } ) \Rightarrow ( S T ( o _ { 1 } , o _ { 2 } ) \lor S O T ( o _ { 1 } , o _ { 2 } ) \lor D ( o _ { 1 } , o _ { 2 } ) ) } \end{array}
$$
Consistency Models Constraints:
$$
\begin{array} { r l } & { \forall o _ { 1 } , o _ { 2 } , o _ { 3 } \in O : ( S T ( o _ { 1 } , o _ { 2 } ) \wedge v i s ( o _ { 1 } , o _ { 3 } ) \wedge ( m n a m e ( o _ { 1 } ) = m n a m e ( o _ { 3 } ) ) ) \Rightarrow v i s ( o _ { 2 } , o _ { 3 } ) } \\ & { \forall o _ { 1 } , o _ { 2 } , o _ { 3 } \in O : ( S T ( o _ { 1 } , o _ { 2 } ) \wedge v i s ( o _ { 3 } , o _ { 1 } ) \wedge ( m n a m e ( o _ { 1 } ) = m n a m e ( o _ { 3 } ) ) ) \Rightarrow v i s ( o _ { 3 } , o _ { 2 } ) } \\ & { \qquad \forall o _ { 1 } , o _ { 2 } \in O : ( a r ( o _ { 1 } , o _ { 2 } ) \wedge ( m n a m e ( o _ { 1 } ) = m n a m e ( o _ { 2 } ) ) ) \Rightarrow v i s ( o _ { 1 } , o _ { 2 } ) } \end{array}
$$
Edge Type Constraints:
$$
\begin{array} { l l } { \forall o _ { 1 } , o _ { 2 } \in O : ( p a r e n t ( o _ { 1 } ) = p a r e n t ( o _ { 2 } ) ) \Leftrightarrow S T ( o _ { 1 } , o _ { 2 } ) } \\ { \forall o _ { 1 } , o _ { 2 } \in O : ( ( o r i g t x ( o _ { 1 } ) = o r i g t x ( o _ { 2 } ) ) \wedge ( p a r e n t ( o _ { 1 } ) \neq p a r e n t ( o _ { 2 } ) ) ) \Leftrightarrow S O T ( o _ { 1 } , o _ { 2 } ) } \end{array}
$$
Cycle Length Constraints:
$$
\exists \partial _ { 1 } , o _ { 2 } , . . . o _ { k } \in O : D i s t i n c t ( o _ { 1 } , o _ { 2 } , . . . , o _ { k } ) \land ( S T ( o _ { 1 } , o _ { 2 } ) \lor S O T ( o _ { 1 } , o _ { 2 } ) ) \land D ( o _ { 2 } , o _ { 3 } ) \land . . . \land X ( o _ { 4 } , o _ { i + 1 } ) \land . . . \land D ( o _ { k } , o _ { 4 } )
$$
Fig. 4. MAD’s models constraints
Fig. 5. MAD’s cyclic graph for the example anomaly.
current version is not able to parse joins or implicit updates. This is not a fundamental limitation, because these operations can be represented through combinations of reads and writes, which are supported by the AR Compiler and are enough to cover all benchmarks used in the evaluation. To define a join, one can divide the query with the join operation into two or more queries, each accessing only one table. Similarly, to represent an implicit update, one can divide it into a read, to retrieve the current value, followed by a write with the update expression. These transformations could be implemented by leveraging the current AR Compiler or by adding a pre-processing step to automatically decompose the operations. However, this exercise falls outside the scope of this work.
a developer fixes the source of the core anomalies, it will also remove its related extensions.
For the second step, $M A D$ iterates over the anomalies found and checks the entities, functionalities, and sub-transactions involved for each anomaly. This metric restricts the sources of anomalous behavior to sets of entities/functionalities/sub-transactions, reducing the search space to be explored by the developer to identify the source of the anomalous behavior.
# H. Maximum Cycle Length
As noted before, MAD uses the parameter Maximum Cycle Length (MCL) to bound the search space. The default value of MCL is set to 4, given that the anomalies detected by our tool can be identified by cycles of this length [7]. By setting MCL to a value that allows detecting all serializability anomalies supported by MAD (see Section V-C), we bound the exploration without generating false negatives.
# G. Supported Syntax
# V. EVALUATION
In the current version of $M A D$ , the source code needs to be a program written in Java using the JDBC syntax. If needed, it is possible to extend the AR Compiler to support other additional programming languages. Furthermore, the
$M A D$ aims at identifying the anomalies that can emerge when migrating a monolith to microservices following a given decomposition. Our experimental evaluation focuses on the following key research questions: RQ1: Can MAD offer insights regarding the best decompositions? RQ2: Can MAD classify the anomalies to help identifying access patterns that cause the errors? RQ3: How long does it take to execute MAD? RQ4: How effective is the Divide and Conquer strategy in improving the MAD’s performance?
We gathered seven benchmarks based on GitHub applications and, using a migration tool [29], we have generated two microservices decompositions for each benchmark. By applying $M A D$ to these benchmarks and decompositions, we are able to address the research questions above.
# A. Experimental Setup
We use benchmarks inspired by applications found on GitHub for the experimental evaluation. Our process to set the benchmarks consists of three steps: 1) gathering monolithic applications from GitHub; 2) adapting them to the syntax processed by $M A D \ ; \ 3 )$ generating two microservices decompositions of each application with the help of a migration tool [29] that supports programmers on the task of grouping the entities by the microservices. The applications we have used in the evaluation are the following:
TPC- $\mathbf { C } ^ { 2 }$ is defined in the OLTP-Bench [12] project and simulates the behavior of a delivery and warehouse management system;
FindSportMates3 (findmates) is an application used as a benchmark in other microservices works [38], [33] and simulates a platform where users can manage and find events to connect with other users;
jpabook4 simulates a shop where members can order items and track the delivery process;
JPetStore5 (jpetstore) is an application highly used by previous microservices works [38], [14], [9], [22], [39] and simulates an online pet store where each user has an account and can browse through a catalog of pets to choose which pets they want to order;
spring-petclinic6 (petclinic) is an application used as a benchmark by a previous microservices work [14] and simulates the operation of a pet clinic;
myweb7 is an application that simulates the behavior of the web allowing users to have roles and manage resources;
spring-mvc-react8 (react) is a platform where users can post questions and answers with tags associated with them. Besides that, the system also allows users to upvote or downvote publications, which influences the users’ popularity.
We have selected these benchmarks because they cover a wide range of domain areas and have implementations that address real-world scenarios and, yet, they are simple enough to be processed by the MAD prototype in a reasonable time.
For each benchmark, we analyze three decompositions. First, the mono decomposition, which represents the monolithic version of the benchmark and shows the initial number of anomalies. For this decomposition, the number of sub-transactions will always be the same as the number of functionalities since there is no division of transactions in the monolithic version. We assume that the monolith is correct and contains no anomalies, so any anomaly arising in the microservices’ decompositions must have resulted from the migration. Second, the “best” decomposition, which is the one with the highest Silhouette Score (a metric used to assess how well the clustering of the entities is done) calculated by the migration tool [29]. At last, in the full decomposition, each entity is managed by a different microservice, which is the worst-case scenario in terms of potential anomalies.
In this context, we use $M A D$ considering four as the maximum cycle length. The process to choose this value consisted of starting with value three since it is the minimum number of edges required to detect a cycle with an anomaly, and incrementing it to ensure that all core anomaly types could be detected (requires length four [7], [30]), while still assuring that $M A D$ was able to analyze the benchmarks within a timeout limit of 4 hours (14400 seconds). We defined this timeout limit as the reasonable amount of time a programmer would wait for the analysis to be complete. The evaluation was performed in a virtual machine with 32 virtual CPU cores running on two Intel(R) Xeon(R) Gold 5320 CPUs at 2.2GHz and 128GB of DDR4 RAM with Intel Optane Memory configured in App Mode. The virtual machine uses Ubuntu 18.04.4 LTS, Java 8, and version 4.12.3 of Z3 with the default configuration.
# B. RQ1: Insights Gained from Using MAD
Table III presents the results obtained when applying $M A D$ to our benchmarks. By looking at the number of anomalies found in each decomposition, $M A D$ allows the programmers to assess how problematic each decomposition will be, therefore enabling them to make a more informed decision when migrating to microservices. For instance, in jpabook, jpetstore, and react, even their “best” decompositions have anomalies. This occurs because the “best” decompositions for these cases require the functionalities to be chopped into multiple sub-transactions, opening the door for more interleaving between the functionalities. Note also that all microservices decompositions, except jpetstore best and myweb full, have a fairly higher number of anomalies when compared with the number of core anomalies. Thus, a fair portion of the anomalies found (between 54 and 72 percent) are simply extensions of smaller number of core anomalies and, therefore, can be eliminated if the core anomalies are eliminated. The findmates and petclinic benchmarks have no anomalies in any of their decompositions because their operations are mostly reads and the functionalities tend to be short. This, respectively, leads to fewer conflicts between accesses and fewer interleavings between functionalities.
TABLE III ANOMALIES DETECTED.
E=entities; F=functionalities; M=microservices; ST=sub-transactions; CA=core anomalies; TA=total anomalies; ET=execution time)
Another aspect one can notice in the table is that all the “best” decompositions, except for jpetstore, have two microservices. From our understanding, this phenomenon is possibly related to Martin Fowler’s Strangler Fig pattern,9 where the migration to microservices is done incrementally by extracting a few services at a time, considering the coupling between entities. The Silhouette Score might indirectly take this into account, resulting in its value suggesting that from the monolithic implementation, the most appropriate decomposition to microservices is to migrate to two microservices.
# C. RQ2: Classifying Anomalies
We now show the results for the anomaly classification task performed by MAD. In particular, we show the number of anomalies classified by type and by set of subtransactions. Being aware of the anomaly types involved in a decomposition allows developers to gain better insight on the possible costs and work required to mitigate the effects of anomalies in a given decomposition. For example, a decomposition with only Non-Repeatable Reads could be mitigated with a light-weight weakly consistent transactional protocol like Transactional Causal Consistency [3] while Lost Updates or Write Skews require a strongly consistent transactional protocol like Snapshot Isolation or Strict Serializability [1].
Table IV shows the number of anomalies found for each decomposition by type of anomaly. We are only considering the analysis of the sub-transactions metrics for one decomposition, but this can be done for all decompositions. Note that we omit the results for the findmates and petclinic benchmarks since they have no anomalies in any decomposition. Also, the Write Skews are counted together with the Lost Updates because the write skew pattern also corresponds to one of the lost update patterns. The only way to distinguish between them would be to also consider the rows accessed. If the same row was being accessed, then it would be a Lost Update. Otherwise, it would be a Write Skew. However, the current patterns used in our approach only consider the graph cycle edges and operation types, and does not include information regarding the accessed rows which is required to make this distinction.
TABLE IV MAD ANOMALIES FOUND PER TYPE.
(DR=dirty read, DW=dirty write; LU=lost update; WS=write skew, NRR=non-repeatable read; PR=phantom read; RS=read skew, Ext=extensions)
In Table V, we present the core anomalies found in the TPC-C full decomposition. Developers may leverage this information to guide their decomposition design, identifying the most costly entity decouplings. For instance, notice that the combination of entities [oorder, order line] are heavily coupled, as decomposing these entities in different microservices generates five new anomalies. Merging these entities in the same microservice could significantly reduce anomaly mitigation costs when migrating the application.
Furthermore, in Table VI, we present the combinations of sub-transactions that originate the core anomalies in the decomposition. This information highlights the key sections in the application that require more care when migrating the application, allowing developers to be better informed during the decomposition process on what anomalies they will face and what kind of techniques will be required to mitigate the effects of said anomalies.
# D. RQ3: MAD Execution Time
The last column of Table III depicts the time required to execute MAD on each decomposition. The execution time of CMMAM is not presented in the table, since it is negligible $( \approx 0 \mathrm { s } )$ . As can be observed, although $M A D$ can analyze these cases with more precision than CMMAM, $M A D$ often requires non-negligible time to perform the analysis. MAD does not require long executions for simple applications nor when the decompositions do not use many sub-transactions. However, for complex applications with a high number of functionalities and/or sub-transactions, $M A D$ needs to analyze a large number of combinations of transactions, therefore taking more time to finish its execution (in our experiments, the longest execution time was observed when analyzing the full decomposition of the jpetstore, that took almost 3 hours).
TABLE V TPC-C full core anomalies ENTITIES.
(DW=dirty write; LU=lost update; WS=write skew, RS=read skew
TABLE VI TPC-C full core anomalies SUB-TRANSACTIONS.
TABLE VIIDIVIDE AND CONQUER PERFORMANCE.
NO-DC: without divide and conquer; S-DC: sequential divide and conquer; P-DC: parallel divide and conquer
# E. RQ5: Divide and Conquer Performance
To evaluate if MAD’s divide and conquer strategy improves the performance of MAD’s analysis and makes the analysis of more complex applications/decompositions feasible, we compare MAD’s performance with and without the divide and conquer strategy when applied to the same decompositions. To assess the impact of the parallelization, we also consider the performance of the divide and conquer strategy single-threaded and multi-threaded.
The results in Table VII show that the divide and conquer strategy can significantly shorten the analysis time. Gains are more significant for complex cases since each SMT formula is much smaller, and it mitigates the time and space complexities by not considering all the original transactions simultaneously. As a result, all analysis can be performed within the time limit (4 hours $= 1 4 { , } 4 0 0$ seconds). Without this strategy, MAD exceeds the timeout when analyzing decompositions “best” and full of the jpetstore and react benchmarks. However, for simple cases, MAD’s performance tends to be worse with the divide and conquer strategy. This occurs because the strategy originates an overhead to $M A D$ ’s analysis by requiring unnecessary iterations over combinations with no anomalies. In those simple cases, MAD would analyze the interleavings between all the original transactions faster without the strategy because it could consider all of them simultaneously without facing a high number of possible combinations. We also note that the strategy is not fully parallelizable, since the threads of bigger size combinations need to wait for all the threads of smaller size combinations to finish in order to start. Therefore, the analysis time for a given combination size is bounded by the analysis time of the slowest thread that is analyzing a combination of that size. | The advent of microservices has led multiple companies to migrate their
monolithic systems to this new architecture. When decomposing a monolith, a
functionality previously implemented as a transaction may need to be
implemented as a set of independent sub-transactions, possibly executed by
multiple microservices. The concurrent execution of decomposed functionalities
may interleave in ways that were impossible in the monolith, paving the way for
anomalies to emerge. The anomalies that may occur critically depend on how the
monolith is decomposed. The ability to assess, at design time, the anomalies
that different decompositions may generate is key to guide the programmers in
finding the most appropriate decomposition that matches their goals. This paper
introduces MAD, the first framework for automatically detecting anomalies that
are introduced by a given decomposition of a monolith into microservices. MAD
operates by encoding non-serializable executions of the original
functionalities as an SMT formula and then using a solver to find satisfiable
assignments that capture the anomalous interleavings made possible by that
specific decomposition. We have applied MAD to different benchmarks and show
that it can identify precisely the causes of potential anomalous behavior for
different decompositions. | [
"cs.SE"
] |
# 1 Introduction
The rapid development of video generation models has driven the continuous growth of the demand for high-fidelity and high-resolution content in fields such as film production, immersive media, and interactive entertainment [20]. However, the performance of text-to-video (T2V) models is severely limited by the quality of training data, especially regarding visual resolution, temporal consistency, and fine-grained semantic alignment with text descriptions. Although existing large-scale T2V datasets are abundant in quantity, they mainly focus on medium and low-resolution content (such as $7 2 0 \mathrm { p }$ ) and simple captions, failing to meet the requirements for generating Ultra-High Definition (UHD) videos (4K/8K) with sharp details, rich textures, and precise semantic control [8, 36].
High-resolution video generation faces two core challenges. Firstly, resolution scalability: Models trained on low-resolution data generally struggle to generalize to UHD scenarios, and issues such as artifacts, blurriness, and inconsistent content are likely to occur when extrapolating to higher resolutions. As shown in Fig. 2, when the Wan-T2V model is directly applied to a 4K resolution without specialized training, the generation quality significantly deteriorates. Secondly, semantic granularity: Precise control over visual attributes (such as camera motion, lighting, style) requires structured captions that explicitly describe the scene semantics. However, most datasets lack comprehensive annotations that can guide the generation of such details.
To fill these gaps, we propose UltraVideo, a high-quality, open-source UHD-4K/8K T2V dataset designed to enhance the technical level of high-resolution video generation. This dataset contains 42K short videos ( $3 { \sim } 1 0$ seconds) and 17K long videos ${ \displaystyle \geq } 1 0$ seconds). It is the first public dataset that gives priority to native UHD resolution and structured captions, which include 10 types of semantic tags (such as shot type, lighting, video atmosphere), with an average of 824 detailed words per video. The high quality of UltraVideo benefits from a four-stage data curation process: 1) Diverse clip collection: Screen videos with a resolution of $\mathtt { \ge 4 K }$ and a frame rate of up to 60FPS from YouTube, and exclude low-quality content through manual quality inspection (Sec. 2.1). 2) Statistical filtering: Remove videos with excessive text, black borders, abnormal exposure, or low saturation to ensure the purity of visual inputs (Sec. 2.2). 3) Model-based data purification: Utilize a large multimodal model (Qwen2.5-VL-72B [2]) to detect low-quality attributes (watermarks, captions) and quantify aesthetic and motion consistency to further refine the dataset. 4) Comprehensive Structured Caption: Use an open-source MLLM (Qwen2.5-VL-72B [2]) to automatically generate nine categories of detailed captions, supporting fine-grained semantic control during the training process (Sec. 2.4), and further use an LLM to generate detailed descriptions. To verify the effectiveness of UltraVideo, we extend the Wan-T2V model to UltraWan-1K/-4K, which is capable of natively generating high-quality 1K and 4K videos and improves text controllability. By optimizing the training strategy, it achieves advanced performance in UHD generation tasks, and still performs excellently even with a moderate dataset size (42K samples). In summary, our contributions are threefold:
1) To support the increasingly developing high-resolution video generation applications and bridge the gap between academic and large corporate data, we curate a high-quality UHD UltraVideo dataset, focusing on fine-tuning fundamental high-resolution video generation models with fine-grained structured captions.
2) With manually filtered video sources, we propose a sophisticated automated data processing pipeline, which includes high-quality data collection, filtering, and fine-grained structured captions. 3) Based on Wan-T2V-1.3B, we have improved the high-resolution generation architecture UltraWan and proposed a caption sampling strategy. Through fine-tuning with LoRA plugins, it can support the generation of videos with native UHD resolution. The results of evaluations by VBench and human assessments have demonstrated its superiority.
Inthemodernmetropolis,skyscraperstoweringintothecloudsglistenwithgoldenlightunderthesetingsunBusystreetsarefilled witha constant streamof vehicles,droneshover orderlyintheair,andintelligent robotsonthe groundaredelivering packages.
Figure 2: Wan-T2V-1.3B [34] shows a significant decline in visual quality and semantic consistency as the resolution increases, and it fails to generate high-resolution videos without.
# 2 Curating UltraVideo Dataset
Recent T2V datasets emphasize the quantity of videos (million-level 720p videos) with detailed captions that can support the pre-training of video models. In contrast, we mainly focus on the quality of the UHD video dataset we construct for high-quality model fine-tuning, i.e., high-quality image quality, high-resolution frames, and comprehensive captions. Considering that mainstream video generation models only support video generation for a few seconds, for example, HunyuanVideo [15] supports a maximum of 129 frames and Wan [34] supports 81 frames. This paper mainly focuses on the construction and evaluation of short videos. Of course, we also open-source the affiliated long videos for the increasingly popular long video duration generation with the same processing flow. Fig. 3 intuitively outlines the specific data curation pipeline, which contains four steps: 1) Video Clips Collection (Sec. 2.1). 2) Statistical Data Filtering (Sec. 2.2). 3) Model-based data purification (Sec. 2.3). 4) Comprehensive Structured Caption (Sec. 2.4).
# 2.1 Video Clips Collection
UHD-4K/8K video source. Most of the recent popular text-to-video datasets are directly or indirectly sourced from the HD-VILA-100M dataset [8, 20, 31, 36], while MiraData [14] has collected 173K video clips from 156 selected high-quality YouTube channels. We believe that for a high-quality video dataset, strict control should be exercised at the source of collection, which can strictly limit the number of videos entering the filtering process. The benefits of this approach are obvious. It can reduce the computational power and storage pressure during the screening process. At the same time, it can reduce the proportion of low-quality data and improve the quality of the final dataset. To this end, we have decided to use the 4K/8K video pool on YouTube as the sole source. The selected videos consist of two parts: 1) First, from the filtered Koala-36M [36] dataset, a subset is obtained by screening based on resolution (greater than 4K), frame rate (higher than 25FPS), and duration (longer than 30 seconds), and videos that users are not interested in are screened out through meta user behavior information (views, likes, and comments). Furthermore, by calculating the similarity between the video titles and descriptions and the pre-classified video themes, the highest-quality videos of each category are uniformly sampled and duplicates are removed. 2) We use large language models (LLMs) to generate some relevant recommended search keywords according to 108 themes, and manually search for the latest 4K/8K videos related to these themes. Eventually, we obtain 5K original videos, with lengths ranging from 1 minute to 2 hours. And we conduct a secondary manual review of these videos to ensure as much as possible that there are no problems such as low quality, blurriness, watermarks, and jitter to obtain high-quality original videos.
Video theme. The theme diversity of videos is crucial for the training effect of video models. Therefore, we conducted a noun statistics on the captions of Koala-36M. The results were processed by a large language model (LLM), and finally, through manual post-modification and confirmation, we obtained seven major themes (108 topics), namely: i) video scene, ii) subject, iii) action, iv) time event, v) camera motion, vi) video genres, and vii) emotion. Fig. 4 has statistically analyzed the proportion of clips for different topics under each theme. It can be seen that our UltraVideo contains diverse themes.
Scene splitting. We use the popular PySceneDetect [5] to segment the original video into clips. Specifically, a two-pass AdaptiveDetector detector is employed, which applies a rolling average to help reduce false detections in scenarios like camera movement. In addition, we found that this detector might overlook videos with dissolve transitions. Therefore, we use DINOv2 to calculate the feature similarity for the first and last 5 frames of each video to further filter the videos.
① video Clips Collection 》 Caption (Sec. 3.4) HUD-4K/8K > original Videos Text Detection Video Aesthetic Score AA 回 vidvrLs-urce 《 VTSS Panda-70M Scene Splitting Black Boarder Detection Temporal Motion Score Koala-36M FrameNumber RAFT Theme-centric Filtering Overexposure Detection Video-Caption Consistency escr User Search 《 videoCLIP-XL ② Detailed Descript Qwen2.5-VL-72B 7 categories Clip Set Underyngsueteatt Attributes udgment Lahtrarset Ultravideo Summarized Description
Frame number filtering. Mainstream video generation models only support video generation for a few seconds. For example, HunyuanVideo [15] supports a maximum of $7 2 0 { \times } 1 2 8 0$ resolution with 129 frames, while Wan [34] supports $7 2 0 { \times } 1 2 8 0$ resolution with 81 frames, and the average video length of most video datasets is less than ten seconds. However, there has been a recent trend in long video generation research. For instance, MiraData [14] focuses on long duration video generation. Taking the above two points into account, we first filter videos with a time length between 3 seconds and 10 seconds as the short video set, and videos with a frame duration of more than 10 seconds are regarded as the long video set to support future research related to long videos (this setting will not be discussed in detail in this paper). To further expand the number of short videos, for long videos with a length of less than 60 seconds, we take the middle 10 seconds as short videos, and for videos longer than 60 seconds, we additionally take 10 seconds of video from both sides as short videos. Finally, we obtained 62K short videos with a duration of 3 seconds to 10 seconds and 25K long videos with a duration of 10 seconds or longer.
# 2.2 Statistical Data Filtering
At the statistical level, we conduct a secondary strict filtering of the videos by setting a mean threshold.
Text detection. Text inevitably appears in different time intervals of the original video. Large areas of text usually include subtitles, logos, and other markings. An excessively high proportion of such text can have a negative impact on model training. We use PaddleOCR [22] to detect text in each frame and calculate the proportion of the union area of the minimum bounding rectangles of all detected text within the frame to the total image area. If this proportion exceeds a strict threshold of $2 \%$ , the frame is considered problematic. Finally, we calculate the ratio of problematic frames to the total number of frames and rigorously exclude videos with a ratio higher than $5 \%$ .
Black border detection. Black borders often appear in movies and user-edited videos. We calculate the mean value of the rectangular area that extends from the four sides towards the middle by $3 \%$ . If the calculated value is lower than 3, the frame is regarded as an abnormal frame. Finally, we calculate the proportion of the number of problematic frames to the total number of frames, and if it is higher than $5 \%$ , the video will be excluded.
Exposure detection. Overexposure and underexposure greatly affect the video image quality. Taking 5 as the threshold, we calculate the proportion of pixels that are higher than 250 and lower than 5 for each frame. If the proportion is higher than $12 \%$ , the frame is considered to have a problem. We remove videos with more than $5 \%$ of bad frames.
Graying detection. Images that are grayish or have low saturation often give people an unpleasant visual experience. We calculate the variance of RGB values at each position and then take the average value for the entire image. If this average value is lower than 1.2, the frame is considered to have a problem. Similarly, if the proportion of such frames in the whole video is higher than $5 \%$ , the video will be removed. At this stage, we obtained 46K short videos with a duration of 3 seconds to 10 seconds and 19K long videos with a duration of 10 seconds or longer.
# 2.3 Model-Based Data Purification
We further conduct a third strict filtering of the videos at the high-level model layer.
Video aesthetic score. The Video Training Suitability Score (VTSS) proposed in Koala-36M [36] integrates multiple pieces of manually labeled information regarding dynamic and static qualities, which enables a comprehensive evaluation of the quality of each video. We extract the native vtss score for each video (scaled within the range from -0.0575 to 0.0728) and filter out the data with a vtss score less than 0.01.
Temporal motion score. For model training, videos with subjects or camera movements that are either too slow (static frames lacking motion information) or too fast (unstable shots causing blurriness) are not ideal. Therefore, we use RAFT [33] to sample the motion relationships between temporal frames at intervals. After calculating the global average, we filter the data to retain values between 0.1 and 100.
Video-caption consistency. After obtaining the summarized caption of the video according to Sec. 2.4, we use VideoCLIP-XL-v2 [35] to get the similarity scores of all pairs, and filter the data with lower caption similarity by setting a threshold of 0.2.
MLLM-assisted attribute judgment. Before archiving the final data, we use Qwen2.5-VL-72B [2] to output binary judgments of low-quality attributes for each video. These attributes include 16 types such as Transition Effects, Watermarks, Split Screens, Screen Recordings, Picture-in-Picture, etc. If any of these low-quality attributes are detected, the corresponding video will be deleted.
Considering that we have already filtered out the low-quality data during the video collection process, and after the above statistical and model-based filtering procedures, the quality of the clips in the UltraVideo can be greatly ensured. Finally, we obtained 42K short videos in $3 \mathrm { s } \mathrm { \sim } 1 0 \mathrm { s }$ and $1 7 \mathrm { K } \geq 1 0 \mathrm { s }$ long videos.
# 2.4 Comprehensive Structured Caption
Detailed captions are of great importance for fine-grained controllable video generation, which is widely recognized [14, 36]. However, most current datasets focus more on the quantity of videos with simple captions. We fully utilize the capabilities of open-source foundation (M)LLMs to automatically construct comprehensive and high-quality structured captions.
Structured description. To achieve high-quality video generation, some recent datasets have attempted to generate structured captions to provide better text-video consistency. Typically, Miradata [14] combines 8 evenly selected frames into a $2 { \times } 4$ image, and together with the "short" hint from Panda-70M, it is fed into GPT-4V to generate a "dense caption", and then, under carefully designed prompts, an additional 4 types of structured descriptions are obtained in a single dialogue turn. The recent Koala-36M [36] uses GPT-4V to generate structured video captions for fine-tuning the LLaVA caption model, which is used to generate captions containing 6 types of structured information with an average of 202.3 words per video. Different from the above solutions that use the closed-source GPT-4V, we propose a structured captioning solution based on the open-source Qwen2.5-VL-72B [2], which can be easily ported for local deployment and continuously enhance its capabilities as opensource community models are updated. Specifically, it includes 9 categories: 1) Brief Description. 2) Detailed Description. 3) Background. 4) Theme Description. 5) Style. 6) Shot Type. 7) Camera Movement. 8) Lighting. 9) Video Atmosphere. Fig. 4 and Fig. A1 show the distribution of each type of caption, from which it can be seen that our caption system is able to generate more fine-grained descriptions for text-to-video training.
LLM-based caption summarization. Different structured captions may potentially have different preferences due to variations in prompts during their construction. Therefore, based on the opensource Qwen3-4B [32], we integrate the above sub-captions to obtain a summarized description, which serves as one of the additional text prompt options.
# 2.5 Statistical Comparison and Analysis
Comparison with popular video-text datasets. Tab. 1 compares the properties of different popular T2V datasets. Our UltraVideo is the first to push T2V data to UHD-4K/-8K resolution and features more comprehensive structured captions for model fine-tuning. This dataset prioritizes higher visual quality over quantity, yet its volume of 42K samples still represents a substantial scale.
Table 1: Comparison popular text-to-video datasets. Our UltraVideo is a high-resolution and highquality premium T2V dataset, featuring comprehensive structured captions with a significantly longer average caption length. In addition to the main short version ranging from 3 seconds to 10 seconds, we also list the derived long version $( ^ { \ast } ; )$ that exceeds 10 seconds for potential future research.
Resolution vs. FPS. UltraVideo provides the native video resolution and frame rate, potentially supporting future research such as video frame interpolation. Tab. 2 demonstrates the distribution.
Numerical Statistics from Multiple Perspectives. Fig. 4 displays the statistical information of UltraVideo from multiple perspectives to better help users achieve a more detailed understanding. (a) As described in Sec. 2.1, we confirmed seven major themes with diverse topics with the assistance of LLM. The upper-left corner shows a diverse distribution that can promote more generalizable T2V learning. (b) After strict screening in Sec. 2.3, each evaluation model scores at a high level, ensuring the high quality of the dataset. However, users can still further filter based on these scores for stricter criteria. (c) The distribution of video duration and total frame count in short and long video sets. (d) The length distributions of typical "Brief Description", "Detailed Description", "Summarized Description", and the aggregated captions. Structured and detailed captions help improve the capability of fine-grained controllable video consistency. (e) An intuitive word cloud to visualize the captions.
Table 2: Resolution vs. FPS statistics.
Analysis of non-compliance. We selected the recent Koala-36M [36] for a video quality comparison. We randomly sampled 1000 videos each and had five different people evaluate them (we defaulted to using short videos). A video was considered a "bad video" if it had any of the following issues: Subtitles, Abnormal Color Patches, Green Screen, Blue Screen, Transition Effects, Watermarks, Stickers, Borders, Split Screens, Screen Recordings, Picture-in-Picture, Still Video, Blurred Video, Scrambled Video, and Solid-Color Backgrounds. Since the UltraVideo inherently has a high resolution above 4K and high image quality, we informed each subject to ignore this factor when making judgments about the results. In the end, the UltraVideo had a failure rate of $2 . 3 \%$ , which is significantly lower than the $4 1 . 5 \%$ failure rate of the popular Koala-36M. This proves the effectiveness of our curation process and implies that the UltraVideo is undoubtedly the current "quality champion" in the video community.
# 3 UltraWan: Stand on the Shoulders of Giants
Based on the UltraVideo dataset, we explored the attempt of generating natively high-resolution videos, and specifically conducted fine-tuning experiments using Wan-T2V-1.3B [34] in this paper. We were surprised to find that just 42K exceptionally high-quality videos with comprehensive text are sufficient to have a significant impact on the aesthetics and resolution of the generated videos. Since we only use LoRA for fine-tuning without involving modifications to the model structure, the relevant experience can be easily transferred to other T2V models for the open-source community.
Figure 4: Statistical distributions of our UltraVideo from different perspectives.
# 3.1 Resolution Scaling of Wan.
Powerless extrapolation. Benefiting from the relative position encoding and rotational invariance of RoPE, the DiT-based Wan has a certain degree of variable resolution inference capability. However, when we directly perform extrapolation on the native Wan-T2V-1.3B for 1K and 4K resolutions, we find that the performance deteriorates significantly or even becomes ineffective, as shown in Fig. 2. Therefore, the high-resolution inference capability requires model parameters that are adaptable, which has triggered our exploration of scaling the Wan model.
Structural configures for UltraWan-1K and UltraWan-4K. For high-resolution T2V generation, the memory calculation amount of the model will increase significantly. Therefore, we use the smaller
Table 3: Model configures.
Wan-T2V-1.3B to conduct experiments with H20 GPUs. Specifically, our UltraWan-1K maintains an output of 81 frames, while UltraWan-4K reduces the number of output frames to 29 to ensure that a single sample can fit on a single GPU card. Tensor parallel is not used and the batch size per GPU is 1, and the GPU memory usage during training and inference is shown in Tab. 3.
# 3.2 Training Scheme.
Random caption sampling strategy. To make full use of comprehensive structured captions for fine-grained prompt control, we propose a random caption sampling strategy. Specifically, with a probability of 1/3, we select from i) Brief Description, ii) Detailed Description, and iii) Summarized Description. If either the Brief Description or the Detailed Description is sampled, we then randomly select one caption from the remaining 7 categories mentioned in Sec. 2.4 for supplementation, which serves as the final prompt fed into the model.
Sub-clip sampling. For each video, we uniformly sample an average number of frames from the middle to both sides according to the number of training frames to ensure the consistency between the sub-clip and the caption. In the experiment, we keep the native FPS of the video and perform sampling without intervals.
Memory-efficient HDR plugins of Wan-1K/-4K LoRA. Considering the computational power and memory requirements for fine-tuning, we use LoRA for parameter-efficient fine-tuning. The rank is set to 64/16 for UltraWan-1K/UltraWan-4K, and the modules affected are QKV in the self-attention and the output linear layer, as well as the first and third linear layers in the feedforward network.
Table 4: VBench evaluation results per dimension. ∗: Videos are downsampled to 1K to avoid OOM.
Figure 5: Intuitive results with the prompt in VBench [13]. Enlarged for better visual effects.
Hyperparameter setting. We use AdamW [17] with betas $\ c =$ (0.9, 0.999), weight_decay $\ c =$ 1e-2, and learning_rate $\ c =$ 1e-4. Both UltraWan-1K and UltraWan-4K are trained for one epoch.
# 4 Experiments
Limited by the significant increase in computational power and video memory caused by high resolution, this paper only conducts experiments on the small-scale Wan-T2V-1.3B [34] to: 1) propose and implement the training of native 1K/4K T2V models for the first time; 2) demonstrate the high-quality effectiveness of the dataset.
Comparison results for high-resolution video generation. Limited by the slower inference caused by increased computational power for high resolution, we randomly sample one-tenth $( \simeq 9 6 )$ of the prompts from VBench [13] for testing. As shown in Tab. 4, we compare five models: $i )$ official Wan-T2V-1.3B with $4 8 0 \times 8 3 2$ resolution. ii) increasing the resolution to 1K ( $1 0 8 8 \times 1 9 2 0 _ { , }$ ). iii) 1K full finetuning. iv) 1K LoRA PEFT. v) 4K LoRA PEFT. The following conclusions can be drawn from the results: 1) Scaling the official model to 1K leads to a significant decline in performance. 2) The full-parameters training based on UltraWan-1K has significantly improved generation at 1K resolution, but differences in training hyperparameters (such as batch size and prompts) from the native model may cause its results to be slightly worse overall than the LoRA model based on UltraWan-1K. Considering training costs, we recommend using the LoRA-based UltraWan-1K scheme. 3) The higher UltraWan-4K model performs better in indicators related to image quality and temporal stability, but its lower frame rate (inference uses 33 frames to ensure the time exceeds 1s) causes some indicators to be worse compared to UltraWan-1K. Fig. 5 shows the qualitative effect comparison. The official Wan-T2V-1.3B cannot directly generate high-resolution 1K videos, while our UltraWan is capable of handling semantically consistent 1K/4K generation tasks.
Close-up shots capture a bumblebees nectar collection on A hiker navigate a rocky river with stepping stones in steep, flowers in a garden or meadow with blurred background . sparse mountain slopes
t=1 t=81 t=1 福 t=81
Tracking shots follow a silver fish swimming neara sandy seabed Hikers demonstrate teamwork and resilience during a challenging
withoccasional zooms,ina underwater scene.... mountainriver crossing
t=1 t=81 t=1 t=81 棋里 小 L
Table 5: Human preferences.
Human study for 1K video generation. To demonstrate the effectiveness of the proposed UltraWan, we conducted a human preference experiment. Specifically, we used the videos generated from the aforementioned VBench test subset as test samples and built a visual interface using streamlit [30] to ask 10 subjects about their preferences across three dimensions: video quality aesthetics, temporal stability, and text consistency. As shown in Fig. 2, the official Wan-T2V-1.3B struggles to maintain content quality when generating 1K videos, so it retains the officially recommended $4 8 0 \times 8 3 2$ resolution output. As shown in Tab. 5, thanks to the high-resolution fine-tuning on the high-quality UltraVideo, UltraWan-1K has a significant advantage in video quality aesthetics, while showing similar tendencies in temporal stability and text consistency.
Semantic consistency with fine-grained captions. Thanks to the structured captions in UltraVideo during training, our UltraWan exhibits stronger semantic consistency, as shown in Fig. 6. | The quality of the video dataset (image quality, resolution, and fine-grained
caption) greatly influences the performance of the video generation model. The
growing demand for video applications sets higher requirements for high-quality
video generation models. For example, the generation of movie-level Ultra-High
Definition (UHD) videos and the creation of 4K short video content. However,
the existing public datasets cannot support related research and applications.
In this paper, we first propose a high-quality open-sourced UHD-4K (22.4\% of
which are 8K) text-to-video dataset named UltraVideo, which contains a wide
range of topics (more than 100 kinds), and each video has 9 structured captions
with one summarized caption (average of 824 words). Specifically, we carefully
design a highly automated curation process with four stages to obtain the final
high-quality dataset: \textit{i)} collection of diverse and high-quality video
clips. \textit{ii)} statistical data filtering. \textit{iii)} model-based data
purification. \textit{iv)} generation of comprehensive, structured captions. In
addition, we expand Wan to UltraWan-1K/-4K, which can natively generate
high-quality 1K/4K videos with more consistent text controllability,
demonstrating the effectiveness of our data curation.We believe that this work
can make a significant contribution to future research on UHD video generation.
UltraVideo dataset and UltraWan models are available at
https://xzc-zju.github.io/projects/UltraVideo. | [
"cs.CV"
] |
# 1. Introduction
Large Language Models (LLMs) have demonstrated exceptional performance in general-purpose applications and have achieved notable success in vertical domains, such as finance, healthcare, law, and scientific research, by providing precise domain knowledge and specialized text generation (Li et al., 2023; Ren et al., 2025; Lin et al., 2025; Li et al., 2024). However, the development of such verticaldomain models typically depends on large-scale annotated datasets and substantial computational resources (Wu et al., 2023; Gururangan et al., 2020). This dependency results in prolonged development cycles and slow iteration rates, making it challenging to meet the rapidly evolving demands of real-world applications.
Figure 1. Comparison of Traditional Model Updating vs. Collaborative User-driven Updating. (a) Traditional model updates rely on externally provided training data; (b) Collaborative updates leverage user contributions to refine the model.
Traditional domain adaptation methods have significant drawbacks. Supervised fine-tuning requires expensive annotation efforts and large labeled corpora, while retrievalaugmented generation (RAG) (Siriwardhana et al., 2023) leverages external knowledge sources but depends on comprehensive, up-to-date knowledge bases (Zhang et al., 2024; Zheng et al., 2024; Susnjak et al., 2025). These approaches rely heavily on external, annotated inputs – data acquisition and labeling thus remain time-consuming and costly, and the limited volume of annotated domain examples constrains the benefits of scaling laws as model capacity grows.
Given these challenges, we explore the possibility of developing high-quality vertical-domain models without relying on manually annotated external data. Collaborative knowledge platforms, such as wiki projects, continuously evolve through user contributions, revealing the potential of crowdsourced expertise as an ongoing learning signal (Ebersbach et al., 2008). This suggests that user–model conversations can serve as the universal interface to both utilize and contribute to the model, transforming them into a valuable internal source of domain knowledge.The concept difference is shown in Figure 1. Although a single user session may combine seeking assistance and contributing, and lack the structure and quality of traditional annotations, these
# Collaborative Editable Model
Stage 1: User-agent multi-round conversation User C: What is farming? Stage 2: Evaluated the cited knowledge Agent: Farming, a part of agriculture, involves growing Higher value knowledge crops and rearing animals for food and raw materials. It Attribution Cite $\mathrm { A } = 0 . 7$ User B began thousands of year
s ago, likely in the Fertile Crescent Model Cite $\mathrm { B } = 0 . 4$ (Cite A), and led to the Neolithic Revolution as people General transitioned from nomadic hunting to settled farming. This Model allowed for a significant increase in human population Stage 3: Extract user contributed knowledge User A (Cite B). User C: Yes. It most likely first in the Fertile Crescent in Farming arose independently in regions such as the Yangtze and Yellow River basins in China, New the Middle East, and then arose independently in regions Extractor such as the Yangtze and Yellow River basins in China, New Guinea, the Caucasus, and Mesoamerica after first User C Guinea, the Caucasus, and Mesoamerica beginning in the Fertile Crescent. ratin New knowledge!
conversations inherently encode valuable insights shared during interactions (Chen et al., 2024). These insights can be extracted, evaluated, and leveraged to improve the model. However, current LLMs treat each conversation independently, without a mechanism for reintegrating conversational data into subsequent learning.
To address this gap, we present the Collaborative Editable Model (CoEM), a framework designed to enable the continuous improvement of vertical-domain models through user–model interactions. CoEM aggregates domain-relevant information fragments from user-contributed snippets and conversation exchanges, creating a continuously evolving knowledge pool. Users then engage in iterative dialogue sessions with the LLM, providing ratings on each generated response or insight. As the user ratings are for the whole responses, we apply attribution analysis to measure each input fragment’s contribution to overall performance, automatically identifying high-value knowledge. The fragments with high value after several iterations will be dynamically updated into the model, allowing for rapid domain adaptation without the need for extensive fine-tuning or external data. At the same time, the knowledge pool is still updating via extracting new information from user-model dialogues. The overview of the CoEM workflow is shown in Figure 2.
CoEM offers several key advantages. First, it is costefficient, eliminating the need for costly, manually annotated data by relying on user-generated content and interactions, which significantly reduces data acquisition costs. Second, CoEM is user-driven and responsive to feedback. End users can identify issues or gaps in the model’s performance and provide direct feedback, enabling real-time adjustments and ensuring that the system remains relevant and up-to-date. Finally, CoEM is parallelized and scalable. As more users engage with the system, the model benefits from an increasing volume of valuable contributions, supporting rapid updates and continuous improvement.
In this paper, we apply CoEM to the financial domain for initial validation. We collect a set of financial news articles to construct the initial knowledge pool. A general-purpose LLM is used to generate summaries with insights for several relevant news pieces. These summaries are then presented to users for feedback. By applying attribution analysis to the user ratings, we assess the value of each knowledge fragment based on its contribution to the generated insights. These value scores are then compared to those obtained from a state-of-the-art (SOTA) financial LLM (Liu et al., 2023) to validate the reliability of our data and the effectiveness of our method. Through this process, we demonstrate CoEM’s ability to incorporate valuable user-generated knowledge into the model’s context, facilitating rapid domain-specific refinement.
Our contributions are summarized as follows:
• We introduce the CoEM, a framework that enables the continuous improvement of vertical-domain models through user–model interactions, eliminating the need for large-scale annotated data. • Through the innovative contribution attribution mechanism, we address the technical challenges of extracting valuable knowledge from unstructured user dialogues, enabling rapid domain adaptation for the model. • The effectiveness of the CoEM framework is initially validated through experiments in the financial news domain, with future work focusing on enhancing the model’s ability to learn from user contributions through model editing or reinforcement learning.
# 2. Related Work
# 2.1. Domain Adaptation for LLMs
Early efforts in vertical-domain adaptation have predominantly relied on large-scale supervised fine-tuning. Researchers typically continue pre-training or fine-tuning a general-purpose LLM on domain-specific corpora to achieve specialized performance (Gu et al., 2021; Que et al., 2024); however, this approach still demands substantial annotated data and computational resources. To reduce these costs, recent work has introduced lightweight techniques, such as low-rank adaptation (LoRA) (Hu et al., 2022; Mao et al., 2025), adapter modules (Li et al., 2022), and prompt tuning (Ge et al., 2023; Bai et al., 2024; Fahes et al., 2023), which update only a small set of additional parameters or employ plug-and-play adaptation layers, leaving most pretrained weights untouched, and enabling rapid convergence in new domains. Retrieval-augmented generation frameworks (Guu et al., 2020) further enhance domain performance by integrating external knowledge retrieval into the generation process, dynamically incorporating up-to-date documents for tasks like question answering and summarization, though they still face challenges in retriever maintenance and latency.
# 2.2. Interactive Editable Models
As human–machine collaboration paradigms evolve, editable and updatable language models have emerged as a key research direction. One strand of work employs reinforcement learning from human feedback (RLHF), using explicit user ratings or click behaviors to refine the model’s generative policies and improve alignment. (Bai et al., 2022; Kaufmann et al., 2023) Another focuses on fragment-level knowledge injection (Czekalski & Watson, 2024; Song et al., 2025). These methods identify “high-value” text fragments that most strongly influence outputs and prioritize their reuse. At the same time, crowdsourced snippets provide a flexible and rich source of domain knowledge, and by borrowing ideas from collaborative filtering, preferences can propagate across multiple agents to enable multi-directional preference diffusion. Although existing editable-model research has demonstrated local weight modifications and fact updates, it has yet to integrate explicit user feedback, fragment selection, and multi-agent collaborative learning into a unified framework. (Casper et al., 2023) In contrast, CoEM orchestrates multi-turn user–model dialogues with rating feedback, enabling lightweight, sustainable domain iteration and real-time knowledge updating.
# 3. Method
# 3.1. Problem Definition
We begin with two models: a pretrained model $\mathcal { M } _ { p }$ , which is a general-purpose model without domain-specific adaptation, and a vertical domain model $\mathcal { M } _ { v }$ , which represents a model tailored to a specific vertical domain. $\mathcal { M } _ { p }$ predicts the distribution $P _ { p }$ of tokens based on a given sequence s:
$$
\begin{array} { r } { \mathcal { M } _ { p } ( \mathbf { s } ) = P _ { p } ( t | \mathbf { s } ) } \end{array}
$$
While $\mathcal { M } _ { v }$ predicts the domain-specific distribution $P _ { v }$ based on the same sequence:
$$
\mathcal { M } _ { v } ( \mathbf { s } ) = P _ { v } ( t | \mathbf { s } )
$$
As we want to get a domain-specific model efficiently from the general model, our objective is to minimize the difference between the predictions of these two models by aligning $\mathcal { M } _ { p }$ with $\mathcal { M } _ { v }$ :
$$
\mathcal { L } = \operatorname* { m i n } _ { \mathbf { s } } | P _ { p } ( t | \mathbf { s } ) - P _ { v } ( t | \mathbf { s } ) |
$$
In traditional fine-tuning approaches, $P _ { v }$ is explicit, requiring annotated training data to guide the model. However, in our approach, $P _ { v }$ is implicit, as we do not rely on manually annotated data or pre-built knowledge bases. Instead, we rely on the vast amount of user feedback and contributions to guide the model’s learning direction.
# 3.2. User Contribution Modeling
In our framework, user contributions are categorized into two complementary forms: user feedback and user knowledge. Specifically, for model-user dialogue session d, User feedback contribution quantifies the user’s evaluation of the model’s output quality and domain relevance, typically expressed as a scalar score $r \in [ 0 , 1 ]$ . This rating indicates the relevance and usefulness of the generated content with respect to the target domain. This feedback score serves as an indicator of the effectiveness of the supporting knowledge. User knowledge contribution represents the domainrelevant information fragments extracted from the user’s input during the session, each associated with a value score $v _ { i } \in [ 0 , 1 ]$ .
Assumption 3.1. We assume that the user feedback score $r \in [ 0 , 1 ]$ serves as a soft reinforcement learning reward signal for a model output $\mathbf { d } _ { m }$ , reflecting the similarity between the predicted token distribution $P _ { p } ( t | \mathbf { s } )$ and the target vertical domain distribution $P _ { v } ( t | \mathbf { s } )$ . Specifically,
$$
r = \mathcal { R } ( { \bf d } _ { m } ) \quad \mathrm { w i t h } \quad r = 1 \Longleftrightarrow P _ { p } ( t | { \bf s } ) = P _ { v } ( t | { \bf s } ) ,
$$
Moreover, for any $r _ { 1 } > r _ { 2 }$ , it holds that
$$
\rho \big ( P _ { p } ^ { ( r _ { 1 } ) } ( t | \mathbf { s } ) , P _ { v } ( t | \mathbf { s } ) \big ) < \rho \big ( P _ { p } ^ { ( r _ { 2 } ) } ( t | \mathbf { s } ) , P _ { v } ( t | \mathbf { s } ) \big ) ,
$$
where $\mathcal { R } ( \cdot )$ denotes the reward function induced by user feedback, and $\rho ( \cdot , \cdot )$ denotes a distance metric between distributions. Thus, higher feedback corresponds to model outputs closer to the target domain distribution.
The overall objective is to guide model adaptation by aligning the weighted sum of user knowledge contributions with the vertical domain distribution:
$$
P _ { v } ( t | \mathbf { s } ) = \sum _ { i } v _ { i } P _ { i } ( t | \mathbf { s } )
$$
This formalization enables the framework to utilize usergenerated content and real-time feedback as continuous learning signals. Over successive interactions, the model incrementally evolves to better capture domain-specific knowledge, eliminating the need for extensive manually annotated data and facilitating rapid adaptation within vertical domains.
# 3.3. CoEM Knowledge Pool
Consider a multi-round dialogue session $\mathbf { d }$ between the model and the user. During this session, the model generated output ${ \bf d } _ { m }$ is supported by a collection of domain-specific knowledge fragments $\{ k _ { 1 } , k _ { 2 } , \ldots \}$ retrieved from the current knowledge pool $\mathbf { K }$ . Each knowledge fragment $k _ { i }$ is associated with a value score $\boldsymbol { v } _ { i }$ , representing its estimated relevance and utility within the vertical domain.
After receiving the model’s response, the user provides the feedback score $r \in [ 0 , 1 ]$ as introduced in Section 3.2. To quantify the contribution of individual knowledge fragments to the overall feedback, an attributor $\mathcal { A }$ is employed to assign a contribution weight $p _ { i }$ to each fragment $k _ { i }$ . The adjusted contribution value for fragment $k _ { i }$ in the current session is then computed as
$$
v _ { i } ^ { \prime } = p _ { i } \cdot r .
$$
Treating each dialogue session as an iteration, the value scores of the knowledge fragments are updated according to an exponential moving average scheme (Lucas & Saccucci, 1990):
$$
v _ { i } \gets ( 1 - \alpha ) v _ { i } + \alpha v _ { i } ^ { \prime } ,
$$
Where $\alpha \in ( 0 , 1 )$ denotes the learning rate that controls the magnitude of the update. Knowledge fragments whose value scores fall below a predefined threshold $\theta$ after $n$ iterations are pruned from the knowledge pool, as they are considered insufficiently relevant and useful to the domain.
Concurrently, the user’s input during the dialogue session, denoted $\mathbf { d } _ { u }$ , is processed by a knowledge extractor $\mathcal { E }$ to identify potential new domain-relevant knowledge fragments $\{ u _ { 1 } , u _ { 2 } , \ldots \}$ . These newly extracted fragments are incorporated into the knowledge pool $\mathbf { K }$ , thereby expanding the pool.
We initialize newly added knowledge fragments with a value score of 1, reflecting an optimistic initialization strategy (Lobel et al., 2022). This approach encourages the model to treat new knowledge as valuable initially, while the exponential moving average update bounds the scores below 1 and allows subsequent feedback to adjust them dynamically. Optimistic initialization is commonly used in reinforcement and online learning to promote exploration and avoid premature dismissal of novel information. The whole process of updating the CoEM knowledge pool is shown in Algorithm 1.
# Algorithm 1 CoEM Knowledge Pool Update
1: Input: General model $\mathcal { M } _ { p }$ , Knowledge pool $\overline { { \textbf { K } = } }$
$\{ ( k _ { i } , v _ { i } ) \}$ , dialogue session $\mathbf { d } = ( \mathbf { d } _ { m } , \mathbf { d } _ { u } )$ , learning
rate $\alpha$ , value threshold $\theta$ , attribution method $\mathcal { A }$ , knowl
edge extractor $\mathcal { E }$ .
2: Output: Updated knowledge pool K
3: $\mathbf { d } _ { m } \mathbf { \bar { \Omega } } - \mathbf { \mathcal { M } } _ { p } \mathbf { \bar { ( } } \hat { K } ) , \hat { K } \subseteq \mathbf { K }$
4: $r \mathcal { R } ( \mathbf { d } _ { m } ) , r \in [ 0 , 1 ]$ Get user feedback on $\mathbf { d } _ { m } \}$
5: Attributor:
6: $p _ { i } = \mathcal { A } ( \mathbf { d } _ { m } , k _ { i } ) , k _ { i } \in \hat { K }$
7: for $k _ { i } \in { \hat { K } }$ do
8: $v _ { i } ( 1 - \alpha ) \cdot v _ { i } + \alpha \cdot p _ { i } \cdot r$
9: end for
10: Extractor:
11: $u _ { j } = \mathcal { E } ( \mathbf { d } _ { u } )$
12: $v _ { u _ { j } } \gets 1 \quad \forall u _ { j }$ {Initialize value scores.}
13: $\mathbf { K } \mathbf { K } \cup \{ ( u _ { j } , v _ { u _ { j } } ) \}$
14: $\mathbf { K } \mathbf { K } \setminus \{ k _ { i } \mid v _ { i } < \theta \}$
15: Return updated knowledge pool K
Through this iterative process of feedback-driven value updating and continuous knowledge extraction, the knowledge pool is dynamically refined and enlarged. This mechanism enables the model to progressively enhance its domain expertise and improve response quality, all achieved through ongoing user interaction without reliance on extensive manual annotation.
# 3.4. Attribution Mechanism
In this section, we briefly introduce the design of our attribution mechanism. To quantitatively assess the value of knowledge $k _ { i }$ from the model generated text ${ \bf d } _ { m }$ and user feedback $r$ , CoEM employs an attribution function $\mathcal { A }$ defined as
$$
\mathcal { A } ( \mathbf { d } _ { m } , k _ { i } ) \in [ 0 , 1 ]
$$
which represents the strength of the causal influence of knowledge $k _ { i }$ on the generated response $\mathbf { d } _ { m }$ . Intuitively, this value measures how much the distribution of $\mathbf { d } _ { m }$ would differ if the model had not incorporated knowledge from $k _ { i }$
14000 12000 10000 GrBae 8000 6000 1 4000 2000 0 1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Score
Figure 3. Attribution score distribution weighted by user feedback.
Figure 4. Distribution of knowl- Figure 5. Distribution of knowledge fragment value scores after edge fragment value scores from iterative updates. baseline model.
Figure 6. Proportion of highvalue knowledge fragments under varying learning rates.
The attribution model satisfies natural boundary conditions:
$$
\mathcal { A } ( k _ { i } , k _ { i } ) = 1
$$
Notably, $\mathcal { A }$ operates without requiring domain-specific knowledge distribution $P _ { v }$ , ensuring broad applicability.
Considering a set of model outputs $\{ \mathbf { d } _ { m } ^ { i } \}$ generated during interactions and corresponding user feedback scores $\{ r _ { i } \} \in$ $[ 0 , 1 ]$ , the value score $\boldsymbol { v } _ { j }$ for each input knowledge $u _ { j }$ is computed as the expected weighted attribution:
$$
v _ { j } = \mathbb { E } _ { \mathbf { d } _ { m } ^ { i } } \left[ \mathcal { A } ( \mathbf { d } _ { m } ^ { i } , u _ { j } ) \cdot r _ { i } \right]
$$
This expectation aggregates the causal effect of $u _ { j }$ across multiple model responses, modulated by user feedback, yielding a statistically meaningful measure of the input’s overall contribution.
# 4. Experiment
In this work, we performed experiments in the financial domain as a starting point. We used the Gemini-2.0-flash-lite API to access Gemini (Team et al., 2023) as the general model. For the vertical domain model, we employed FinGPT (Liu et al., 2023) from Huggingface1 built based on LLaMa-3 (Grattafiori et al., 2024).
# 4.1. Data Construction
We first collected 2,578 knowledge fragments related to finance and cryptocurrency news from a variety of news websites and forums, including Yahoo Finance, MarketWatch.com, Cointelegraph.com, Crypto.news, and so on, in both English and Chinese. These fragments served as the initial knowledge pool for our model.
We then prompted the general model to generate summaries with viewpoints for 3 related fragments in the pool. These summaries were presented to users, who provided feedback in the form of “like” or “dislike” evaluations, which serve as user feedback contributions. The users involved in the feedback process all had relevant knowledge and backgrounds in the finance and cryptocurrency domains. We received a total of 15,040 feedback responses from approximately 120 users, with a ”like” to ”dislike” ratio of 10,696: 4,344. Importantly, all user data was anonymized, and no personal information was recorded, ensuring the privacy of the contributors.
Then we use another general model as the attributor. Given the model-generated summary with viewpoints and user feedback, the attributor model evaluates the contribution of each news article to the generated summary. The feedback score provided by the user is used to compute the contribution weight for each news article with Equation (6). As we only collect the user rating as likes or dislikes, we assign $r = 1$ to what the user likes and $- 1$ to what the user dislikes. We choose the learning rate $\alpha$ as 0.03.
# 4.2. Results
High-value knowledge The key step for CoEM is to distinguish which contributions are useful to the model. After the iterative updating of the value score, we regard the knowledge with high scores as valuable knowledge for the model to learning to become a vertical domain model. In the current dataset, after calculating with Equation (5), we got the value score for each iteration. The distribution is shown as Figure 3.
With the value score $\boldsymbol { v } ^ { \prime }$ for each iteration, we update the value score for each knowledge fragment in the knowledge pool; the distribution of the total value score is shown in Figure 4. Only $0 . 1 4 \%$ of the fragments didn’t get feedback from the users, with a learning rate of 0.03 and a threshold of 0.5, $7 6 . 1 1 \%$ of the fragments remain in the knowledge pool and are marked as high-value knowledge.
Comparison with Vertical Domain Model To illustrate the effectiveness of our user contribution strategy, we also used the vertical domain model to evaluate our knowledge and validate our Assumption 3.1. Specifically, we ask the vertical domain model to rate the relevance and usefulness of each knowledge fragment and compare the results with the value scores in our knowledge pool. The distribution is shown in Figure 5. From FinGPT, $7 3 . 6 2 \%$ of the fragments are marked as high-value knowledge. Among all the highvalue knowledge in the knowledge pool, $7 6 . 1 1 \%$ of them are also recognized by the baseline model, indicating the effectiveness of our methods.
Learning Rate of CoEM We also performed an ablation study on the learning rate to illustrate the reason for picking 0.03 in the experiment. As shown in Figure 6, when the learning rate increases, the proportion of high-value knowledge decreases, so 0.03 could serve as a value to keep enough knowledge in the pool. This may also show that our attributor tends to give lower marks, indicating that the attributor and the threshold require further exploration.
# 5. Discussion
User Feedback and Attribution Mechanism CoEM heavily depends on integrating user feedback with the attribution model. However, accurately attributing individual user contributions to the model’s output remains challenging as we discussed in Section 4.2, especially in multi-turn dialogues where prior context and existing knowledge interact in complex ways. One possible approach to address this challenge is using Shapley value-based attribution, which offers a principled and fair method for quantifying the contribution of each user input, enhancing transparency. Additionally, as shown in Figure 6, the current optimistic initialization strategy may lead to a decrease in attribution scores over time. To mitigate this, future versions of CoEM may incorporate a decay mechanism within the value score update process to better balance score evolution.
Scalability and Further Experiment As CoEM relies on continuous user interaction, scalability is a key concern. With an increasing number of users, the computational overhead for real-time updates and knowledge pool management could become a bottleneck. One possible solution could be to implement incremental learning techniques that allow for more efficient adaptation with minimal computational cost. For instance, leveraging sparse updates rather than full model retraining can drastically reduce resource consumption while preserving model performance. Additionally, the use of distributed learning frameworks can help balance the load across different computing nodes, ensuring scalability without compromising real-time performance. Optimizing these algorithms will be crucial for large-scale deployment.
User Privacy and Data Security A major advantage of CoEM is its inherent respect for user privacy. Unlike traditional models, CoEM eliminates the need for centralized data collection, directly addressing privacy concerns arising from large-scale data usage. However, to ensure the protection of the user’s privacy, we need to design the decentralized learning process to keep the user’s data local. A possible solution could be integrating differential privacy techniques (Yan et al., 2024), which add noise to the data, ensuring that individual user contributions cannot be traced back to a specific user while still allowing the model to learn effectively from the aggregated knowledge. Additionally, secure multi-party computation (SMPC) (Knott et al., 2021) could be utilized to allow the model to learn from user data without directly accessing sensitive information. These techniques would further reinforce CoEM’s privacypreserving framework, ensuring robust protection for users while maintaining high model performance. | Vertical-domain large language models (LLMs) play a crucial role in
specialized scenarios such as finance, healthcare, and law; however, their
training often relies on large-scale annotated data and substantial
computational resources, impeding rapid development and continuous iteration.
To address these challenges, we introduce the Collaborative Editable Model
(CoEM), which constructs a candidate knowledge pool from user-contributed
domain snippets, leverages interactive user-model dialogues combined with user
ratings and attribution analysis to pinpoint high-value knowledge fragments,
and injects these fragments via in-context prompts for lightweight domain
adaptation. With high-value knowledge, the LLM can generate more accurate and
domain-specific content. In a financial information scenario, we collect 15k
feedback from about 120 users and validate CoEM with user ratings to assess the
quality of generated insights, demonstrating significant improvements in
domain-specific generation while avoiding the time and compute overhead of
traditional fine-tuning workflows. | [
"cs.AI"
] |
# 1 INTRODUCTION
Knowing the effect of interventions is key to understanding the effect of a treatment in medicine or the effect of a maintenance operation in IT monitoring systems for example. When one cannot perform interventions in practice, for example when these interventions may endanger people’s life or when they may disrupt a critical process or be too costly, one can try and identify do-free formulas which allow one to estimate the effects of interventions using only observational data.
Finding such do-free formulas is referred to as the identifiability problem for interventions in causal graphs. Solving the identifiability problem usually amounts to providing a graphical criterion under which the total effect can be identified, and in providing a do-free formula for its estimation on observational data. The problem is, under causal sufficiency, relatively easy for simple graphs, like DAGs (directed acyclic graphs) for static variables (Pearl, 1995) or FTCGs (full time causal graphs) for time series (Blondel et al., 2016), where the backdoor criterion is sound and complete for monovariate interventions. It becomes much harder when the graphs considered are abstractions of simple graphs, like CPDAGs (completed partially directed acyclic graphs) and MPDAGs (maximally oriented partially directed) for static variables (Maathuis and Colombo, 2013; Perkovic et al., 2016) or SCGs (summary causal graphs) for time series (Assaad et al., 2024). This is due to the fact that, for interventions to be identifiable, one needs to prove that the same do-free formula holds in all the simpler causal graphs corresponding to the abstraction considered.
Despite this increased complexity, Perkovic (2020) was able to propose, under causal sufficiency, a sound and complete graphical criterion to the identifiability problem for CPDAGs and MPDAGs, namely the general adjustment criterion. However, for SCGs, only a sufficient condition for identifiability has been proposed so far (Assaad et al., 2024), using the backdoor criterion which is sound but not complete and under causal sufficiency.
We study in this work the identifiability problem in SCGs under causal sufficiency. In particular:
• We introduce a common adjustment criterion, which we show is both sound and complete for the adjustment formulae,
• We propose both necessary and sufficient conditions which further characterize conditions for identifiability by adjustment in SCGs,
Based on these conditions, we derive an algorithm of limited (pseudo-linear) complexity to decide whether the problem is identifiable or not.
These results are established here for a single effect and multiple interventions, and hold, on different forms, whether the consistency through time assumption is made or not. They furthermore rely on novel concepts and tools.
The remainder of the paper is structured as follows: related work is discussed in Section 2; Section 3 introduces the main notions while Section 4 presents our main result regarding identifiability with the adjustment criterion without assuming consistency through time; Section 5 presents a similar result when consistency through time holds and a numerical experiment to illustrate the estimation; lastly, Section 6 concludes the paper. All proofs are provided in the Supplementary Material.
Figure 1: Thermoregulation (Assaad et al., 2024; Peters et al., 2013). Only the living room and bathroom have radiators on which we can intervene, highlighted in red. In both scenarios, we are interested in the temperature in the office, highlighted in blue. Scenario 1: $P ( \mathbf { O f } _ { t } \mid$ $\mathrm { d o } ( L _ { t - 1 } , L _ { t } , B _ { t - 1 } , B _ { t } ) )$ . Scenario 2: $P ( \operatorname { O f } _ { t } \mid \operatorname { d o } ( L _ { t - 1 } , B _ { t - 1 } ) )$ .
Running Example. As a running example throughout this paper, we consider the SCG in Figure 1, which models thermoregulation in a house where only the living room and bathroom have radiators. For notational simplicity, let $L _ { t }$ , $K _ { t }$ , $B _ { t }$ , $\mathbf { O f } _ { t }$ , and $\mathrm { O u t } _ { t }$ denote the temperatures in the living room, kitchen, bathroom, office, and outside at time t, respectively. In Scenario 1, we aim to predict the office temperature at time $t$ , assuming interventions that set the living-room and bathroom thermostats at times $t - 1$ and $t$ $P ( \mathrm { O f } _ { t } \mid \mathrm { d o } ( L _ { t - 1 } , L _ { t } , B _ { t - 1 } , B _ { t } ) )$ . In Scenario 2, we assume interventions only at time $_ { t - 1 }$ , giving $P ( \mathbf { O f } _ { t } \mid \mathrm { d o } ( L _ { t - 1 } , B _ { t - 1 } ) )$ The Python implementation is available at this repository.1
# 2 STATE OF THE ART
The identifiability problem for DAGs and under causal sufficiency can be solved with the backdoor criterion, which is sound and complete for total effects with single interventions (Pearl, 1995). However, Shpitser et al. (2010) have shown that this criterion does not allow one to identify all possible adjustment sets. When the backdoor is not complete, e.g., with hidden confounders or multiple interventions, one may relate to the do-calculus (Pearl, 1995) and the associated ID algorithm, which are sound and complete (Shpitser et al., 2010).
For CPDAGs, Maathuis and Colombo (2013); Perkovic et al.
(2016) provided both necessary and sufficient conditions of identifiability for single interventions, which are nevertheless only sufficient for multiple interventions. Perkovic (2020) later developed necessary and sufficient conditions under causal sufficiency and the adjustment criterion for MPDAGs, which encompass DAGs, CPDAGs and CPDAGs with background knowledge. When considering latent confounding, Jaber et al. (2022); Wang et al. (2023) provided sufficient conditions of identifiability for PAGs (partial ancestral graphs). Cluster DAGs (Anand et al., 2023) constitute another interesting abstraction of simple graphs as they encode partially understood causal relationships between variables grouped into predefined clusters, within which internal causal dependencies remain unspecified. They extended docalculus to establish necessary and sufficient conditions for identifying total effects in these structures.
Fewer studies have however been devoted to the identifiability problem on causal graphs defined over time series, like FTCGs, and abstractions one can define over them (Assaad et al., 2022), like ECGs (extended summary causal graphs) and SCGs. As mentioned before, if the problem can be solved relatively easily for FTCGs (Blondel et al., 2016), it is more complex for SCGs. Eichler and Didelez (2007) provided sufficient conditions for identifiability of the total effect on graphs based on time series which can directly be generalized to SCGs with no instantaneous relations. With possible instantaneous relations, Assaad et al. (2023) demonstrated that the total effect is always identifiable on SCGs under causal sufficiency and in the absence of cycles larger than one in the SCG (allowing only self-causes). Another assumption one can make to simplify the problem is to consider that the underlying causal model is linear. This allowed Ferreira and Assaad (2024) to propose both necessary and sufficient conditions for identifying direct effects in SCGs. On a slightly different line, Assaad (2025) provided sufficient conditions based on the front-door criterion when causal sufficiency is not satisfied. The most general result proposed so far on SCGs is the one presented by Assaad et al. (2024), who showed that, under causal sufficiency, the total effect is always identifiable in ECGs and exhibited sufficient conditions for identifiability by common backdoor assuming consistency through time and considering single interventions (but without making assumptions on the form of the SCG or the underlying causal model).
Our work fits within this line of research as it also addresses the identifiability problem in SCGs and goes further than previous studies by introducing a graphical criterion, shown to be both sound and complete, for identifiability in SCGs, together with necessary and sufficient conditions allowing one to efficiently decide on identifiability, without other assumptions than causal sufficiency. These results furthermore hold for both single and multiple interventions, with and without consistency through time.
# 3 CONTEXT
# 3.1 NOTATIONS AND ELEMENTARY NOTIONS
For a graph $\mathcal { G } = ( \mathcal { V } , \mathcal { E } )$ , if $X Y$ , then $X$ is a parent of $Y$ and $Y$ is a child of $X$ . A path is a sequence of distinct vertices in which each vertex is connected to its successor by an edge in $\mathcal { G }$ . A directed path, or a causal path, is a path in which all edges are pointing towards the last vertex. A non-causal path refers to any path that is not causal. If there is a directed path from $X$ to $Y$ , then $X$ is an ancestor of $Y$ , and $Y$ is a descendant of $X$ . The sets of parents, children, ancestors and descendants of $X$ in $\mathcal { G }$ are denoted by $\operatorname { P a } ( X , { \mathcal { G } } )$ , $\operatorname { C h } ( X , { \mathcal { G } } )$ , $\operatorname { A n c } ( X , { \mathcal { G } } )$ and Desc $( X , { \mathcal { G } } )$ respectively. We write $X Y$ (or equivalently $Y \left. \sim X \right\}$ ) to indicate that the graph contains a directed path from $X$ to $Y$ consisting of at least one edge. Furthermore, the mutilated graph $\mathcal G _ { \overline { { \mathbf { X } } } \underline { { \mathbf { Y } } } }$ represents the graph obtained by removing from $\mathcal { G }$ all incoming edge on $\mathbf { X }$ and all outgoing edges from Y. The skeleton of $\mathcal { G }$ is the undirected graph given by forgetting all arrow orientations in $\mathcal { G }$ . The subgraph $\mathcal { G } _ { \lvert S }$ of a graph $\mathcal { G }$ induced by a vertex set $S$ includes all nodes in $s$ and all edges in $\mathcal { G }$ with both endpoints in S . For two disjoint subsets $\mathbf { X } , \mathbf { Y } \subseteq \mathcal { V }$ , a path from $\mathbf { X }$ to $\mathbf { Y }$ is a path from some $X \in \mathbf { X }$ to some $Y \in \mathbf { Y }$ . A path from $\mathbf { X }$ to $\mathbf { Y }$ is proper if only its first node is in $\mathbf { X }$ . A backdoor path between $X$ and $Y$ is a path between $X$ and $Y$ in which the first arrow is pointing to $X$ . A directed cycle is a circular list of distinct vertices in which each vertex is a parent of its successor. If a path $\pi$ contains $X _ { i } \right. X _ { j } \left. X _ { k }$ as a subpath, then $X _ { j }$ is a collider on $\pi$ . A path $\pi$ is blocked by a subset of vertices $\mathbf { z }$ if a non-collider in $\pi$ belongs to $\mathbf { Z }$ or if $\pi$ contains a collider of which no descendant belongs to $\mathbf { Z }$ . Otherwise, $\textbf { Z } d \mathbf { \Omega }$ -connects $\pi$ .
Let $\mathbf { X } , \mathbf { Y }$ and $\mathbf { z }$ be pairwise distinct sets of variables in a DAG $\mathcal { G }$ . $\mathbf { Z }$ is an adjustment set relative to $( \mathbf { X } , \mathbf { Y } )$ in $\mathcal { G }$ if for a distribution $P$ compatible with $\mathcal { G }$ (Pearl, 2000, Def. 1.2.2) we have2:
$$
P ( \mathbf { y } \mid \mathrm { d o } ( \mathbf { x } ) ) = { \left\{ \begin{array} { l l } { P ( \mathbf { y } \mid \mathbf { x } ) } & { { \mathrm { i f } } \mathbf { Z } = 0 , } \\ { \sum _ { \mathbf { z } } P ( \mathbf { y } \mid \mathbf { x } , \mathbf { z } ) P ( \mathbf { z } ) } & { { \mathrm { o t h e r w i s e . } } } \end{array} \right. }
$$
Lastly, following Perkovic et al. (2016), we make use of the forbidden set in the adjustment criterion.
Definition 1 (Adjustment criterion). Let $\mathbf { X } , \mathbf { Y }$ and $\mathbf { z }$ be pairwise distinct sets of variables in a DAG ${ \mathcal { G } } . \mathbf { Z }$ is said to satisfy the adjustment criterion relative to $\mathbf { X }$ and $\mathbf { Y }$ in $\mathcal { G }$ if:
1. Forb $( \mathbf { X } , \mathbf { Y } , \mathcal { G } ) \cap \mathbf { Z } = \emptyset$ ; and
2. Z blocks all proper non-causal paths from $\mathbf { X }$ to $\mathbf { Y }$ in $\mathcal { G }$ ,
where the forbidden set Forb $( \mathbf { X } , \mathbf { Y } , \mathcal { G } )$ is the set of all descendants of any $W \not \in \mathbf { X }$ which lies on a proper causal path from $\mathbf { X }$ to $\mathbf { Y }$ .
Figure 2: Illustration: (a) three FTCGs; (b) the SCG which can be derived from any FTCG in (a).
# 3.2 CAUSAL GRAPHS IN TIME SERIES
Consider $\mathcal { V }$ a set of $p$ observational time series and $\mathcal { V } ^ { f } =$ $\{ \mathcal { V } _ { t } | t \in \mathbb { Z } \}$ the set of temporal instances of $\mathcal { V }$ observed over discrete time, where $\mathcal { V } _ { t }$ corresponds to the variables of the time series at time $t$ . We suppose that the discrete time observations $\mathcal { V } ^ { f }$ are generated from an unknown structural causal model, which defines an FTCG which we call the true FTCG and a joint distribution $P$ over its vertices which we call the true probability distribution, which is compatible with, or Markov relative to, the true FTCG by construction.
As common in causality studies on time series, we consider in the remainder acyclic FTCGs with potential self-causes, i.e., the fact that, for any time series $X$ , $X _ { t - \ell } ( \ell \in \mathbb { N } ^ { * } )$ may cause $X _ { t }$ . Note that acyclicity is guaranteed for relations between variables at different time stamps and that selfcauses are present in most time series. As a result, FTCGs are DAGs in which descendant relationships are constrained by the fact that causality cannot go backward in time, and all causal notions extend directly to FTCGs.
Experts are used to working with abstractions of causal graphs which summarize the information into a smaller graph that is interpretable, often with the omission of precise temporal information. We consider in this study a known causal abstraction for time series, namely summary causal graphs (Peters et al., 2013; Meng et al., 2020), which represents causal relationships among time series, regardless of the time delay between the cause and its effect.
Definition 2 (Summary causal graph (SCG), Figure 2b). Let $\mathcal { G } ^ { f } = ( \mathcal { V } ^ { f } , \mathcal { E } ^ { f } )$ be an FTCG built from the set of time series $\mathcal { V }$ . The summary causal graph (SCG) $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ associated to $\mathcal { G } ^ { f }$ is such that:
• $\mathcal { V } ^ { s }$ corresponds to the set of time series $\mathcal { V }$ , • $X Y \in { \mathcal { E } } ^ { s }$ if and only if there exists at least one timepoint $t$ and one temporal lag $0 ~ \leq ~ \gamma$ such that $X _ { t - \gamma } \to Y _ { t } \in \mathcal { E } ^ { f }$ .
In that case, we say that $\mathcal { G } ^ { s }$ is reduced from $\boldsymbol { \mathcal { G } ^ { f } }$ .
SCGs may include directed cycles and even self-loops. For example, the three FTCGs in Figure 2a are acyclic, while the SCG in Figure 2b has a cycle. We use the notation $X Y$ to indicate situations where there exist time instants in which $X$ causes $Y$ and $Y$ causes $X$ . It is furthermore worth noting that if there is a single SCG reduced from a given FTCG, different FTCGs, with possibly different orientations and skeletons, can yield the same SCG. For example, the SCG in Figure 2b can be reduced from any FTCG in Figure 2a, even though they may have different skeletons or different orientations. In the remainder, we refer to any FTCG from which a given SCG $\mathcal { G } ^ { s }$ can be reduced as a candidate FTCG for $\mathcal { G } ^ { s }$ . For example, in Figure 2, $\boldsymbol { \mathcal { G } } _ { 1 } ^ { f }$ , $\mathcal { G } _ { 2 } ^ { f }$ and $\mathcal { G } _ { 3 } ^ { f }$ are all candidate FTCGs for $\mathcal { G } ^ { s }$ . The class of all candidate FTCGs for $\mathcal { G } ^ { s }$ is denoted by $C ( \mathcal { G } ^ { s } )$ .
# 3.3 PROBLEM SETUP
We focus in this paper on identifying total effects (Pearl, 2000) of multiple interventions on single effects, written $P \left( { Y } _ { t } = { y } _ { t } \big | ( \mathrm { d o } ( \bar { X } _ { t _ { i } } ^ { i } = x _ { t _ { i } } ^ { i } ) ) _ { i } \right)$ (as well as $P \left( y _ { t } \mid \operatorname { d o } \left( ( x _ { t _ { i } } ^ { i } ) _ { i } \right) \right)$ by a slight abu
se of notation) when only the SCG reduced from the true FTCG is known, using the common adjustment criterion defined below.
Definition 3 (Common adjustment criterion). Let $\mathcal { G } ^ { s } = $ $( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ be an SCG. Let $\mathbf { X } , \mathbf { Y }$ and $\mathbf { z }$ be pairwise distinct subsets of $\mathcal { V } ^ { f }$ . $\mathbf { Z }$ satisfies the common adjustment criterion relative to $\mathbf { X }$ and $\mathbf { Y }$ in $\mathcal { G } ^ { s }$ if for all $\mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } ) ,$ Z satisfies the adjustment criterion relative to $\mathbf { X }$ and $\mathbf { Y }$ in $\mathcal { G } ^ { f }$ .
This criterion is sound and complete for the adjustment formulae, meaning that:
Proposition 1. Let $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ be an SCG and let $\mathbf { X } , \mathbf { Y }$ and $\mathbf { z }$ be pairwise distinct subsets of $\mathcal { V } ^ { f }$ . We say that a probability distribution $P$ is compatible with $\mathcal { G } ^ { s }$ if there exists $\mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } )$ such that $P$ is compatible with $\mathcal { G } ^ { f }$ . The two following propositions are equivalent:
(i) $\mathbf { z }$ satisfies the common adjustment criterion relative to $\mathbf { X }$ and $\mathbf { Y }$ ,
(ii) for all $P$ compatible with $\mathcal { G } ^ { s }$ :
$$
P \left( \mathbf { y } \mid \mathsf { d o } ( \mathbf { x } ) \right) = \left\{ { P \left( \mathbf { y } \mid \mathbf { x } \right) } \right. \quad \mathrm { i f } \ \mathbf { Z } = \emptyset \quad \quad \quad \quad \quad \quad \quad
$$
When either (i) or (ii) hold, we say that the total effect $P ( \mathbf { y } \mid \mathrm { d o } ( \mathbf { x } ) )$ is identifiable in $\mathcal { G } ^ { s }$ by adjustment criterion.
Finally, our problem takes the form:
Problem 1. Consider an SCG $\mathcal { G } ^ { s }$ . We aim to find out operational3 necessary and sufficient conditions to identify the total effect $P \left( y _ { t } \mid \operatorname { d o } \left( ( x _ { t _ { i } } ^ { i } ) _ { i } \right) \right)$ by common adjustment when having access solely to the SCG $\mathcal { G } ^ { s }$ .
Note that if $Y$ is not a descendant of one of the intervening variables $X ^ { i }$ in $\mathcal { G } ^ { s }$ or if $\gamma _ { i } : = t - t _ { i } < 0$ , then $X _ { t _ { i } } ^ { i }$ can be removed from the conditioning set through, e.g., the adjustment for direct causes (Pearl, 2000). In the extreme case where $Y$ is not a descendant of any element of $\{ X ^ { i } \} _ { i }$ , then $P \left( y _ { t } \mid \mathsf { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \ldots , \mathsf { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) \right) = P ( y _ { t } )$ . In the remainder, we thus assume that $Y$ is a descendant of each element in $\{ X ^ { i } \} _ { i }$ in $\mathcal { G } ^ { s }$ and that $\gamma _ { i } \geq 0$ for all $i$ , and will use the following notations: $X ^ { f } : = \{ X _ { t - \gamma _ { i } } ^ { i } \} _ { i }$ and $\smash { \boldsymbol { X } ^ { s } : = \{ \boldsymbol { X } ^ { i } \} _ { i } }$ .
# 4 IDENTIFIABILITY BY COMMON ADJUSTMENT
We provide in this section the main results of this paper, which is a graphical necessary and sufficient condition for identifiability of the causal effect by common adjustment, and a solution to compute it in practice. The classical consistency through time, assuming that causal relations are the same at different time instants, is not assumed here and its discussion is postponed to Section 5. All the proofs are deferred to Section D in the Supplementary Material.
# 4.1 NECESSARY AND SUFFICIENT CONDITION BASED ON THE COMMON FORBIDDEN SET
We first introduce the common forbidden set, the set of vertices that belong to $\mathrm { F o r b } \left( \boldsymbol { X } ^ { f } , Y _ { t } , \boldsymbol { G } ^ { f } \right)$ in at least one candidate FTCG $\boldsymbol { \mathcal { G } ^ { f } }$ . The common forbidden set, and the related notion of non-conditionable set defined below, define a set of variables which cannot be elements of a common adjustment set as they violate the first condition in Definition 1. As such, they cannot be used as conditioning variables in the do-free formula rewriting the interventions (Equation 1 in Proposition 1).
Definition 4. Let $\mathcal G ^ { s } = ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. We define the common forbidden set as follows:
$$
C \mathcal { F } : = \bigcup _ { \mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } ) } \operatorname { F o r b } \left( X ^ { f } , Y _ { t } , \mathcal { G } ^ { f } \right) .
$$
The set of non-conditionable variables is defined by
$$
N C : = C \mathcal { F } \setminus { \boldsymbol { X } } ^ { f } .
$$
Running Example. In the first scenario, we have $N C =$ $\{ \mathrm { O f } _ { t - 1 } , \mathrm { O f } _ { t } \}$ , whereas, in the second scenario, we have $\begin{array} { r } { N C = \{ K _ { t - 1 } , K _ { t } , B _ { t } , L _ { t } , \mathrm { O f } _ { t - 1 } , \mathrm { O f } _ { t } \} } \end{array}$ . In the second scenario, $K _ { t - 1 }$ cannot belong to a common adjustment set as there exists a candidate FTCG which contains the path $L _ { t - 1 } \to$ $K _ { t - 1 } \to L _ { t } \to \operatorname { O f } _ { t }$ . Similarly, $L _ { t }$ cannot belong to a common adjustment set as there exists a candidate FTCG which contains the path $L _ { t - 1 } \to L _ { t } \to \mathbf { O f } _ { t }$ .
Theorem 1 below shows that identifiability by common adjustment is directly related to the existence of colliderfree backdoor path remaining in this set.
Theorem 1. Let $G ^ { s } ~ = ~ ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. Then the two statements are equivalent:
1. The effect is identifiable by common adjustment in $\mathcal { G } ^ { s }$ .
2. For all intervention $X _ { t - \gamma _ { i } } ^ { i }$ and candidate FTCG $\mathcal { G } ^ { f } \in$ $C ( \mathcal G ^ { s } )$ , $\mathcal { G } ^ { f }$ does not contain a collider-free backdoor path going from $X _ { t - \gamma _ { i } } ^ { i }$ to $Y _ { t }$ that remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ .
In that case, a common adjustment set is given by $c : = { }$ $\left( \mathcal { V } ^ { f } \setminus \mathcal { N } C \right) \setminus X ^ { f }$ , and we have
$$
P ( y _ { t } \mid \mathsf { d o } ( ( x _ { t - \gamma _ { i } } ^ { i } ) _ { i } ) ) = \sum _ { \mathbf { c } } P \left( y _ { t } \mid ( x _ { t - \gamma _ { i } } ^ { i } ) _ { i } , \mathbf { c } \right) P ( \mathbf { c } ) .
$$
Proof Sketch. Let $\mathcal { G } ^ { f }$ be a candidate FTCG. For any $X _ { t - \gamma _ { i } } ^ { i } \in$ $X ^ { f }$ , consider any proper non-causal path $\pi ^ { f }$ from $X ^ { f }$ to $Y _ { t }$ that starts at $X _ { t - \gamma _ { i } } ^ { i }$ . Then $\pi ^ { f }$ either:
• leaves $C \mathcal F \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ , in which case it contains a noncollider in $c$ (see $C _ { t _ { c } }$ in Figure 3) and is blocked by $c$ ,
• remains in $C \mathcal F \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ and contains a collider, in which case it is also blocked by $c$ , since the collider and its descendants remain in $_ { { N C } }$ , remains in $C { \mathcal { F } } \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ and contains no collider (i.e., it is a collider-free backdoor path), in which case it cannot be blocked.
Thus, collider-free backdoor paths entirely contained in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ are therefore the only proper non-causal paths from $X ^ { f }$ to $Y _ { t }$ starting at $X _ { t - \gamma _ { i } } ^ { i }$ that cannot be blocked by $c$ . Moreover, such paths cannot be blocked by any common adjustment set, as they remain in $N C \cup X ^ { f }$ . As a result, the effect is identifiable by common adjustment if and only if no such path exists for any $X _ { t - \gamma _ { i } } ^ { i } \in X ^ { f }$ . □
Running Example. In the first scenario, we have $\begin{array} { r c l } { N C } & { = } & { \{ \mathbf { O f } _ { t - 1 } , \mathbf { O f } _ { t } \} } \end{array}$ . No candidate FTCG contains a non-causal path that remains within $_ { { N C } }$ . As a result, $\begin{array} { r l } { P ( \mathrm { O f } _ { t } } & { { } | \quad \mathrm { d o } ( L _ { t - 1 } , L _ { t } , B _ { t - 1 } , B _ { t } ) ) } \end{array}$ is identifiable by common adjustment. In the second scenario,we have $N C =$ $\{ K _ { t - 1 } , K _ { t } , B _ { t } , L _ { t } , \mathrm { O f } _ { t - 1 } , \mathrm { O f } _ { t } \}$ and we know that both $K _ { t - 1 }$ and $L _ { t }$ cannot be part of a common adjustment set. Since a candidate FTCG contains the path $L _ { t - 1 } K _ { t - 1 } L _ { t } \mathrm { O f } _ { t }$ , $P ( \operatorname { O f } _ { t } \mid \operatorname { d o } ( L _ { t - 1 } , B _ { t - 1 } ) )$ is not identifiable by common adjustment.
Figure 3: Proof idea of Theorem 1. The green path represents a proper non-causal path $\pi ^ { f }$ from $X ^ { f }$ to $Y _ { t }$ starting at $X _ { t - \gamma _ { i } } ^ { i }$ The node $X _ { t - \gamma _ { j } } ^ { j }$ represents another intervention (if any). The dashed lines depict the set $C \mathcal { F } \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ . The vertex $C _ { t _ { c } }$ is the last node on $\pi ^ { f }$ outside of $C \mathcal F \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ , and $D _ { t _ { d } }$ is its successor on $\pi ^ { f }$ . Necessarily, $\pi ^ { f }$ must contain the arrow $C _ { t _ { c } } \to D _ { t _ { d } }$ ; otherwise, $C _ { t _ { c } }$ would belong to $C \mathcal { F }$ .
# 4.2 AN EFFICIENT WAY TO DECIDE ON IDENTIFIABILITY
To determine whether the causal effect is identifiable, we propose an algorithm that efficiently tests the existence of collider-free backdoor paths that remain in $_ { { N C } }$ , except perhaps for their first vertices. Instead of enumerating all FTCGs and such paths in , which would be computationally prohibitive, we introduce a more refined approach to characterize their existence. Specifically, we distinguish between those with and without forks. Paths without forks can be easily and efficiently identified. The situation is more complex for those that contain forks, but they can still be efficiently handled via a divide-and-conquer strategy. All these elements are detailed in the following subsections.
# 4.2.1 Additional Characterizations
Characterization of $_ { { N C } }$ We first introduce another characterization of $_ { { N C } }$ based on the time instant a time series first arrives in this set.
Definition 5. Let $\mathcal G ^ { s } = ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. For a time series $F \in \mathcal { V } ^ { s }$ , we define
$$
\begin{array} { r } { t _ { N C } ( F ) : = \operatorname* { m i n } \{ t _ { 1 } ~ | ~ F _ { t _ { 1 } } \in N C \} , } \end{array}
$$
as the first time step at which $F$ enters the non-conditionable set $_ { { N C } }$ , with the convention that $\operatorname* { m i n } \{ \varnothing \} = + \infty$ .
Running Example. In the second scenario, we have $N C =$ $\{ K _ { t - 1 } , K _ { t } , B _ { t } , L _ { t } , \mathrm { O f } _ { t - 1 } , \mathrm { O f } _ { t } \}$ . As a result, $t _ { N C } \mathrm { ( K i t c h e n ) } = t - 1$ and $t _ { N C } ( \mathrm { O u t s i d e } ) = + \infty$ .
# Algorithm 1: Computation of $( t _ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } }$
Input: $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ an SCG and $X ^ { f }$ .
Output: $( t _ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } }$
// Compute $t _ { C }$ A $C \in \mathsf { C h } ( X ^ { s } )$ . (cf. Lemma 7)
$A n c Y \gets \left( \operatorname* { m a x } \left\{ t _ { 1 } \ : | \ : \exists \mathcal { G } ^ { f } \ : \mathrm { s . t } \ : S _ { t _ { 1 } } \in \mathrm { A n c } ( Y _ { t } , \mathcal { G } ^ { f } \ : \backslash \ : \mathcal { X } ^ { f } ) \right\} \right) _ { S \in \mathcal { V } ^ { s } } \ :$
foreach $C \in C h ( { \mathcal { X } } ^ { s } )$ do $t _ { \operatorname* { m i n } } \gets \operatorname* { m i n } \{ t - \gamma _ { i } \mid X ^ { i } \in \operatorname { P a } ( C , { \mathcal { G } } ^ { s } ) \}$ ; $t _ { C } \gets \operatorname* { m i n } \big \{ t _ { 1 } \in [ t _ { \operatorname* { m i n } } , A n c Y [ C ] ] \mid C _ { t _ { 1 } } \notin X ^ { f } \big \} .$ $d ( C ) \gets \# \{ i \ | \ X ^ { i } \in \mathrm { P a } ( C , \mathcal { G } ^ { s } ) \ \mathrm { a n d } \ t - \gamma _ { i } < t _ { C } \} \geq 1$ or #{ $i \mid X ^ { i } \in \operatorname { P a } ( C , \mathcal { G } ^ { s } )$ and $t - \gamma _ { i } = t _ { C } \} \geq 2$ ;
// Avoid extra computations. (cf. Lemma 8)
$L \gets [ ( C , t _ { C } ) ] _ { C \in \mathrm { D e s c } ( \chi f ) }$ ,with $t _ { C } < + \infty$ ;
Sort $L$ using $( t _ { C } , n o t d ( C ) )$ lexicographically;
// Compute $( t _ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } }$ . (cf. Lemma 11)
$( t _ { N C } ( S ) ) + \infty$ $\forall S \in \mathcal { V } ^ { s }$ ;
$S . s e e n \gets F a l s e$ $\forall S \in \mathcal { V } ^ { s }$ ;
for $( C , t _ { C } ) \in L$ do if $d ( C )$ then foreach unseen $D \in D e s c ( C , \mathcal { G } ^ { s } )$ do $t _ { N C } ( D ) \gets \operatorname* { m i n } \{ t _ { 1 } \ | \ t _ { 1 } \geq t _ { C }$ and $D _ { t _ { 1 } } \notin X ^ { f } \}$ ; $D . s e e n \gets { \mathrm { t r u e } }$ ; else foreach unseen $D \in D e s c { ( C , \mathcal { G } ^ { s } \setminus \mathcal { X } ^ { s } ) }$ do $t _ { N C } ( D ) \gets \operatorname* { m i n } \{ t _ { 1 } \ | \ t _ { 1 } \geq t _ { C }$ and $D _ { t _ { 1 } } \notin X ^ { f } \}$ ; $D . s e e n \gets { \mathrm { t r u e } }$ ; foreach unseen $D \in D e s c ( C , \mathcal { G } ^ { s } )$ do $t _ { N C } ( D ) \gets \operatorname* { m i n } \{ t _ { 1 } \ | \ t _ { 1 } \ge t _ { C } + 1$ and $D _ { t _ { 1 } } \notin X ^ { f } \}$ D.seen ← true ;
Lemma 2. (Characterization of collider-free backdoor paths without fork) Let $\mathcal G ^ { s } ~ = ~ ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid \mathsf { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \ldots , \mathsf { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. The following statements are equivalent:
1. There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ and a candidate FTCG $\mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } )$ which contains $X _ { t - \gamma _ { i } } ^ { i } Y _ { t }$ which remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ .
2. There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ such that $\gamma _ { i } = 0$ and $X ^ { i } \in \operatorname { D e s c } \big ( Y , { \mathcal { G } } _ { | S } ^ { s } \big ) .$ where $S : = \{ S \in \mathcal { V } ^ { s } \mid t _ { N C } ( S ) \leq$ $t \} \cup \{ X ^ { i } \in X ^ { s } \mid \gamma _ { i } = 0 \}$ .
Fork collider-free backdoor paths in $_ { { N C } }$ We first introduce an accessibility concept essential to the enumeration of fork paths.
Definition 6 ( -accessibility). Let $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ be an SCG, $P ( y _ { t } \mid \mathsf { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \ldots , \mathsf { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect and $V _ { t _ { \nu } } \ \in \ \mathcal { V } ^ { f }$ . We say that $\dot { F _ { t _ { 1 } } } \in \mathcal { V } ^ { f } \setminus \{ V _ { t _ { \nu } } \}$ is $\mathbf { } V _ { t _ { \nu } } ^ { } - N C .$ - accessible if there exists a candidate FTCG which contains a directed path from $\boldsymbol { F } _ { t _ { 1 } }$ to $V _ { t _ { \nu } }$ which remains in $_ { { N C } }$ except perhaps for $V _ { t _ { \nu } }$ . We denote
$$
t _ { V _ { t _ { \nu } } } ^ { N C } ( F ) : = \operatorname* { m a x } \{ t _ { 1 } \mid F _ { t _ { 1 } } { \mathrm { i s } } V _ { t _ { \nu } } { \mathrm { - } } N C { \mathrm { - a c c e s s i b l e } } \} ,
$$
with the convention $\operatorname* { m a x } \{ \varnothing \} = - \infty$ .
Running Example. In both scenarios, $L _ { t }$ is $\mathbf { O f } _ { t ^ { - } } N C -$ accessible, since there exists a candidate FTCG containing the path $L _ { t } \ \to \ \mathrm { O f } _ { t }$ . Although there is also a candidate FTCG containing the path $B _ { t } \ \to \ L _ { t } \ \to \ \mathrm { O f } _ { t } , \ L$ $B _ { t }$ is $\mathbf { O f } _ { t }$ - -accessible only in the second scenario, because in the first scenario $B _ { t }$ is itself an intervention. Consequently, in both scenarios, we have $t _ { \mathrm { O f } _ { t } } ^ { N C } ( \mathrm { L i v i n g } \mathrm { R o o m } ) = t$ . However, $t _ { \mathrm { O f } _ { t } } ^ { N C } \mathrm { ( B a t h r o o m ) } = - \infty$ in Scenario 1 and $t _ { \mathrm { O f } _ { t } } ^ { N C } ( \mathrm { B a t h r o o m } ) = t$ in Scenario 2.
In the next lemma, we show that $\{ t _ { N C } ( F ) \} _ { F \in \mathcal { V } ^ { S } }$ gives a simple characterization of these sets.
Lemma 1. (Characterization of $N C )$ Let $\mathcal G ^ { s } = ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and let $P ( y _ { t } \mid \mathsf { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \ldots , \mathsf { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. With the convention $\{ F _ { t _ { 1 } } \} _ { t _ { 1 } \ge + \infty } = 0$ , we have:
$$
N C = \bigcup _ { Z \in \mathcal { V } ^ { S } } \{ Z _ { t _ { 1 } } \} _ { t _ { 1 } \geq t _ { N C } ( Z ) } \setminus X ^ { f } .
$$
Moreover, $( t _ { N C } ( F ) ) _ { F \in \mathscr { V } ^ { s } }$ can be computed through Algorithm 1, detailed in Appendix $\mathrm { \Delta D }$ , which complexity is pseudo-linear with respect to $\mathcal { G } ^ { s }$ and $X ^ { f }$ .
The above characterization, based on $t _ { N C } ( F )$ , slightly departs from standard, purely graphical characterizations often used in the identifiability literature. This is due to the complexity of the class of candidate FTCGs and the difficulty to explore this class efficiently.
This leads to characterize efficiently the existence of a collider-free backdoor path with a fork that remains in $_ { { N C } }$ , as proposed in Lemma 3.
Lemma 3. (Characterization of collider-free backdoor paths with fork) Let $\mathcal G ^ { s } ~ = ~ ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect such that for all $\mathcal { G } ^ { f }$ belonging to $C ( \mathcal { G } ^ { s } ) , \mathcal { G } ^ { f }$ does not contain a directed path from $Y _ { t }$ to an intervention $X _ { t - \gamma _ { i } } ^ { i } \gets Y _ { t }$ which remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ . The following statements are equivalent:
1. There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ , $F _ { t _ { f } } \in \mathcal { V } ^ { f }$ and a candidate FTCG $\mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } )$ which contains the path $X _ { t - \gamma _ { i } } ^ { i } \not \sim F _ { t _ { f } } Y _ { t }$ which remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ . 2. There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ and $F _ { t _ { f } } \in \mathcal { V } ^ { f }$ such that $F _ { t _ { f } }$ is $X _ { t - \gamma _ { i } } ^ { i }$ -NC-accessible and $Y _ { t }$ -NC-accessible.
Collider-free backdoor paths without fork in NC First, Lemma 2 characterizes efficiently the existence of colliderfree backdoor paths that do not contain a fork.
The intuition behind Lemma 3 relies on a divide-andconquer strategy to avoid searching for collider-free backdoor paths with a fork directly. In an FTCG, any such path decomposes into two directed subpaths. The lemma shows that it suffices to exhibit the first subpath in one candidate FTCG and the second subpath in another. This ensures that some candidate FTCG realizes the entire fork path, without resorting to an explicit reconstruction argument. Since testing for a single directed path in a candidate FTCG can be done efficiently, this reduction renders the overall existence check more tractable. The formal proof appears in the Supplementary Material.
Condition 2 in Lemma 3 is not efficiently tractable, since it requires checking each $F _ { t _ { f } } \in \mathcal { V } ^ { f }$ , and the set $\mathcal { V } ^ { f }$ is infinite. Fortunately, the set $\{ t _ { 1 } ~ | ~ \boldsymbol { F } _ { t _ { 1 } }$ is $V _ { t _ { \nu } }$ -NC-accessible} is bounded by $t _ { V _ { t _ { \nu } } } ^ { N C } ( F )$ and $t _ { N C } ( F )$ (see Lemma 13 in the appendix), and testing only the single time point $F _ { t _ { N C } ( F ) }$ for each time series $F$ is sufficient (see Corollary 1 in the appendix). Consequently, Condition 2 in Lemma 3 reduces to the following:
• There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ and a time series $F$ such that $t _ { N C } ( F ) \leq t _ { X _ { t - \gamma _ { i } } ^ { i } } ^ { N C } ( F )$ and $\ddot { t } _ { N C } ( F ) \leq t _ { Y _ { t } } ^ { N C } ( F )$ , improving further the tractability of the check of existence of a collider-free backdoor path with a fork.
# 4.2.2 An Efficient Algorithm
The above results show that identifiability by common adjustment in $\mathcal { G } ^ { s }$ is equivalent to the following two conditions:
1. There does not exist an intervention $X _ { t - \gamma _ { i } } ^ { i }$ such that $\gamma _ { i } = 0$ and $X ^ { i } \in \operatorname { D e s c } \left( Y , { \mathcal { G } } _ { | S } ^ { s } \right) .$ , where $S : = \{ S \in \mathcal { V } ^ { s } \mid$ $t _ { N C } ( S ) \cup \{ X ^ { i } \mid \gamma _ { i } = 0 \}$ (see Lemma 2), 2. And, there does not exist an intervention $X _ { t - \gamma _ { i } } ^ { i }$ and a time series $F \in \mathcal { V } ^ { s }$ such that $t _ { N C } ( F ) \le t _ { X _ { t - \gamma _ { i } } ^ { i } } ^ { N C } ( F )$ and $t _ { N C } ( F ) \leq t _ { Y _ { t } } ^ { N C } ( F )$ (see discussion below Lemma 3)
Condition 2 from Lemma 2 can be verified efficiently. Indeed, it suffices to compute the set of all descendants of $Y$ in $\mathcal { G } _ { | S } ^ { s }$ using a single breadth- or depth-first search in time $O ( | \tilde { \mathcal { V } } ^ { s } | + | \mathcal { E } ^ { s } | )$ (Cormen et al., 2009, Chapter 22), and then check if there exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ with $\gamma _ { i } = 0$ such that $X ^ { i } \in \operatorname { D e s c } _ { { \mathcal { G } _ { | S } ^ { s } } } ( Y )$ .
Having ruled out directed-paths, we now focus on the forkpath condition. To this end, we present Algorithm 2, which computes $\{ t _ { V _ { t _ { \nu } } } ^ { N C } ( F ) \mid F \in \mathcal { V } ^ { S } \}$ in pseudo-linear time (see Lemma 14 in the appendix).
As a result, the second statement of Lemma 3 can be checked by executing Algorithm 2 twice, knowing that Algorithm 1 has already been run. Consequently, the overall complexity is $O \left( \left| X ^ { f } \right| \log \left| X ^ { f } \right| + ( | \mathcal { E } ^ { s } | + | \mathcal { \bar { V } } ^ { s } | ) \log | \mathcal { V } ^ { s } | \right)$ .
By combining the previous results as in Algorithm 3, one can directly assess whether the effect is identifiable or not:
Algorithm 2: Computation of $( t _ { V _ { t _ { \nu } } } ^ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } }$
Input: $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ an SCG, $X ^ { f }$ and $V _ { t _ { \nu } } \in \mathcal { V } ^ { f }$
Output: $( t _ { V _ { t _ { \nu } } } ^ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } }$
$Q \gets$ PriorityQueue $\left( V _ { t _ { \nu } } \right)$ ;
$t _ { V _ { t \nu } } ^ { N C } ( S ) \gets - \infty$ $\forall S \in \mathcal { V } ^ { s }$ ;
$S . s e e n \gets F a l s e$ $\forall S \in \mathcal { V } ^ { s }$ ;
while $Q \neq \emptyset$ do $S _ { t _ { s } } \gets Q$ .pop_element_with_max_time_index(); foreach unseen $P \in P a ( S , { \mathcal { G } } ^ { s } )$ do $t _ { V _ { t _ { \nu } } } ^ { N C } ( P ) \gets \operatorname* { m a x } \{ t _ { 1 } ~ | ~ t _ { 1 } \le t _ { s }$ and $P _ { t _ { 1 } } \in N C \setminus \{ V _ { t _ { \nu } } \} \}$ ; if tVNC(P) , −∞ then Q.insert(PtNC(P)) ; $P . s e e n \gets t r u e$ ;
Input: $\mathcal { G } ^ { s } = ( \mathcal { V } ^ { s } , \mathcal { E } ^ { s } )$ an SCG and $X ^ { f }$ .
Output: A boolean indicating whether the effect is identifiable by common adjustment or not.
$( t _ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } } \mathrm { A l g o r i t h m } \ 1$ ;
// Enumeration of directed paths.
$S \gets \{ S \in \mathcal { V } ^ { s } \mid t _ { N C } ( S ) \leq 0 \} \cup \{ X ^ { i } \mid t - \gamma _ { i } = 0 \}$ ;
if $\exists i \in \{ 1 , \ldots , n \}$ s.t. $X ^ { i } \in D e s c \left( Y , { \mathcal { G } } _ { | S } ^ { s } \right) a n d \gamma _ { i } = 0$ then $\llcorner$ return False
// Enumeration of fork paths.
foreach Vtv ∈ {Yt, Xt1 γ , · $V _ { t _ { \nu } } \in \{ Y _ { t } , X _ { t - \gamma _ { 1 } } ^ { 1 } , \cdot \cdot \cdot , X _ { t - \gamma _ { n } } ^ { n } \}$ do $( t _ { V _ { t _ { \nu } } } ^ { N C } ( S ) ) _ { S \in \mathcal { V } ^ { s } } \dot { \mathrm { A l g o r i t h m } } \hat { 2 }$ ;
foreach $F \in \mathcal { V } ^ { s } , X _ { t - \gamma _ { i } } ^ { i } \in ( X _ { t - \gamma _ { j } } ^ { j } ) _ { j } \mathbf { d o }$ if tNC(F) ≤ tXNtiCγ−(iF) and−t jNC(F) ≤ tYNt (F) then return False
return True
Theorem 2. Let $\mathcal G ^ { s } ~ = ~ ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. Then the two statements are equivalent:
• The effect is identifiable by common adjustment in $\mathcal { G } ^ { s }$ .
• Algorithm 3 outputs True.
Moreover Algorithm 3 has a polynomial complexity of $O \left( \left| X ^ { f } \right| \left( \log \left| \bar { X ^ { f } } \right| + ( | \mathcal { E } ^ { s } | + | \mathcal { V } ^ { s } | ) \mathcal { \bar { \log } } | \mathcal { V } ^ { s } | \right) \right)$ .
The complexity of Algorithm 3 can be further reduced to pseudo-linear time, as detailed in Section F of the Supplementary Material. There is little interest in replacing the efficient implementation of Algorithm 3 with a formula.4 Indeed, we can not expect having a complexity better than $O \big ( \big | X ^ { f } \big | + | \mathcal { E } ^ { s } | + | \mathcal { V } ^ { s } | \big )$ because in the worst case, it is necessar
y to traverse $\mathcal { G } ^ { s }$ and consider all interventions.
# 5 WITH CONSISTENCY THROUGH TIME
In practice, it is usually impossible to work with general FTCGs in which causal relations may change from one time instant to another, and people have resorted to the consistency through time assumption (also referred to as Causal Stationarity in Runge (2018)), to obtain a simpler class of FTCGs.
Assumption 1 (Consistency through time). An FTCG $\boldsymbol { \mathcal { G } ^ { f } }$ is said to be consistent through time if all the causal relationships remain constant in direction through time.
Under this assumption, the number of candidate FTCGs for a fixed SCG $\mathcal { G } ^ { s }$ is smaller, meaning that conditions to be identifiable are weaker and thus that more effects should be identifiable. We detail in Section 5.1 necessary and sufficient conditions to be identifiable. All the proofs are deferred to Section E in the Supplementary Material.
# 5.1 IDENTIFIABILITY
Theorem 1 remains valid under Assumption 1. Lemma 2 also holds because Assumption 1 only affects paths that traverse different time indices. The enumeration of colliderfree backdoor paths containing a fork that remains within $_ { N C }$ , except perhaps at their first vertices, is however more complex, as detailed below.
Lemma 4. Let $\mathcal G ^ { s } ~ = ~ ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG and $P ( y _ { t } \mid$ $\mathrm { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \dotsc , \mathrm { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect such that for all $\mathcal { G } ^ { f }$ belonging to $C ( \mathcal { G } ^ { s } ) , \mathcal { G } ^ { f }$ does not contain a directed path from $Y _ { t }$ to an intervention $X _ { t - \gamma _ { i } } ^ { i } Y _ { t }$ which remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ . The following statements are equivalent:
1. There exist an intervention $X _ { t - \gamma _ { i } } ^ { i }$ , $F _ { t ^ { \prime } } \in \mathcal { V } ^ { f }$ and an FTCG $\mathcal { G } ^ { f } \in C ( \mathcal { G } ^ { s } )$ containing the path $X _ { t - \gamma _ { i } } ^ { i } \\\gets F _ { t ^ { \prime } }$ ⇝ $Y _ { t }$ which remains in $N C \cup \{ X _ { t - \gamma _ { i } } ^ { i } \}$ .
2. At least one of the following conditions is satisfied:
(a) There exist an intervention $X _ { t - \gamma _ { i } } ^ { i }$ and $F \in \mathcal { V } ^ { s }$ such that $F _ { t _ { N C } ( F ) }$ is well defined, $X _ { t - \gamma _ { i } } ^ { i }$ -NC-accessible and $Y _ { t }$ - -accessible, and $\left\{ { \begin{array} { l } { F \neq Y , { \mathrm { o r } } } \\ { t - \gamma _ { i } \neq t _ { N C ( F ) } . } \end{array} } \right.$
(b) There exists an intervention $X _ { t - \gamma _ { i } } ^ { i }$ such that $t - \gamma _ { i } =$ $t _ { N C ( Y ) }$ and at least one of the following properties is satisfied: i. $Y _ { t _ { N C } ( Y ) }$ is $X _ { t - \gamma _ { i } } ^ { i } - N C$ -accessible without using $X _ { t - \gamma _ { i } } ^ { i } \gets Y _ { t - \gamma _ { i } }$ and $Y _ { t } \mathrm { ~ - ~ } N C$ -accessible. ii. $Y _ { t _ { N C } ( Y ) }$ is $X _ { t - \gamma _ { i } } ^ { i }$ - NC-accessible and $Y _ { t } - N C -$ accessible without using $X _ { t } ^ { i } Y _ { t }$ .
Lemma 4 characterizes the existence of a collider-free backdoor path containing a fork. While the conditions outlined are more complex than those in Corollary 1, they play the same role, and still require only a small number of calls to -accessibility. Consequently, one can replace the conditions in the final loop of Algorithm 3 with conditions 2.(a) and 2.(b) of Lemma 4 to derive an algorithm for identifiability by common adjustment in $\mathcal { G } ^ { s }$ under consistency through time, as stated in the following theorem which is the counterpart of Theorem 2.
Theorem 3. Let $\mathcal G ^ { s } = ( \mathcal V ^ { s } , \mathcal E ^ { s } )$ be an SCG that satisfies Assumption 1 and $P ( y _ { t } \mid \mathsf { d o } ( x _ { t - \gamma _ { 1 } } ^ { 1 } ) , \ldots , \mathsf { d o } ( x _ { t - \gamma _ { n } } ^ { n } ) )$ be the considered effect. Then the two statements are equivalent:
• The effect is identifiable by common adjustment in $\mathcal { G } ^ { s }$ .
• An adaptation of Algorithm 3 outputs True.
In that case, a common adjustment set is given by $c : = { }$ $\left( \mathcal { V } ^ { f } \setminus \mathcal { N } C \right) \setminus X ^ { f }$ .
In its simpler form, the adaptation of Algorithm 3 still has a polynomial complexity of $O \big ( \big | X ^ { f } \big | \cdot ( | \mathcal { E } ^ { s } | + | \mathcal { V } ^ { s } | \log | \mathcal { V } ^ { s } | ) \big )$ . A pseudo-linear algorithm is dis
cus
sed in Appendix G.
The main difference between the two algorithms (with and without consistency through time) lies in how they test for collider-free backdoor paths with forks. Without consistency through time, this check is based on Lemma 3, while with consistency through time, this check relies on Lemma 4. Note that assuming consistency through time reduces the number of candidate FTCGs: any candidate FTCG under consistency through time is also candidate without this assumption. As a result, the algorithm in Theorem 2 is sound (but not complete) under consistency through time, while the algorithm from Theorem 3 is complete (but not sound) without this assumption.
# 5.2 EXPERIMENTAL ILLUSTRATION
Although this article is primarily theoretical, we have conducted experiments to demonstrate the practical relevance of the results in terms of computation time and estimation. The Python implementation is available at this repository.5
Execution time Algorithm 3 has been implemented in Python with some speed ups discussed in Appendix F. We measure its execution speed, on a standard laptop, as a function of the graph size. For each graph size, 20 random SCGs are generated, and 5 interventions are selected at random. The average execution time of the algorithm is then measured over 5 runs and presented in Figure 4. As one can note, even for very large graphs (with up to 100,000 vertices), the execution time remains reasonable, around 1 second, showing that the theoretical complexity of the algorithm translates into an acceptable computation time.
Figure 4: Average execution time of the implementation (in seconds) as a function of the number of vertices in the graph, with error bars representing standard deviation over 5 runs.
Estimation We further considered a fixed FTCG under a linear Structural Causal Model (SCM) with additive standard Gaussian noises with a lag of 1. We designed the SCM so that the total effect is 0.25. For each choice of $\gamma _ { \mathrm { m a x } }$ which defines the farthest time horizon up to which past information is considered, we estimated the total causal effect $P ( y _ { t } \mid \operatorname { d o } ( x _ { t } ) )$ by using the adjustment set given by Theorem 1 up to the time index $t - \gamma _ { \mathrm { m a x } }$ over 500 data points (non overlapping windows). This estimation procedure was repeated 100 times for each $\gamma _ { \mathrm { m a x } }$ . The estimated total effect and its standard deviation across these repetitions are given in Figure 5.
For comparison, we estimated the total effect using a backdoor set from the true FTCG, yielding an estimate of 0.245 with a standard deviation of 0.036 over 100 runs. The bias and variance of the SCG-based estimator remain comparable to those of the FTCG-based estimator for $\gamma \leq 5 0$ . However, as the adjustment set expands already to 376 variables at $\gamma = 5 3$ , variance increases beyond large $\gamma$ , and for even larger γ a non-negligible bias emerges. This suggests that the adjustment set proposed in this work is particularly useful when one can assume a maximal latency of reasonable size. | The identifiability problem for interventions aims at assessing whether the
total causal effect can be written with a do-free formula, and thus be
estimated from observational data only. We study this problem, considering
multiple interventions, in the context of time series when only an abstraction
of the true causal graph, in the form of a summary causal graph, is available.
We propose in particular both necessary and sufficient conditions for the
adjustment criterion, which we show is complete in this setting, and provide a
pseudo-linear algorithm to decide whether the query is identifiable or not. | [
"math.ST",
"cs.AI",
"stat.TH"
] |
# 1 Introduction
"Testing with AI will take software test automation to the next level."
The integration of artificial intelligence (AI) within the fields of software engineering and software testing has become a prominent area of research. Various novel testing methods and tools are being discussed and presented, yet there is a lack of comprehensive reviews addressing the range of options for augmenting software testing with AI. The present paper aims to address this gap by providing an introductory overview of software testing, which forms the foundation for elaborating novel testing methods enabled by AI and for developing a taxonomy on AI for software testing $( a i \textmu s t )$ :
Software testing is an essential part of the software development life cycle (SDLC) to ensure that software products are released with sufficient quality, reduced risks and minimized number of defects contained. Testing addresses both the verification of the software under test, i.e. whether the software is technically of high quality, and its validation, i.e. whether the software meets the requirements. Testing uses methods of code analysis, also known as static testing because the software is not being run, and execution analysis, also known as dynamic testing because the software is being run. Dynamic testing is conducted at different levels of software composition, for example, at component level for basic software components such as classes in object-oriented programming or functions in functional programming, at integration level for compositions of software components, and at system level where the complete software-based system is tested. Software testing can be performed manually for first impressions or when test automation is not cost-effective, and automatically when testing requires automation, for example for real-time testing, or when test repetition is more efficient with test automation. Test cases, executed manually or automatically, are designed and/or generated from software requirements, software designs, bug reports or other sources of information, collectively known as the test basis. Test cases can be defined abstractly, i.e. logically at a high level with abstract preconditions, inputs, expected outputs and expected postconditions, or concretely, i.e. at a lower level with detailed preconditions, inputs, expected outputs and expected postconditions. Test cases may also contain timing requirements or other actions and procedures to make them executable. Sets of test cases form test suites. Their execution is logged and analysed for missed preconditions or mismatches with expected outputs or postconditions. The evaluation of the test runs, including the mismatches, verifies the correctness of the test cases or reports failures of the software under test. Debugging is used to identify the root causes of these failures, i.e. the errors that lead to them.
Furthermore, effective monitoring and management of the overall testing process must be implemented to ensure or enhance the quality of the software produced. In close relation to the SDLC, the testing processes can vary in terms of size, phases, teams, and so forth. They can be sequential, iterative, agile, or they can follow the principles of the W-model [57]. However, they share the common phases of testing as defined by the fundamental testing process [58] being planning and control, analysis and design, implementation and execution, evaluation and reporting, and completion and teardown.
However, the utilisation of AI in testing is not fully realised when it is confined solely to these phases of the fundamental test process. While it is imperative to acknowledge that testing requires also more general approaches, such as project management or technical infrastructural management, which can also be improved with AI, it is necessary to delve further into the particularities of testing. And indeed, research publications on software testing with AI is steadily growing and so is the understanding of where and how to apply AI in testing is increasing. However, an overarching view relating AI support to the various testing activities and their processes is missing.
Taxonomies, and their formalisation as ontologies, are particularly useful for classifying and consolidating knowledge. They are powerful tools for organising, presenting and using knowledge effectively. They can act as a foundation for reasoning, interoperability and intelligent data usage. Ontologies are formalised, machine-readable representations of knowledge, extending and formalising taxonomies with additional semantics, constraints and logic. Unlike hierarchical taxonomies, ontologies enable richer querying, inference and relationship discovery.
Therefore, after reviewing taxonomies and ontologies on software testing in general and the use of AI for testing in particular in Section 2, an ontology dedicated to AI for software testing was developed and is presented in Section 3. Referred to as ai4st, this ontology can be used to classify contributions from research papers on using AI techniques to evolve and/or improve software testing activities and processes. Selected results are presented and discussed in Section 4. The paper concludes with an outline of future work in Section 5.
# 2 Related Work
According to [63], taxonomies have been proposed for every knowledge area of software engineering (SE) within the SE body of knowledge. These SE knowledge areas are defined in the SWEBOK [67] and include among others software testing fundamentals, test levels, test techniques, test-related measures, test process, software testing in the development processes and the application domains, testing of and testing through emerging technologies, and software testing tools.
However, although the highly detailed SWEBOK provides extensive descriptions of software engineering concepts, methods and techniques, including those for software testing, it does not define a comprehensive glossary or coherent taxonomy. This is somewhat surprising given the intensive discussion of the development of a SWEBOK ontology [68, 55, 2]. According to [63], no other overarching ontology besides SWEBOK has been proposed, except for numerous taxonomies specific to certain knowledge areas including software testing [46, 64, 15, 65, 16, 14].
Taxonomies can be formally defined using first-order logic, the Web Ontology Language OWL [38] or the UML-based Ontology Modelling Language OntoUML [25]. OWL has attracted considerable interest, particularly due to its association with the Semantic Web and the support offered by the Protégé tool. Nevertheless, in most cases, there is no formally defined ontology available for taxonomies proposed in the literature that can be reused for classification purposes, e.g. for the classification of software test research contributions.
In addition, although numerous ontologies related to software testing (ST) exist [62], they are limited in their coverage of the research field, their ability to classify research contributions, and/or their relation to established glossaries and/or taxonomies: According to [7], ontologies for SE including ST are either generic, such as the software engineering ontology network SEON [5], or specific to a knowledge area, such as software testing like the Reference Ontology on Software Testing ROoST [56].
It is important to note that these two ontologies (and others, such as OntoTest [4] and TestTDO [61]) focus on the conceptual grounding of SE or ST concepts, respectively, with regard to their philosophical relations, rather than than focusing on the established body of knowledge. A body of knowledge provides not just terms and relations, but also definitions, explanations, examples and/or best practices. Relevant bodies of knowledge for software testing include the SWEBOK [67], the SE terms by ISO, IEC and IEEE in [31], also known as SEVOCAB, and the ST terms by ISTQB in [32], also known as the ISTQB Glossary.
Furthermore, ROoST [56], being also part of SEON, OntoTest [4] or the ontology presented in [71] focus on dynamic testing only. Although TestTDO [61] includes static testing, it does not link to bodies of knowledge either.
It should also be noted that large-scale research taxonomies such as the Computer Science Ontology CSO [48] do not detail software testing. It categorises ’Testing and Debugging’ as a research topic comprising 15 subtopics only. Similarly, the ACM Computing Classification System [47] only covers some aspects of software testing research.
In summary, and to the best of the author’s knowledge, there is no software testing ontology that
– is formally defined, machine-processable, and downloadable as OWL (or in a comparable format), can be reused and extended for new application scenarios like software testing research classification,
– is closely linked to well-established and standardized bodies of knowledge, and
– covers the software testing knowledge area extensively.
The development of this ontology is described in the following section.
# 3 The Ontology for Artificial Intelligence in Software Testing – ai4st
In order to support the classification of research contributions in the area of using AI methods and techniques for the improvement of ST activities and processes, a dedicated taxonomy named ai4st has been developed by
– making primarily use of the terms defined by ISTQB [32]1,
– defining it in OWL[38] to support machine-processability,
– assigning the CC-BY-SA license [9] to support reuse, and
– providing it via GitHub [51] to enable contributions and/or the uptake by others.
Furthermore, the ai4st taxonomy is based on a four layer model consisting of
– the lightweight universal foundational ontology gUFO [3] for grounding, $-$ the software testing concept ontology stc [53],
– the AI for software engineering ontology ai4se [49], and
– the consolidated overarching ai4st ontology itself.
The ontology gUFO [3] is a simplified version of the Unified Foundational Ontology UFO, designed for easier integration with ontology-driven conceptual modelling, particularly in domains such as information systems. It supports objects (enduring entities) and qualities related to them, which are used to classify research papers into the dimensions of the ai4st taxonomy.
The stc ontology [53] represents a selection of terms in the ISTQB glossary. The glossary contains keywords from software testing syllabi, covering foundation, advanced, and expert-level concepts, methods, and techniques in the software testing profession. Currently under development using the Protégé tool, the stc ontology consists of over 200 classes representing software testing terms and 20 object properties representing their relations. Each term is described as being defined by ISTQB, SEVOCAB, or as proprietary. The development of stc began with the concept maps provided by ISTQB and has evolved beyond them, as these concept maps are informal and mainly represent top-level terms in software testing.
As a predecessor to ai4st, the ai4se taxonomy [49] was created to structure the emerging research field of applying AI to SE and to address its nuances. ai4se is structured along four dimensions:
– Purpose: The goal of using AI is to understand, generate or improve SE artefacts/processes. A new approach may address one, two, or all three of these purposes; for example, it may address both understanding and generation.
– Target: The SE activity is addressed in (1) development and (2) operations, as well as in the corresponding (3) processes. Whenever models are central to an approach, as they are in Model-Driven SE, this is denoted as well. An approach may target several SE activities. SE processes consist of SE activities and constitute activities themselves, allowing for a more detailed representation of an SE target.
– AI Type: The AI techniques being used by an approach, including Symbolic, Subsymbolic, Generative, Agentic, and General AI. One approach may use several types of AI.
– Level: The degree of automation is based on a five-level scale ranging from (1) no support to (5) full automation. The highest level achieved by an approach is indicated. Currently, level 3 (AI-assisted selection) is the most common.
When the ai4se taxonomy was used to classify a large set of research papers, it became clear that a tool-based approach, such as that offered by the Semantic Web, was preferable to manual classification. Hence, the ai4se ontology was developed [50]. It also became apparent that, in order to address the specifics of using AI for software testing, a more specific taxonomy and machine-processable ontology for software testing research would be preferable. The resulting ai4st ontology [51] is shown in Figure 1 with its (top-level) dimensions.
# 4 Classification of AI for Software Testing Research
The classification of AI for software testing research is an ongoing project. To check the validity of the ai4st ontology, an adapted, lightweight systematic literature review (SLR), as described in [59], was conducted to analyse related research. This SLR protocol was followed:
– Review title: Initial SLR on the application of the ai4st taxonomy.
– Objectives of the review: 1. To test the validity of the ai4st taxonomy with an initial research selection. 2. To determine a classification of this initial research selection.
# – Research questions:
RQ1: Which standardized terms are being used in ai4st related research? • RQ2: Which alternative terms are being used in ai4st related research? RQ3: Are the ai4st taxonomy dimensions useful to classify the pre-selected research?
– Database: Research papers from the conference proceedings of the latest International Conference on Software Engineering, ICSE 2025 in Ottawa, Canada and its co-located conferences and workshops; and referenced papers for backward snowballing the research. Forward snowballing was in this case unnecessary, as ICSE 2025 represented the most recent research publications at that time. A complementary search of the IEEE and ACM digital libraries has added further software testing and AI-related research papers published between 2020 and 2025.
# – Inclusion criteria:
Peer-reviewed original research.
Online available.
Research on AI for ST.
# – Exclusion criteria:
• Meta-research such as evaluations, benchmarking, comparisons, surveys, taxonomies, roadmaps Testing of software-based systems like IoT, cloud, vehicle, etc.
Research on ST for AI.
Posters and tutorials.
# – Selection process:
1. Title and abstract screening for the pre-selection of unique research candidates by use of
(a) The dimensions of the ai4st taxonomy: Research Topic, Solution Purpose, ST Target, AI Type, Automation Level.
(c) The solution purpose classifiers to understand, generate, or improve ST artefacts
(e) The target classifiers represent the ST techniques, activities, or processes to which an AI system is being applied2
(b) The classification of a sample paper about testing and debugging, aiming at understanding and improving the testing activities, using deep learning methods, and providing AI-assisted selections.
(d) The automation level classifiers consisting of no AI support, providing AI-assisted options or AIassisted selections, or supporting AI-driven partial automation or AI-driven full automation.
(f) The AI type classifiers for sumbolic, subsymbolic, generative, agentic, or general AI. Subsymbolic AI is classified into statistical, classical machine learning, evolutionary algorithms, swarm intelligence, and deep learning AI.
Fig. 1: Research Paper Classification with ai4st
(a) the concept map resulting from the stc ontology [53] to identify ST-related research, and
(b) the concept map resulting from the ai4st dimensions ’AI type’ to identify AI related research in the ST-related subset of research.
2. Full text review and assessment of the research contributions for the final selection of unique research.
3. Tools:
Online research libraries, including dblp, ACM DL, IEEE Xplore, and Google Scholar to identify related work; and
Python for text analysis and post-processing of finally selected research, supported by MS Visual Studio, Google AI Studio, and LibreOffice.
# – Synthesis process:
• Review of the new and synonym candidate terms for inclusion into the stc ontology.
Classification of the selected research for inclusion into the ai4st ontology.
Alongside the analysis of recent AI for ST research, a new SLR approach has been developed. Rather than using simple search expressions, the pre-selection of papers uses more detailed concept maps that are derived from the relevant ontologies and also include synonyms, as shown in Figure 2. Additionally, the SLR results are used to verify, improve and extend the ontologies further. Therefore, assessing the research texts also involves searching for new term and new synonym candidates. The potential for further refining this kind of ontology-based SLR, as well as SLR-based ontology refinement, depends on the features of digital library APIs that enable more powerful automated searches and research (meta-)data collections.
Fig. 2: Overview on the ontology-driven systematic literature review (SLR) combined with the SLRdriven ontology development
In result of the SLR and text (title and abstract) analysis, of the 1643 papers identified, 1150 referred to a term in the stc ontology in their abstracts and/or titles, but 735 of these referred to only one term, indicating that the paper merely references a fact about software testing. Another 1337 papers used variations of the terms in the stc ontology, such as ’unit test’ instead of ’unit-level test’3. 460 papers referred to only one variation. Papers using two or more original or alternative terms, what makes them candidate papers for the ai4st taxonomy, form a body of 949 papers. Of these 949 papers, 656 contained terms from the ai4st ontology related to AI. Within the 656 papers, 38 relevant original research papers for ai4st were identified.
Furthermore, the text analysis revealed 40 new stc term candidates, 53 new stc synonym candidates and 26 terms that can be treated as either new term or synonym candidates. These new term candidates include terms such as ’test result’ and ’fuzz testing’, which are included in the ISTQB Glossary – a collection of over 600 terms – but are not yet included in the initial stc ontology, which contains over 200 terms. The new term candidates also include terms such as ’flaky test’ and ’genetic testing’, which are not included in the ISTQB Glossary but are extensively discussed in research. Another four new term candidates, such as ’test bot’ and ’bias testing’, and one new synonym candidate, ’AI-based’, have been identified for the ai4st ontology. This is mainly because the SRL focused on ST rather than AI. In the next release of ai4st, the decision will be made as to which new or synonym candidates, beyond those required for research classification in this paper (see below), will be added and whether distinguishing AI types in more detail would be useful.
Table 1: List of Research Papers
Due to space limitations, the classification of the selected 38 papers given in Table 1 is not fully shown here. The complete results on pre-selected research and finally selected unique research, as well as the usage of terms and the assessment of new term and synonym candidates, are provided in [52].
The research questions RQ1, RQ2, and RQ3 of this SLR are answered briefly as follows: Classifying the unique research led to an extension of the stc ontology with:
– three new terms defined in the ISTQB Glossary: ’visual testing’ [19, 45, 43], ’assertion’ [66, 44], and ’penetration testing’ [28, 10]
– eight new terms not in the ISTQB Glossary: • two test techniques: ’mutation testing’ [6] and ’concolic testing’ [24]; • five test activities: ’test selection’ [60], ’test generation’ [70, 22, 20], ’test verification’ [42], ’test prioritization’ [1], and ’test documentation’ [13]; • one non-functional testing: ’penetration testing’ [28, 10]; and • one basic concept: ’ethics’ [60]
Furthermore, the new synonym ’test architecture’ [26] for ’test approach’ was added. In response to RQ1 and RQ2, the software testing targets in the unique research papers were successfully classified by combining the new terms and synonyms with the terms in stc, and hence in ai4st.
Alongside this, RQ3 can also be answered positively. This small selection of unique research covers all potential purposes and levels of automation supported by AI. With regard to the types of AI, all but general AI (due to its non-existence) and evolutionary algorithms (which are currently not in focus) are represented. Additionally, 28 software testing targets are addressed, representing over $1 0 \%$ of the extended stc ontology.
In addition, ai4st can be used as a research knowledge base. One straightforward application is elaborating on the classified research corpus: As the classification can be queried like a database, it is easy to formulate queries about the research corpus, such as which software testing targets are addressed or which research supports AI-assisted option automation, see the listings below.
PREFIX rdf : <http :// www .w3. org /1999/02/22 - rdf -syntax -ns#>
2 PREFIX ai4st : <http :// purl . org / ai4st / ontology #>
3 SELECT ? paper
4 WHERE {
5 ? paper ai4st : hasLevel ai4st :AI - assisted_options
6 }
Listing 1.1: All papers on AI-assisted options automation.
PREFIX rdf : <http :// www .w3. org /1999/02/22 - rdf -syntax -ns#> 2 PREFIX ai4st : <http :// purl . org / ai4st / ontology #> 3 SELECT DISTINCT ? target WHERE { 5 ? paper rdf $\because$ type ai4st : ResearchPaper . 6 ? paper ai4st : hasTarget ? target . 7 }
Listing 1.2: All software testing targets addressed by research papers.
# 5 Outlook
This paper describes ongoing work on representing the exhaustive body of knowledge on software testing using an in-depth ontology. This ontology can also form the basis for exploring new research fields in software testing, such as the emerging area of using AI techniques and tools in software testing (ST). To this end, the paper presents the initial versions of
– the stc ontology on software testing concepts, which is mainly based on the ISTQB Glossary and completed with SEVOCAB and proprietary software testing vocabulary.
– the ai4st ontology that classifies AI for ST research according to the purpose of the AI-based solution, the software testing target being addressed, the type of AI being used, and the level of automation achieved.
– an exemplary SLR on AI for ST, revealing 38 original research papers classified in the ai4st ontology.
The research results including the ontologies stc and ai4st as well as the paper selections from the SLR are available online for reuse and further uptake. The ai4st can be used not only to understand the concepts in this research field better, but also to explore research related to a specific aspect, such as all papers on agentic AI for ST, using SPARQL queries.
The next step will be to extend stc to cover the remaining ISTQB terms, to further refine ai4st to cover more AI-related details, and and carefully revise new term and synonym candidates stemming from SLRs for potential addition. This will form the basis for classifying further research results on the application of ai4st including stc.
Acknowledgement The ideas presented in this paper were developed through constructive dialogue within the Testing, Analysis and Verification (TAV) section of the German Informatics Society (GI), the German Testing Board (GTB), and the International Software Testing Qualifications Board (ISTQB). While the author wrote this paper independently, she acknowledges that the writing process was aided by DeepL to fine-tune the wording. The author has no competing interests to declare that are relevant to the content of this article.
# Bibliography
[1] Abdelkarim, M., ElAdawi, R.: TCP-Net++: Test Case Prioritization Using End-to-End Deep Neural Networks - Deployment Analysis and Enhancements. In: 2023 IEEE International Conference On Artificial Intelligence Testing (AITest). pp. 99–106. IEEE, Athens, Greece (Jul 2023). https://doi.org/ 10.1109/AITest58265.2023.00024, https://ieeexplore.ieee.org/document/10229439/
[2] Abran, A., Cuadrado, J.J., García-Barriocanal, E., Mendes, O., Sánchez-Alonso, S., Sicilia, M.A.: Engineering the ontology for the SWEBOK: Issues and techniques. Ontologies for software engineering and software technology pp. 103–121 (2006)
[3] Almeida, J.P.A., Guizzardi, G., Falbo, R., Sales, T.P.: gufo: a lightweight implementation of the unified foundational ontology (ufo). URL http://purl. org/nemo/doc/gufo (2019)
[4] Barbosa, E.F., Nakagawa, E.Y., Maldonado, J.C.: Towards the establishment of an ontology of software testing. In: Seke. vol. 6, pp. 522–525 (2006)
[5] Borges Ruy, F., de Almeida Falbo, R., Perini Barcellos, M., Dornelas Costa, S., Guizzardi, G.: SEON: A software engineering ontology network. In: Knowledge Engineering and Knowledge Management: 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings 20. pp. 527–542. Springer (2016)
[6] Caglar, O., Taskin, F., Baglum, C., Asik, S., Yayan, U.: Development of Cloud and Artificial Intelligence based Software Testing Platform (ChArIoT). In: 2023 Innovations in Intelligent Systems and Applications Conference (ASYU). pp. 1–6. IEEE, Sivas, Turkiye (Oct 2023). https://doi.org/10.1109/ ASYU58738.2023.10296551, https://ieeexplore.ieee.org/document/10296551/
[7] Calero, C., Ruiz, F., Piattini, M.: Ontologies for software engineering and software technology. Springer Science & Business Media (2006)
[8] Calvano, M., Curci, A., Lanzilotti, R., Piccinno, A., Ragone, A.: Leveraging Large Language Models for Usability Testing: a Preliminary Study. In: Companion Proceedings of the 30th International Conference on Intelligent User Interfaces. pp. 78–81. ACM, Cagliari Italy (Mar 2025). https: //doi.org/10.1145/3708557.3716341, https://dl.acm.org/doi/10.1145/3708557.3716341
[9] CC: CC BY-SA 4.0 license, attribution-sharealike 4.0 international, legal code (2025), https:// creativecommons.org/licenses/by-sa/4.0/legalcode.en
[10] Confido, A., Ntagiou, E.V., Wallum, M.: Reinforcing Penetration Testing Using AI. In: 2022 IEEE Aerospace Conference (AERO). pp. 1–15. IEEE, Big Sky, MT, USA (Mar 2022). https://doi.org/10. 1109/AERO53065.2022.9843459, https://ieeexplore.ieee.org/document/9843459/
[11] De Almeida, g., Collins, E., Oran, A.C.: AI in Service of Software Quality: How ChatGPT and Personas Are Transforming Exploratory Testing. In: Proceedings of the XXIII Brazilian Symposium on Software Quality. pp. 179–188. ACM, Salvador Bahia Brazil (Nov 2024). https://doi.org/10.1145/3701625. 3701657, https://dl.acm.org/doi/10.1145/3701625.3701657
[12] De Santiago Júnior, V.A.: A method and experiment to evaluate deep neural networks as test oracles for scientific software. In: Proceedings of the 3rd ACM/IEEE International Conference on Automation of Software Test. pp. 40–51. ACM, Pittsburgh Pennsylvania (May 2022). https://doi.org/10.1145/ 3524481.3527232, https://dl.acm.org/doi/10.1145/3524481.3527232
[13] Djajadi, N., Deljouyi, A., Zaidman, A.: Using Large Language Models to Generate Concise and Understandable Test Case Summaries. In: Early Research Achievements (ERA). https://doi.org/10.1109/ ICPC66645.2025.00040, https://azaidman.github.io/publications/djajadiICPC2025.pdf
[14] Engström, E., Petersen, K., Ali, N.B., Bjarnason, E.: SERP-test: a taxonomy for supporting industry– academia communication. Software Quality Journal 25, 1269–1305 (2017)
[15] Felderer, M., Schieferdecker, I.: A taxonomy of risk-based testing. International Journal on Software Tools for Technology Transfer 16, 559–568 (2014)
[16] Felderer, M., Zech, P., Breu, R., Büchler, M., Pretschner, A.: Model-based security testing: a taxonomy and systematic classification. Software testing, verification and reliability 26(2), 119–148 (2016)
[17] Ferreira, M., Viegas, L., Faria, J.P., Lima, B.: Acceptance Test Generation with Large Language Models: An Industrial Case Study (Apr 2025). https://doi.org/10.48550/arXiv.2504.07244, http:// arxiv.org/abs/2504.07244, arXiv:2504.07244 [cs]
[18] Franzosi, D.B., Alégroth, E., Isaac, M.: LLM-Based Labelling of Recorded Automated GUI-Based Test Cases. In: 2025 IEEE Conference on Software Testing, Verification and Validation (ICST). pp. 453–463. IEEE, Napoli, Italy (Mar 2025). https://doi.org/10.1109/ICST62969.2025.10988984, https: //ieeexplore.ieee.org/document/10988984/
[19] Gamal, A., Emad, R., Mohamed, T., Mohamed, O., Hamdy, A., Ali, S.: Owl Eye: An AI-Driven Visual Testing Tool. In: 2023 5th Novel Intelligent and Leading Emerging Sciences Conference (NILES). pp. 312–315. IEEE, Giza, Egypt (Oct 2023). https://doi.org/10.1109/NILES59815.2023.10296575, https: //ieeexplore.ieee.org/document/10296575/
[20] Gao, H., Yang, Y., Sun, M., Wu, J., Zhou, Y., Xu, B.: ClozeMaster: Fuzzing Rust Compiler by Harnessing LLMs for Infilling Masked Real Programs. pp. 712–712. IEEE Computer Society (Mar 2025). https://doi.org/10.1109/ICSE55347.2025.00175, https://www.computer.org/csdl/proceedings-article/ icse/2025/056900a712/251mH1NLq1y, iSSN: 1558-1225
[21] Gao, J., Li, S., Tao, C., He, Y., Anumalasetty, A.P., Joseph, E.W., Sripathi, A.H.K., Nayani, H.: An Approach to GUI Test Scenario Generation Using Machine Learning. In: 2022 IEEE International Conference On Artificial Intelligence Testing (AITest). pp. 79–86. IEEE, Newark, CA, USA (Aug 2022). https://doi.org/10.1109/AITest55621.2022.00020, https://ieeexplore.ieee.org/document/9898132/
[22] Garg, A., Sharma, D.: Generative AI for Software Test Modelling with a focus on ERP Software. In: 2023 International Conference on Advances in Computation, Communication and Information Technology (ICAICCIT). pp. 187–193. IEEE, Faridabad, India (Nov 2023). https://doi.org/10.1109/ ICAICCIT60255.2023.10466102, https://ieeexplore.ieee.org/document/10466102/
[23] Garlapati, A., Satya Sai Muni Parmesh, M.N.V., Savitha, S, J.: AI-Powered Multi-Agent Framework for Automated Unit Test Case Generation: Enhancing Software Quality through LLM’s. In: 2024 5th IEEE Global Conference for Advancement in Technology (GCAT). pp. 1–5. IEEE, Bangalore, India (Oct 2024). https://doi.org/10.1109/GCAT62922.2024.10923987, https://ieeexplore.ieee. org/document/10923987/
[24] Ghimis, B., Paduraru, M., Stefanescu, A.: RIVER 2.0: an open-source testing framework using AI techniques. In: Proceedings of the 1st ACM SIGSOFT International Workshop on Languages and Tools for Next-Generation Testing. pp. 13–18. ACM, Virtual USA (Nov 2020). https://doi.org/10. 1145/3416504.3424335, https://dl.acm.org/doi/10.1145/3416504.3424335
[25] Guizzardi, G., Fonseca, C.M., Benevides, A.B., Almeida, J.P.A., Porello, D., Sales, T.P.: Endurant types in ontology-driven conceptual modeling: Towards ontouml 2.0. In: Conceptual Modeling: 37th International Conference, ER 2018, Xi’an, China, October 22–25, 2018, Proceedings 37. pp. 136–150. Springer (2018)
[26] Hagar, J., Wissink, T.: AIs Understanding of Software Test Architecture. In: 2025 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). pp. 194– 199. IEEE, Naples, Italy (Mar 2025). https://doi.org/10.1109/ICSTW64639.2025.10962517, https: //ieeexplore.ieee.org/document/10962517/
[27] Haldar, S., Pierce, M., Capretz, L.F.: WIP: Assessing the Effectiveness of ChatGPT in Preparatory Testing Activities. In: 2024 IEEE Frontiers in Education Conference (FIE). pp. 1–5. IEEE, Washington, DC, USA (Oct 2024). https://doi.org/10.1109/FIE61694.2024.10893214, https://ieeexplore.ieee.org/ document/10893214/
[28] Happe, A., Cito, J.: Getting pwn’d by AI: Penetration Testing with Large Language Models. In: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. pp. 2082–2086. ACM, San Francisco CA USA (Nov 2023). https://doi.org/10.1145/3611643.3613083, https://dl.acm.org/doi/10.1145/3611643.3613083
[29] Helmy, M., Sobhy, O., ElHusseiny, F.: AI-Driven Testing: Unleashing Autonomous Systems for Superior Software Quality Using Generative AI. In: 2024 International Telecommunications Conference (ITC-Egypt). pp. 1–6. IEEE, Cairo, Egypt (Jul 2024). https://doi.org/10.1109/ITC-Egypt61547.2024. 10620598, https://ieeexplore.ieee.org/document/10620598/
[30] Hossain, S.B., Dwyer, M.: TOGLL: Correct and Strong Test Oracle Generation with LLMs (Dec 2024). https://doi.org/10.48550/arXiv.2405.03786, http://arxiv.org/abs/2405.03786, arXiv:2405.03786 [cs]
[31] ISO, IEC, IEEE: ISO/IEC/IEEE 24765 international standard, second edition: Systems and software engineering – vocabulary (2017), https://pascal.computer.org
[32] ISTQB: Glossary of the international software testing qualifications board (2025), https://glossary. istqb.org
[33] Kapoor, S.: AI-Assisted Test Script Generation for GUI Applications. In: 2025 Fifth International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies (ICAECT). pp. 1–5. IEEE, Bhilai, India (Jan 2025). https://doi.org/10.1109/ICAECT63952.2025. 10958949, https://ieeexplore.ieee.org/document/10958949/
[34] Kaur, A.: An Approach To Extract Optimal Test Cases Using AI. In: 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence). pp. 649–654. IEEE, Noida, India (Jan 2020). https://doi.org/10.1109/Confluence47617.2020.9058244, https://ieeexplore.ieee.org/ document/9058244/
[35] Leu, B., Volken, J., Kropp, M., Dogru, N., Anslow, C., Biddle, R.: Reducing Workload in Using AIbased API REST Test Generation. In: Proceedings of the 5th ACM/IEEE International Conference on Automation of Software Test (AST 2024). pp. 147–148. ACM, Lisbon Portugal (Apr 2024). https: //doi.org/10.1145/3644032.3644449, https://dl.acm.org/doi/10.1145/3644032.3644449
[36] Maia, C.J.D.L., Aguiar, Y.P.C.: AI-Driven Acceptance Testing: first insights exploring the educational potential for test analysts. In: Proceedings of the XXIII Brazilian Symposium on Software Quality. pp. 665–672. ACM, Salvador Bahia Brazil (Nov 2024). https://doi.org/10.1145/3701625.3701691, https: //dl.acm.org/doi/10.1145/3701625.3701691
[37] Martin-Lopez, A.: AI-driven web API testing. In: Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Companion Proceedings. pp. 202–205. ACM, Seoul South Korea (Jun 2020). https://doi.org/10.1145/3377812.3381388, https://dl.acm.org/doi/10.1145/3377812. 3381388
[38] McGuinness, D.L., Van Harmelen, F., et al.: Owl web ontology language overview. W3C recommendation 10(10), 2004 (2004)
[39] Mohacsi, S., Felderer, M.: AI-Based Enhancement of Test Models in an Industrial Model-Based Testing Tool. In: 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). pp. 636–638. IEEE, Honolulu, HI, USA (Mar 2021). https://doi.org/10.1109/SANER50967. 2021.00080, https://ieeexplore.ieee.org/document/9426031/
[40] Naimi, L., Bouziane, E.M., Manaouch, M., Jakimi, A.: A new approach for automatic test case generation from use case diagram using LLMs and prompt engineering. In: 2024 International Conference on Circuit, Systems and Communication (ICCSC). pp. 1–5. IEEE, Fes, Morocco (Jun 2024). https: //doi.org/10.1109/ICCSC62074.2024.10616548, https://ieeexplore.ieee.org/document/10616548/
[41] Olmez, M.M., Gehringer, E.: Automation of Test Skeletons Within Test-Driven Development Projects. In: 2024 36th International Conference on Software Engineering Education and Training (CSEE&T). pp. 1–10. IEEE, Würzburg, Germany (Jul 2024). https://doi.org/10.1109/ CSEET62301.2024.10663016, https://ieeexplore.ieee.org/document/10663016/
[42] Peixoto, M., Baía, D., Nascimento, N., Alencar, P., Fonseca, B., Ribeiro, M.: On the Effectiveness of LLMs for Manual Test Verifications. In: 2025 IEEE/ACM International Workshop on Deep Learning for Testing and Testing for Deep Learning (DeepTest). pp. 45–52. IEEE, Ottawa, ON, Canada (May 2025). https://doi.org/10.1109/DeepTest66595.2025.00012, https://ieeexplore.ieee.org/document/11026915/
[43] Prasetya, I.S.W.B., Shirzadehhajimahmood, S., Ansari, S.G., Fernandes, P., Prada, R.: An Agentbased Architecture for AI-Enhanced Automated Testing for XR Systems, a Short Paper. In: 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). pp. 213–217. IEEE, Porto de Galinhas, Brazil (Apr 2021). https://doi.org/10.1109/ICSTW52544.2021. 00044, https://ieeexplore.ieee.org/document/9440175/ [44] Primbs, S., Fein, B., Fraser, G.: AsserT5: Test Assertion Generation Using a Fine-Tuned Code Language Model (Feb 2025). https://doi.org/10.1109/AST66626.2025.00008, http://arxiv.org/abs/2502.
02708, arXiv:2502.02708 [cs] [45] Ragel, R.K.C., Balahadia, F.F.: Visual Test Framework: Enhancing Software Test Automation with Visual Artificial Intelligence and Behavioral Driven Development. In: 2023 IEEE 15th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM). pp. 1–5. IEEE, Coron, Palawan, Philippines (Nov 2023). https: //doi.org/10.1109/HNICEM60674.2023.10589222, https://ieeexplore.ieee.org/document/10589222/ [46] Robinson, P., Ragusa, C.: Taxonomy and requirements rationalization for infrastructure in cloud-based software testing. In: 2011 IEEE Third International Conference on Cloud Computing Technology and Science. pp. 454–461. IEEE (2011) [47] Rous, B.: Major update to acm’s computing classification system. Communications of the ACM 55(11),
12–12 (2012) [48] Salatino, A.A., Thanapalasingam, T., Mannocci, A., Birukou, A., Osborne, F., Motta, E.: The computer science ontology: A comprehensive automatically-generated taxonomy of research areas. Data Intelligence 2(3), 379–416 (2020) [49] Schieferdecker, I.K.: Next-gen software engineering: Ai-assisted big models. arXiv preprint arXiv:2409.18048 (2024) [50] Schieferdecker, I.K.: ai4se - the AI for software engineering ontology (2025), https://github.com/ schieferdecker/ai4se [51] Schieferdecker, I.K.: ai4st - the AI for software testing ontology (2025), https://github.com/ schieferdecker/ai4st [52] Schieferdecker, I.K.: Annex for ’a taxonomy for ai-augmented software testing’ (2025), https://github. com/schieferdecker/ai4stpaper [53] Schieferdecker, I.K.: stc - the software testing concept ontology (2025), https://github.com/ schieferdecker/stc [54] Shirzadehhajimahmood, S., Prasetya, I.S.W.B., Dignum, F., Dastani, M., Keller, G.: Using an agentbased approach for robust automated testing of computer games. In: Proceedings of the 12th International Workshop on Automating TEST Case Design, Selection, and Evaluation. pp. 1–8. ACM, Athens Greece (Aug 2021). https://doi.org/10.1145/3472672.3473952, https://dl.acm.org/doi/
10.1145/3472672.3473952 [55] Sicilia, M., Cuadrado, J.J., García, E., Rodríguez, D., Hilera, J.R.: The evaluation of ontological representation of the SWEBOK as a revision tool. In: 29th Annual International Computer Software and Application Conference (COMPSAC), Edinburgh, UK. pp. 26–28 (2005) [56] Souza, É.F.d., Falbo, R.d.A., Vijaykumar, N.L.: ROoST: Reference ontology on software testing. Applied Ontology 12(1), 59–90 (2017) [57] Spillner, A., Bremenn, H.: The w-model. strengthening the bond between development and test. In: Int. Conf. on Software Testing, Analysis and Review. pp. 15–17 (2002) [58] Spillner, A., Linz, T.: Software testing foundations: A study guide for the certified tester examfoundation level-ISTQB® compliant. dpunkt. verlag (2021) [59] Stapic, Z., López, E.G., Cabot, A.G., de Marcos Ortega, L., Strahonja, V.: Performing systematic literature review in software engineering. In: Central European conference on information and intelligent systems. p. 441. Faculty of Organization and Informatics Varazdin (2012) [60] Strandberg, P.E., Frasheri, M., Enoiu, E.P.: Ethical AI-Powered Regression Test Selection. In: 2021 IEEE International Conference on Artificial Intelligence Testing (AITest). pp. 83–84. IEEE, Oxford, United Kingdom (Aug 2021). https://doi.org/10.1109/AITEST52744.2021.00025, https://ieeexplore. ieee.org/document/9564367/ [61] Tebes, G., Olsina, L., Peppino, D., Becker, P.: TestTDO: A top-domain software testing ontology. In: CIbSE. pp. 364–377 (2020) [62] Tebes, G., Peppino, D., Becker, P., Matturro, G., Solari, M., Olsina, L.: Analyzing and documenting the systematic review results of software testing ontologies. Information and Software Technology 123,
106298 (2020)
[63] Usman, M., Britto, R., Börstler, J., Mendes, E.: Taxonomies in software engineering: A Systematic mapping study and a revised taxonomy development method. Information and Software Technology 85, 43–59 (May 2017). https://doi.org/10.1016/j.infsof.2017.01.006, https://www.sciencedirect.com/ science/article/pii/S0950584917300472
[64] Utting, M., Pretschner, A., Legeard, B.: A taxonomy of model-based testing approaches. Software testing, verification and reliability 22(5), 297–312 (2012)
[65] Villalón, J.C.M., Agustin, G.C., Gilabert, T.S.F., de Jesús Jiménez Puello, J.: A taxonomy for software testing projects. In: 2015 10th Iberian Conference on Information Systems and Technologies (CISTI). pp. 1–6 (2015). https://doi.org/10.1109/CISTI.2015.7170545
[66] Wang, H., Xu, T., Wang, B.: Deep Multiple Assertions Generation. In: Proceedings of the 2024 IEEE/ACM First International Conference on AI Foundation Models and Software Engineering. pp. 1–11. ACM, Lisbon Portugal (Apr 2024). https://doi.org/10.1145/3650105.3652293, https://dl.acm. org/doi/10.1145/3650105.3652293
[67] Washizaki, H.e.: Guide to the Software Engineering Body of Knowledge (SWEBOK Guide), Version 4.0 (2024), http://www.swebok.org
[68] Wille, C., Abran, A., Desharnais, J.M., Dumke, R.: The quality concepts and sub concepts in SWEBOK: An ontology challenge. In: International Workshop on Software Measurement (IWSM), Montreal. vol. 18 (2003)
[69] Yao, Y., Wang, J., Hu, Y., Wang, L., Zhou, Y., Chen, J., Gai, X., Wang, Z., Liu, W.: BugBlitz-AI: An Intelligent QA Assistant. In: 2024 IEEE 15th International Conference on Software Engineering and Service Science (ICSESS). pp. 57–63. IEEE, Changsha, China (Sep 2024). https://doi.org/10.1109/ ICSESS62520.2024.10719045, https://ieeexplore.ieee.org/document/10719045/
[70] Zhang, Y.: New Approaches to Automated Software Testing Based on Artificial Intelligence. In: 2024 5th International Conference on Artificial Intelligence and Computer Engineering (ICAICE). pp. 806– 810. IEEE, Wuhu, China (Nov 2024). https://doi.org/10.1109/ICAICE63571.2024.10863866, https: //ieeexplore.ieee.org/document/10863866/
[71] Zhu, H., Huo, Q.: Developing software testing ontology in UML for a software growth environment of web-based applications. In: Software Evolution with UML and XML, pp. 263–295. IGI Global (2005)
[72] Zimmermann, D., Koziolek, A.: GUI-Based Software Testing: An Automated Approach Using GPT-4 and Selenium WebDriver. In: 2023 38th IEEE/ACM International Conference on Automated Software Engineering Workshops (ASEW). pp. 171–174. IEEE, Luxembourg, Luxembourg (Sep 2023). https: //doi.org/10.1109/ASEW60602.2023.00028, https://ieeexplore.ieee.org/document/10298721/ | In industry, software testing is the primary method to verify and validate
the functionality, performance, security, usability, and so on, of
software-based systems. Test automation has gained increasing attention in
industry over the last decade, following decades of intense research into test
automation and model-based testing. However, designing, developing, maintaining
and evolving test automation is a considerable effort. Meanwhile, AI's
breakthroughs in many engineering fields are opening up new perspectives for
software testing, for both manual and automated testing. This paper reviews
recent research on AI augmentation in software test automation, from no
automation to full automation. It also discusses new forms of testing made
possible by AI. Based on this, the newly developed taxonomy, ai4st, is
presented and used to classify recent research and identify open research
questions. | [
"cs.SE",
"cs.AI",
"D.2.5"
] |
# 1 Introduction
Deep learning has emerged as a powerful technique to learn complex patterns from data [29], leading to advancements in many areas of science, such as protein modeling [22], genetics [43], and climate science [48]. However, the black-box nature of neural networks makes it challenging to translate their predictive ability into scientific insights that can be understood by humans [40]. Neural networks can even produce predictions that violate basic scientific laws like mass conservation [24]. It is still a grand challenge to leverage existing scientific knowledge – insights that have been gained through centuries of experimentation and theorizing – to improve the neural network [23].
As a motivating example, consider the task of modeling the soil organic carbon (SOC) cycle. Soils play a crucial role in the global carbon cycle – they store more carbon than plants and the atmosphere combined [21], and have the potential to mitigate climate change by sequestering carbon dioxide from the atmosphere [27]. Yet it is difficult to understand how carbon flows through the soil, leading to high uncertainties in future climate projections [38]. Out of the box, neural networks can predict the total amount of organic carbon stored in the soil at each location [52]. However, neural networks do not respect prior scientific knowledge about the carbon cycle, nor do they give us new insights into the biogeochemical processes governing the soil carbon cycle – which control how much carbon can be stored in the soil and how long it will remain sequestered there.
To gain insight into these biogeochemical processes, ecologists have developed process-based models to simulate the soil carbon cycle [37]. These models have been meticulously developed through years of research – they may contain hundreds of pools representing different types of carbon, and matrices specifying flux rates between each pair of pools [39, 18, 34]. However, these models contain many unknown parameters, which traditionally must be tuned by human experts through an inefficient trial-and-error process [36]. These models often cannot fit observed data well, especially in crossscale predictions, mainly due to a poor understanding of the relationships between environmental conditions and the parameter values. This is a key bottleneck in using process-based models for verifiable predictions on the global soil carbon cycle and its response to climate change.
A few prior works in hydrology [55, 15], phenology [56], ecosystem modeling [46], and soil biogeochemistry [59] have proposed implementing process-based models in a differentiable way so that poorly-understood parameters can be optimized by backpropagation or replaced with neural networks. While these approaches aim to combine process-based models with deep learning, the neural network components remain too opaque for scientists to understand, which leaves the relationships between the latent parameters and input features unclear.
To address these limitations, we propose a Scientifically-Interpretable Reasoning Network (ScIReN), an end-to-end differentiable framework that embeds scientific process-based models into a fullytransparent neural model, creating a system that respects scientific knowledge, learns from data in an end-to-end manner, and can discover new scientific relationships. ScIReN contains three main components. First, a learnable encoder takes in environmental features for a given location, and predicts scientifically-meaningful latent parameters. We use a fully-interpretable network such as a sparse Kolmogorov-Arnold network [32] to make this portion fully transparent, allowing scientists to understand the relationships between these latent parameters and input features. Second, a novel hard-sigmoid constraint layer projects these parameters into a physically-plausible range set by prior knowledge. Finally, a process-based decoder uses these predicted parameters to simulate the flow of carbon through the soil based on scientific knowledge, and finally predicts the output variables (e.g. amount of soil organic carbon at each depth in the soil). We compare the predicted outputs with ground-truth labels, and backpropagate this loss to train the entire system.
Our main contributions are: (1) We propose a fully-interpretable framework for combining scientific reasoning (in the form of process-based modeling) with data-driven learning. The model infers latent mechanisms that are quantified by scientific parameters used in process-based models, and reveals their relationships to input variables in a transparent way. (2) We balance smoothness and expressivity in the learned functional relationships by using B-splines with smoothness penalties. (3) We propose a novel hard-sigmoid constraint layer to constrain the scientific parameters to fall within physically-plausible ranges. (4) We validate our technique on two scientific domains. First, we apply ScIReN on a model of ecosystem respiration; ScIReN discovers the correct relationships between latent parameters and input features, while other methods do not. It also improves out-of-distribution extrapolation compared to black-box models. Second, we test ScIReN on the challenging task of modeling the soil carbon cycle, where we need to simulate carbon flows through 140 soil pools at each location. On a synthetic dataset, ScIReN is able to predict unlabeled latent biogeochemical parameters and accurately retrieve their relationships with environmental features. With real data, ScIReN simulates observed soil carbon amounts with accuracy comparable to blackbox methods. We hope our work inspires other researchers to apply and extend ScIReN across diverse scientific tasks, advancing AI’s capacity for interpretable scientific discovery.
# 2 Related work
Knowledge-Guided Machine Learning. There is a rich history of incorporating prior knowledge into neural networks by modifying the loss function, pretraining procedure, or model architecture [25, 58]. A simple approach is to add a loss term that penalizes when physical laws (such as energy conservation) are violated [10, 4, 19]. For example, the density of water is known to increase monotonically with depth; thus, Daw et al. [10] and Jia et al. [19] use a “monotonicity loss” that penalizes when the model’s predictions are not monotonic. Physics-informed neural networks assume that the governing equation of a system is known, and penalize when its predictions and gradients violate this equation [44]. Although these approaches encourage the model to comply with physical laws, they cannot guarantee that the model will fully satisfy them [58]. Adding these loss terms makes the loss landscape more complex, making the model difficult to train [26, 45]. These approaches also do not help scientists gain new insights about the relationships between variables.
Scientific process-based models can also be used to generate synthetic data to pretrain the network, as done in lake temperature modeling [20] and agriculture [30]. While this increases the amount of data available to train the network, simply training on process-based model outputs does not provide new insights into physical processes, nor does it guarantee that the predictions satisfy physical constraints.
One can also design model architectures to encode prior knowledge. Convolutional neural networks encode inductive biases such as translation equivariance and locality into the model architecture, reducing the amount of training data required to classify images [28]. In lake temperature modeling, Daw et al. [9] design a monotonicity-preserving LSTM that produces monotonically-increasing intermediate variables (densities) by design. In agriculture, Liu et al. [30] design a hierarchical neural network that incorporates causal relations between different variables. However, it is difficult to design a new architecture for every problem, and these models are still black boxes.
Combining reasoning and learning. A few works reason about constraints and prior knowledge within the network itself. Deep Reasoning Networks use entropy-based losses to encourage the latent space to be interpretable and satisfy constraints (e.g. in Sudoku, each row must contain exactly one of each number) [5]. This approach was used to solve the phase-mapping problem in materials discovery, where the constraints are thermodynamic rules [6]. CLR-DRNets enhanced the reasoning process using a modified LSTM, and used curriculum learning to improve trainability [3]. Physicallyinformed Graph-based DRNets add a physical decoder that reconstructs X-ray diffraction patterns based on Bragg’s law [41]. CS-SUNet uses a smoothness loss to encourage pixels with similar input features to have similar predictions; this inductive bias helps the model predict vegetation productivity at a much finer resolution than the labels [14]. While these approaches have an interpretable latent space, the bulk of the network remains uninterpretable.
Process-based models. Scientists develop process-based models to simulate physical processes based on domain knowledge [8]. These models consist of mathematical equations that describe the relationships between various variables. In soil science, pool-and-flux models are common, where a matrix equation tracks the amount of carbon at each soil depth and matter type [37]. Transition matrices encode the rate at which carbon is transferred between pools, which are functions of soil and climate properties. Many scientific models can be unified under this matrix form [18, 39].
Unfortunately, despite their sophistication, these models have difficulty matching real observations, and have numerous unknown parameters that are traditionally set in an ad-hoc way. These unknown parameters need to vary across space and (sometimes) time, but it is unclear how to estimate these parameters [36]. A state-of-the-art approach for setting these parameters is PRODA [53]. PRODA first runs Bayesian data assimilation at each location separately to find optimal biogeochemical parameters for each location. Then, a neural network is trained to predict these optimal parameters given environmental covariates. While this approach is effective, it is computationally expensive, and is not always robust since each location’s parameters are estimated with only a few observations.
Differentiable Process-Based Models. A few works integrate process-based models into neural networks in an end-to-end differentiable framework; this has been called differentiable parameter learning [55], hybrid modeling [46], or differentiable process-based modeling [50]. For example, [56, 46, 55] used a process-based model as the main backbone, but replaced some components with neural networks when the functional form of the relationship was unknown. By implementing the
Param violation loss (push hardsigmoid input out of flat area) Environmental Forcings Kelmgon !ai Pors Gipoesle SmothL1(Y,Y (KAN) [pin,piax] : Update Neural Network Parameters through Backpropagation Interpretable Neural Biogeochemical Parameters Process-based Model Network (Encoder) (Interpretable Latent Space) (Decoder)
process-based model in a differentiable way, the model could be trained end-to-end, and unknown components could be fit using data. Xu et al. [59] scaled this approach up to a more complex soil carbon model with 21 unknown parameters and 140 carbon pools. However, the neural network component is still opaque, making it difficult for scientists to discover new relationships and insights.
Explainable and Interpretable AI. A subfield of machine learning aims to interpret how neural networks make predictions [42]. Post-hoc feature attribution methods such as SHAP [35] or Integrated Gradients [51] estimate the impact of each feature on the model’s prediction for a given example. Local surrogate models such as LIME [47] fit an interpretable surrogate model that approximates the black-box model in a small area, but it is hard to infer global behavior from local approximations. Marginal effect plots such as Partial Dependence Plots [16] or Accumulated Local Effects plots [2] visualize how each feature affects the output on average. These methods are mere approximations of a black-box model, and can produce misleading explanations if the approximation is poor [49].
On the other hand, inherently-interpretable models aim to make the entire model transparent by design [49]. In linear regression, the coefficients directly reveal how each input affects the prediction. Unfortunately, linear models are not expressive enough for many applications. Neural additive models add expressivity by modeling the output variable as the sum of single-variable functions (one for each input) [1]. However, they cannot represent complex interactions between variables. Kolmogorov-Arnold networks stack multiple additive models into layers, and provably have the ability to approximate any function [32], but are harder to interpret. All of these approaches are typically applied in supervised settings – they have not been combined with process-based models or scientific knowledge to predict unlabeled variables.
# 3 Methods
To combine scientific knowledge and data-driven learning into a fully-transparent model, ScIReN contains three main components. First, a neural network encoder $f _ { N N }$ (with learnable weights $\theta$ ) takes in input features $\mathbf { x } \in \mathbb { R } ^ { D }$ (e.g. soil and climate variables at a given location), and outputs unconstrained latent parameters $\mathbf { a } \in \mathbb { R } ^ { P }$ : $\mathbf { a } ~ = ~ f _ { N N } ( \mathbf { x } ; \theta )$ . In ScIReN, $f _ { N N }$ should be fullytransparent, which can be achieved by making it a neural additive model or sparse KolmogorovArnold network. The latent parameters are scientifically-meaningful variables that govern the underlying physical process, yet cannot be observed directly. Secondly, a constraint layer maps the unconstrained parameters to a scientifically-plausible prior range for each parameter set by prior knowledge; $\mathbf { p } = \mathrm { P r o j } ( \mathbf { a } )$ . Finally, the constrained parameters are passed through a fixed, deterministic process-based decoder $g _ { P B M }$ , which simulates the system and produces the final predicted output $\hat { y } = g _ { P B M } ( \mathbf { p } )$ . We compare this with the true label and backpropagate. The framework is summarized in Figure 1, and we elaborate on each component below.
Figure 2: Learned encoder examples: 1-layer KAN (left) and 2-layer KAN (right)
# 3.1 Encoder: Learned Interpretable Relationships
We want to learn a function $f _ { N N }$ mapping observed input features $\mathbf { x }$ to latent scientific parameters $\mathbf { p }$ . In prior work, this function is typically a fully-connected neural network [59, 56, 46, 55]. However, soil scientists want to understand how biogeochemical parameters (e.g. transfer rates between pools) depend on input features (e.g. temperature), but this is difficult to read out from a neural network.
Recently, a line of work has aimed to produce neural networks that are inherently interpretable while being expressive. Neural additive models (NAM) [1] model the output as the sum of single-variable functions of each input feature. Specifically, they learn a neural network $\phi _ { i } : \mathbb { R } \mathbb { R }$ (with one input and one output) for each feature $x _ { i }$ , and sum contributions from each feature into the output:
$$
N A M ( \mathbf { x } ) = b + \sum _ { i = 1 } ^ { D } \phi _ { i } ( x _ { i } ; \theta _ { i } )
$$
where $( b , \{ \theta _ { i } \} _ { i = 1 } ^ { D } )$ are learnable parameters trained via backpropagation. While NAM is quite expressive, it cannot model non-additive feature interactions, which are important in soil science [11].
To increase the expressivity of neural additive models, we can generate intermediate variables using neural additive model of the inputs, then further use a neural additive model on the intermediate variables to generate the output. Specifically, generate intermediate variables $\mathbf { z } = \{ z _ { 1 } , \dots , z _ { H } \}$ as
$$
z _ { j } = N A M _ { j } ( \mathbf { x } ) = b _ { j } + \sum _ { i = 1 } ^ { D } \phi _ { j , i } ( x _ { i } ) , \quad \forall j \in [ 1 , H ]
$$
Now define each output variable $a _ { p }$ as a neural additive model over the intermediate variables
$$
a _ { p } = N A M _ { p } ( \mathbf { z } ) = b _ { p } + \sum _ { j = 1 } ^ { H } \Phi _ { p , j } ( z _ { j } ) = b _ { p } ^ { \prime } + \sum _ { j = 1 } ^ { H } \Phi _ { p , j } \left( \sum _ { i = 1 } ^ { D } \phi _ { j , i } ( x _ { i } ) \right)
$$
where all the bias terms are collected into $b _ { p } ^ { \prime }$ . This is now the same form as a two-layer KolmogorovArnold network (KAN, see equation 2.1 in [32]). By the Kolmogorov-Arnold theorem, this multi-layer stack of neural additive models can approximate any multivariate continuous function [32].
Two examples of learned encoders are shown in Figure 2. On the left, a 1-layer KAN models the latent parameter $R b \approx w _ { 1 } \cdot s w _ { - } p o t + ( - w _ { 2 } ) \cdot d s w _ { - } p o t$ . One can immediately see how each feature influences $R b$ . On the right, a 2-layer KAN is needed to capture this nonlinear relationship, where an intermediate variable $R b ^ { \prime } \approx w _ { 1 } \cdot s w \_ p o t + ( - w _ { 2 } ) \cdot d s w \_ p o t$ , and then $R b \approx | R b ^ { \prime } |$ .
# 3.2 Sparsity and Smoothness Regularization
Kolmogorov-Arnold networks are still hard to interpret if each output depends on many inputs. Liu et al. [31] propose entropy regularization to sparsify the network. Specifically, we compute an importance score for each edge, as the mean absolute deviation of the output activations from the edge, weighted by their eventual contribution to the final output variables [31]. We denote this score for edge $( i j )$ ) in layer $l$ as $E _ { i , j } ^ { l }$ . We encourage the entropy of the edge importance distribution to be low (making the network choose a few important edges and push others towards zero). We also encourage the absolute deviation of each edge’s outputs to be low via an L1 penalty:
$$
e _ { i , j } ^ { l } = \frac { E _ { i , j } ^ { l } } { \sum _ { i , j } E _ { i , j } ^ { l } } \quad \mathrm { ( n o r m a l i z e ~ e d g e ~ i m p o r t a n c e ~ t o ~ s u m ~ t o ~ 1 ) }
$$
Figure 3: Left: The hard-sigmoid function constrains parameters to $[ p _ { m i n } , p _ { m a x } ]$ , without adding nonlinearity. Right: parameter violation loss pushes the hard-sigmoid input away from flat regions.
$$
\mathcal { L } _ { e n t r o p y } = - \sum _ { l } \sum _ { i , j } e _ { i , j } ^ { l } \log e _ { i , j } ^ { l } ; \quad \mathcal { L } _ { L 1 } = - \sum _ { l } \sum _ { i , j } | E _ { i , j } ^ { l } |
$$
Note that KANs parameterize the learnable edge activation functions $\Phi , \phi$ using B-splines instead of neural networks. B-splines represent a curve as a weighted sum of basis functions, each of which peaks at a different point on the $\mathbf { \boldsymbol { x } }$ -axis (see [12] for details). We can apply a second-order difference penalty on the spline coefficients – this encourages the coefficients to change in a linear way, making the function more linear [13]. This allows us to increase the number of basis functions (knots) and the expressivity of the function while maintaining smoothness and preventing overfitting. If $c _ { 1 } \ldots c _ { G }$ are the coefficients, the penalty is
$$
\mathcal { L } _ { s m o o t h } = \sum _ { i = 1 } ^ { G - 2 } ( ( c _ { i + 2 } - c _ { i + 1 } ) - ( c _ { i + 1 } - c _ { i } ) ) ^ { 2 }
$$
# 3.3 Linear Parameter Constraint Layer
For some latent scientific parameters $p _ { i }$ , there is a known prior range $[ p _ { i } ^ { m i n } , p _ { i } ^ { m a x } ]$ based on prior knowledge and physical plausibility. Xu et al. [59] applied a sigmoid function to the encoder output $a _ { i }$ to force the predicted parameter into the prior range: $p _ { i } = \bar { \sigma ( } a _ { i } )$ . However, this adds nonlinearity and harms interpretability. For example, suppose a parameter $p _ { i }$ is actually a linear function of input variable $x _ { j }$ $, p _ { i } = w x _ { j }$ . If the parameter was constrained using a sigmoid, the unconstrained encoder would have to learn $\overset { \cdot } { a } _ { i } = \sigma ^ { - 1 } ( w x _ { j } )$ , so that after the sigmoid function the parameter becomes $p _ { i } = \sigma ( \sigma ^ { - 1 } ( w x _ { j } ) ) = w x _ { j }$ . The additional inverse sigmoid makes the function less interpretable.
Instead, we use a piecewise linear hard-sigmoid function (figure 3 left) to constrain the parameters:
$$
\begin{array} { r } { p _ { i } = \mathrm { H a r d s i g m o i d } ( a _ { i } ) = \left\{ \begin{array} { l l } { p _ { i } ^ { m i n } } & { \mathrm { i f ~ } a _ { i } \leq - 3 , } \\ { p _ { i } ^ { m a x } } & { \mathrm { i f ~ } a _ { i } \geq + 3 , } \\ { \frac { 1 } { 6 \tau } ( p _ { i } ^ { m a x } - p _ { i } ^ { m i n } ) \cdot a _ { i } + \frac { 1 } { 2 } ( p _ { i } ^ { m a x } + p _ { i } ^ { m i n } ) } & { \mathrm { o t h e r w i s e } } \end{array} \right. } \end{array}
$$
where $\tau$ is a hyperparameter that influences how extreme the predicted parameters are at initialization.
While the gradient $\frac { \partial { { p } _ { i } } } { \partial { { a } _ { i } } }$ is zero when $a _ { i }$ is outside the range $[ - 3 , 3 ]$ , we can add another loss that places a penalty when $a _ { i }$ is in the flat area (figure 3 right).
$$
\mathcal { L } _ { p a r a m } = \sum _ { i = 1 } ^ { P } \operatorname* { m a x } ( 0 , - a _ { i } - 3 , a _ { i } - 3 )
$$
This provides a gradient that pushes $a _ { i }$ towards the linear range $[ - 3 , 3 ]$ when it is in the flat range.
For the ecosystem respiration model, we only know that the latent parameter is nonnegative, so we use ReLU to impose the constraint, and a flipped ReLU loss to push inputs out of the flat region.
$$
p _ { i } = \operatorname* { m a x } ( a _ { i } , 0 ) ; \quad \mathcal { L } _ { p a r a m } = \operatorname* { m a x } ( - a _ { i } , 0 )
$$
# 3.4 Differentiable Process-Based Decoder
ScIReN uses a process-based model that expresses output variables as a fixed, differentiable function of scientific parameters and input variables: $\hat { y } = g _ { P B M } ( \mathbf { p } , \mathbf { x } )$ . Two examples are described below.
Ecosystem respiration. Consider the model of ecosystem respiration in [46]. Based on scientific knowledge, we write the output variable $R _ { e c o }$ (ecosystem respiration) as a differentiable function of two latent parameters, base respiration $R _ { b }$ and temperature sensitivity $Q _ { 1 0 }$ . Specifically:
$$
R _ { e c o } = g _ { P B M } ( \mathbf { p } , \mathbf { x } ) = R _ { b } ( \mathbf { x } ) \cdot Q _ { 1 0 } ^ { \frac { t _ { a } - T _ { r e f } } { 1 0 } }
$$
where the latent parameters are $\mathbf { p } = \{ R _ { b } , Q _ { 1 0 } \}$ , the input features are $\mathbf { x } = \{ s w \_ p o t , d s w \_ p o t , t _ { a } \}$ , and $T _ { r e f } = 1 5$ . $R _ { b }$ has an unknown relationship with the input features (learned in the encoder), while $Q _ { 1 0 }$ is a learnable constant.
Soil carbon modeling. As a more complex process, we use the soil organic carbon module from Community Land Model 5 (CLM5) [34], which tracks the amount of soil organic carbon (SOC) in 140 pools in the soil (20 depths and 7 material types per depth). Denote the amount of carbon in pool $i$ as $Y _ { i }$ . The core of the model is a mass conservation equation for each pool, where the change in carbon equals inflow (from plants and other pools) minus outflow (to other pools or the atmosphere):
$$
\frac { d Y _ { i } ( t ) } { d t } = \mathrm { i n f l o w ~ t o ~ p o o l ~ } i - \mathrm { o u t f l o w ~ f r o m ~ p o o l ~ } i
$$
The equations for each pool can be combined into a single matrix equation. If we assume steady state ( dYdit(t) = 0), we can write Y as a function of biogeochemical parameters p and input features x:
$$
\hat { Y } ( \mathbf { p } , \mathbf { x } ) = \left[ A ( \mathbf { p } ) \mathrm { d i a g } ( \boldsymbol { \xi } ( \mathbf { x } ) \odot K ( \mathbf { p } ) ) + V ( \mathbf { p } , \mathbf { x } ) \right] ^ { - 1 } B ( \mathbf { p } , \mathbf { x } ) I ( \mathbf { x } )
$$
The details of this equation are explained in the Appendix A.4. For now, it is sufficient to note that the process-based model takes in 21 latent biogeochemical parameters $\mathbf { p }$ , uses the parameters to construct matrices describing carbon fluxes and decomposition, and finally predicts the amount of carbon in 140 pools (20 layers and 7 pools), $\hat { Y }$ . Each operation (including matrix inversion) is differentiable and can be implemented in PyTorch. Note that our labeled data only contains aggregate SOC amounts at specific depths (which may not match the 20 fixed layers). Thus, we sum up the SOC pools at each layer, and linearly interpolate to predict SOC at the observed depths.
# 3.5 Final Loss
The final loss contains a smooth L1 loss between the predicted and ground-truth output variables, as well as the parameter regularization loss and KAN regularization losses:
$$
\mathcal { L } = \sum _ { i = 1 } ^ { N } \left[ S m o o t h L 1 ( \hat { Y _ { i } } , Y _ { i } ) \right] + \lambda _ { p a r a m } \mathcal { L } _ { p a r a m } + \lambda _ { L 1 } \mathcal { L } _ { L 1 } + \lambda _ { e n t r o p y } \mathcal { L } _ { e n t r o p y } + \lambda _ { s m o o t h } \mathcal { L } _ { s m o o t h } ,
$$
Since the entire network is differentiable, we can backpropagate the loss through the process-based model to optimize the latent parameters and the learnable weights of the neural network. The loss weights $\lambda$ are tuned on a validation set for each domain; they are relatively intuitive to tune since we can visualize whether the KAN is too sparse/dense and whether the functions are too jagged/smooth.
# 4 Experiments
We test our approach on two domains: the model of ecosystem respiration in [46], and the CLM5 soil carbon cycle model on sites in the contiguous United States [59].
# 4.1 Evaluation Metrics and Baselines
For each dataset, we evaluate the test-set accuracy $( R ^ { 2 } )$ of various methods in predicting observed variables. For experiments using synthetic labels, we also have ground-truth latent parameter values and functional relationships, so we can evaluate how accurately each method recovers these parameters and relationships. To evaluate functional relationship quality, for both ground-truth (synthetic) relationships and our learned models, we first compute the fraction of variance in the output that is explained by each input feature. For 1-layer KAN, we can simply compute the variance of each edge’s post-activation outputs, and divide by the total variance in the output. For other models, this is non-trivial; we use Partial Dependence Variance [17] to estimate how much variance in the output is explained by each input feature. Once we have feature importance distributions, we use KL divergence to measure how far the model’s learned feature importances are from the ground-truth.
Table 1: Ecosystem respiration, linear $R _ { b }$ . Mean and standard deviation across 5 seeds.
Table 2: Ecosystem respiration, nonlinear $R _ { b }$ . Mean and standard deviation across 5 seeds.
For baselines, we compare against a pure neural network that only predicts observed variables (and cannot infer latent variables), and a blackbox hybrid model [46, 59] where latent parameters are predicted by a neural network. We run ScIReN and Blackbox-Hybrid using a nonlinear constraint (sigmoid or softplus) and a linear constraint (hard-sigmoid or ReLU).
# 4.2 Ecosystem Respiration
For ecosystem respiration, we used the same dataset and splits as [46], except we removed the $20 \%$ highest-temperature examples from the train set, forcing the model to extrapolate to higher temperatures than seen during training. We created two sets of latent $R _ { b }$ (base respiration) values. First, we model $R _ { b }$ as a linear function of 2 features sw_pot, dsw_pot (following [46]):
$$
R _ { b } = 0 . 0 0 7 5 \cdot s w \_ p o t - 0 . 0 0 3 7 5 \cdot d s w \_ p o t + 1 . 0 3 5 0 6 8 5 8
$$
Second, to create a setting where 2-layer KAN is needed, we add an absolute value.
$$
R _ { b } ^ { \prime } = 0 . 0 0 7 5 \cdot s w _ { - } p o t - 0 . 0 0 3 7 5 \cdot d s w _ { - } p o t ; \quad R _ { b } = \left| \frac { R _ { b } ^ { \prime } - m e a n ( R _ { b } ^ { \prime } ) } { s t d e v ( R _ { b } ^ { \prime } ) } \right| + 0 . 1
$$
We then generated the observed variable $R _ { e c o }$ according to the process-based model with multiplicative noise, as in [7]:
$$
R _ { e c o } = R _ { b } \cdot Q _ { 1 0 } ^ { \frac { t _ { a } - T _ { r e f } } { 1 0 } } \cdot ( 1 + \epsilon ) , \quad \epsilon \sim N ( 0 , 0 . 1 ) , \mathrm { t r u n c a t e d \ t o } \ [ - 0 . 9 5 , 0 . 9 5 ]
$$
Table 1 shows results for the first setting (linear $R _ { b }$ ). For predicting the observed variable $R _ { e c o }$ , Blackbox-Hybrid and ScIReN outperform pure-NN as the process-based model provides prior knowledge that helps the model extrapolate out-of-distribution. For inferring the latent variable $R _ { b }$ and functional relationships, ScIReN with linear constraint does best; it correctly learns that $R _ { b }$ only depends on ${ _ { s w } } _ { - } p o t$ and ${ d s w \_ p o t }$ , not $t _ { a }$ (see Figure 2 left). This is difficult because the irrelevant feature $t _ { a }$ (air temperature) is highly correlated with feature sw_pot. Also, if the model learns the wrong $Q _ { 1 0 }$ value, it can make $R _ { b }$ depend on $t _ { a }$ to compensate. ScIReN’s entropy loss pushes it to eliminate as many variables as possible, and the smoothness loss (with the linear constraint) makes the relationship as linear as possible. Other methods learn complex relationships that perform worse.
Table 2 shows results for nonlinear $R _ { b }$ , where a 2-layer KAN is needed to model the complex relationship (1-layer KAN is insufficient). ScIReN predicts the observed variable, latent variable, and functional relationships almost perfectly, significantly outperforming pure-NN and Blackbox-Hybrid. Qualitatively, figure 2 (right) shows that ScIReN learned the true relationship.
Table 3: Soil carbon cycle (synthetic parameters). Mean and standard deviation across 5 splits/seeds.
Figure 4: Functional relationships learned by Blackbox-Hybrid (left) and ScIReN (center) vs. truth (right), on synthetic labels. ScIReN recovers the true relationships much more accurately.
Table 4: Soil carbon cycle (real labels). Mean and standard deviation across 5 splits/seeds.
# 4.3 Soil carbon cycle
For soil carbon, similar to [57] we split the US into $2 \times 2$ degree blocks and randomly assign the blocks to five folds. We average across five data splits – each split uses one fold for testing, one fold for validation, and the other folds for training. Each split also uses its own seed for initialization.
First, we synthetically generate functional relationships between the 10 input features and 4 most sensitive biogeochemical parameters from [59]. Of the $1 0 \times 4 = 4 0$ possible relationships, we select $2 0 \%$ and randomly assign each to (linear, quadratic, log, exp, abs) with random affine shifts. We set the other parameters to default values as they are poorly constrained by data [59]; this equifinality is an inherent limitation of process-based models, which we mitigate by only predicting the 4 most sensitive parameters. We then use the CLM5 process-based model to generate synthetic SOC labels from these parameters. Table 3 shows how various methods perform in recovering these functional relationships. ScIReN (1-layer KAN) recovers the ground-truth relationships (see Figure 4) and observed/latent variables almost perfectly, while Blackbox-Hybrid is not incentivized to produce sparse relationships and mixes correlated features in. Finally, we train all methods on real carbon labels in Table 4. ScIReN’s accuracy in predicting SOC amounts is comparable to blackbox hybrid models and pure neural networks, which do not reveal functional relationships. This indicates that we can obtain full interpretability without significantly sacrificing predictive accuracy. Note that a 1-layer KAN is enough to achieve good accuracy, making the encoder even easier to interpret. | Neural networks are a powerful tool for learning patterns from data. However,
they do not respect known scientific laws, nor can they reveal novel scientific
insights due to their black-box nature. In contrast, scientific reasoning
distills biological or physical principles from observations and controlled
experiments, and quantitatively interprets them with process-based models made
of mathematical equations. Yet, process-based models rely on numerous free
parameters that must be set in an ad-hoc manner, and thus often fit
observations poorly in cross-scale predictions. While prior work has embedded
process-based models in conventional neural networks, discovering interpretable
relationships between parameters in process-based models and input features is
still a grand challenge for scientific discovery. We thus propose
Scientifically-Interpretable Reasoning Network (ScIReN), a fully-transparent
framework that combines interpretable neural and process-based reasoning. An
interpretable encoder predicts scientifically-meaningful latent parameters,
which are then passed through a differentiable process-based decoder to predict
labeled output variables. ScIReN also uses a novel hard-sigmoid constraint
layer to restrict latent parameters to meaningful ranges defined by scientific
prior knowledge, further enhancing its interpretability. While the embedded
process-based model enforces established scientific knowledge, the encoder
reveals new scientific mechanisms and relationships hidden in conventional
black-box models. We apply ScIReN on two tasks: simulating the flow of organic
carbon through soils, and modeling ecosystem respiration from plants. In both
tasks, ScIReN outperforms black-box networks in predictive accuracy while
providing substantial scientific interpretability -- it can infer latent
scientific mechanisms and their relationships with input features. | [
"cs.LG",
"cs.AI"
] |
# 1 Introduction
Recent advances in large language models (LLMs) have made remarkable progress in complex reasoning and problem solving across domains such as mathematics [6, 15, 26] and programming [4, 3, 21]. Yet despite these impressive capabilities, conventional LLM reasoning approaches remain fundamentally limited: they standalone each problem instance, generating solutions from scratch without accumulating or transferring insights from rich, diverse experiential knowledge.
This isolated reasoning paradigm marks a significant departure from how expert human problem solvers operate. Expert problem solvers—such as an Olympiad or programming contest teams—rarely approach problems in a vacuum. Instead, they draw upon a rich tapestry of cumulative experiences: absorbing mentorship from coaches, developing intuition from past problems, leveraging knowledge of tool usage and library functionality (e.g., calculator), adapting strategies based on peers’ expertise and experiences, gaining insights through iterative trial and error, and learning from related problems even during competition. This holistic experience empowers them to tackle new challenges not from scratch, but by dynamically applying accumulated knowledge and adaptive strategies.
Figure 1: Results Summary on AIME $^ { , } 2 4$ (16 runs), AIME ’25 and LiveCodeBench (32 runs). Our framework $\mathbb { X }$ olver, built on o3-mini-medium and o3-mini-high backbones (denoted $\mathrm { ( m ) }$ and (h)), achieves up to $3 0 . 9 \%$ gain over the baseline and often outperforms leading models on both tasks.
While numerous prior studies have enhanced LLM reasoning and problem solving through various forms of experiential knowledge augmentation, they have predominantly operated within discrete modalities—retrieving similar problems or relevant contexts [46, 25, 14], leveraging external tools [33, 32], or facilitating multi-agent collaboration [18, 16, 62]. Despite their individual strengths, these approaches address distinct facets of experiential knowledge independently, preventing LLMs from accumulating and synthesizing a comprehensive repertoire of learning signals across diverse experiential dimensions, thereby limiting the development of the rich, interconnected knowledge structures that characterize human expertise.
In this paper, we introduce $\mathbb { X }$ olver, a unified, memory-augmented, multi-agent inference framework that emulates the holistic experience-driven, collaborative reasoning of expert teams. $\mathbb { X }$ olver dynamically orchestrates a roster of specialized agents—such as mathematicians, programmers, verifiers—that iteratively tackle complex problems. Unlike conventional LLM pipelines, $\mathbb { X }$ olver seamlessly integrates planning, episodic retrieval—both from external or self-parametric long-term memory—an evolving intermediate shared memory, tool invocation, multi-agent collaboration, agentdriven evaluation, and iterative self-refinement into a single adaptive architecture.
Each agent’s reasoning begins with exemplars drawn from episodic memory. From the second iteration onward, agents rely exclusively on an evolving shared memory that records the highestquality reasoning paths, solutions, and evaluation feedback generated so far—thereby accumulating symbolic experience over time. This shared repository guides agents to build on successful strategies, correct mistakes, and improve solution quality. When needed, agents invoke external tools (e.g., code execution), and a dedicated judge agent reviews all outputs—selecting top responses, issuing feedback, and enriching the intermediate shared memory with curated traces and collective evaluations for future rounds. Iterations continue until outputs converge or a preset limit is reached, followed by a final verification or external debugging phase to ensure correctness. Additionally, by updating its episodic store with each newly solved problem and its reasoning trace, Xolver can continually expand its knowledge base. Through this closed loop of collaborative agents, memory-guided refinement, and tool guided precision, $\mathbb { X }$ olver features a more holistic experience learning and transcends static LLM inference, delivering adaptive, expert-level reasoning over time. Figure 2 illustrates the workflow.
We conduct large-scale experiments across a range of math and programming benchmarks—including GSM8K, Math-500, AIME (2024 and 2025), and LiveCodeBench (v5)—using both proprietary (o3- mini-medium) and open-weight (QWQ-32B) backbone models. $\mathbb { X }$ olver consistently outperforms specialized reasoning systems such as OctoTools [33], CheatSheet [55], and Search-o1 [28]. Remarkably, even when instantiated with lightweight models, $\mathbb { X }$ olver often surpasses significantly larger state-of-the-art LLMs, including Qwen3-235B [56], Gemini 2.5 Pro [7], o1, o3, and o4-minihigh [41]. As in Figure 1, $\mathbb { X }$ olver $\mathrm { ( m ) }$ achieves $9 1 . 6 \%$ average accuracy on the AIME ’24 and $^ { , } 2 5$ benchmarks—an 18.5-point gain over o3-mini-medium—while $\mathbb { X }$ olver (h) reaches $9 4 . 1 \%$ , outper
0 Planner Agent [中 [ T0i , R0i ] LLM Judge Agent 黑 由 [ T1i , R1i ] 田 Final Output [ y, TF, RF ] Problem Tool Execution Specification [ T2i , R2i ] ↑ 1. [ T2i , R2i , S2i , a2] 1 Dynamic Thought, Response [ T1i , R1i , S1i , a1] 国 Agents Prior Generation 3. [ T0i , R0i , S0i , a0 Self/External
Episodic Retrieval Iterative Refinement Intermediate Shared memory VDerbifuiegrg/er
Retrieval Memory Update on Completion Prior Experinece Synthesize, Accumulate and Refine Experience Experience to Next
forming o3-mini-high by 7.2 points. On LiveCodeBench, $\mathbb { X }$ olver $\mathrm { ( m ) }$ improves upon its base by 21 points $6 6 . 3 \%$ to $8 7 . 3 \%$ ), with $\mathbb { X }$ olver (h) achieving $9 1 . 6 \%$ , a 22.1-point lift over o3-mini-high.
Our analysis reveals how $\mathbb { X }$ olver’s experiential components contribute to its performance. Accuracy improves consistently with more agents and iterations, reflecting the benefits of experience accumulation, though at increased cost. While external retrieval remains powerful, we find that self-retrieval—drawing from the model’s own parametric memory—can serve as an alternative with some performance drop. For tasks involving symbolic reasoning and complex arithmetic, multi-agent, multi-iterative refinement is more beneficial than tool use (e.g., Python execution). Our experiments confirm that even without updating episodic memory during inference, $\mathbb { X }$ olver retains substantial performance gains, emphasizing the strength of its intermediate memory and iterative refinement. Together, these findings highlight $\mathbb { X }$ olver ’s ability to accumulate, refine, and reuse symbolic experience through collaborative, memory-guided reasoning.
# 2 The Xolver Framework
Given a problem query $q \in \mathcal { Q }$ and a pretrained language model $\operatorname { L L M } _ { \theta } ( \cdot )$ , a conventional approach generates a solution via single-step inference: $y \sim \mathrm { L L M } _ { \theta } ( q )$ . In contrast, $\mathbb { X }$ olver executes a dynamic, multi-agent reasoning process that iteratively accumulates and leverages symbolic experience to solve complex problems more effectively.
To support structured collaborative reasoning, $\mathbb { X }$ olver maintains two complementary forms of memory: an episodic memory $\mathcal { D } _ { E }$ , which stores a library of past problems, solutions, and reasoning traces; and an intermediate dynamic shared memory $\mathcal { D } _ { S }$ , which evolves during inference to retain high-quality agent trajectories—comprising reasoning thoughts, responses, agent metadata, and feedback. In $\mathbb { X }$ olver, a multi-agent team $\mathcal { A }$ is orchestrated adaptively by a planner agent $\mathcal { P }$ , which assigns roles and configures memory access. During inference, $\mathcal { A }$ agents leverage an external toolset $\tau$ (e.g., Python interpreter) to support accurate computation. Finally, a verifier or external debugger $\nu$ is invoked to extract and format the final answer, and to validate correctness for executable outputs.
Below, we first describe the $\mathbb { X }$ olver agents and tools in Section 2.1, followed by the memory components in Section 2.2, and the inference cycle in Section 2.3.
# 2.1 Agents and Tools
Planner Agent $\mathcal { P }$ . The planner agent $\mathcal { P }$ is responsible for initiating, planning, and orchestrating the $\mathbb { X }$ olver multi-agent architecture. Given the problem $q$ and the number of agents $m$ , it constructs a team $\mathcal { A }$ of $m$ dynamic agents, each assigned a distinct expert role (e.g., algebra solver, mathematician, theorist, programmer, algorithm designer) tailored to the demands of $q$ . To ensure sufficient task coverage and role diversity, $\mathcal { P }$ first prompts the underlying LLM to over-generate $M > m$ candidate agents, from which it then selects the most effective subset $\mathcal { A } \subset \{ a _ { 1 } , \dotsc , a _ { M } \}$ such that $| { \mathcal { A } } | = m$ . A summary of the most frequently generated and selected roles is provided in Appendix D.4.
Dynamic Reasoning Agents $\mathcal { A }$ . The set $\mathcal { A } = \{ a ^ { 1 } , a ^ { 2 } , . . . , a ^ { m } \}$ represents a team of dynamic reasoning agents constructed by the planner agent $\mathcal { P }$ . Each agent $a ^ { j } \in { \mathcal { A } }$ is assigned a distinct expert role (e.g., algebra solver, programmer, counter-example generator) tailored to the task query $q$ Agents are instantiated using a standardized prompting template (see Appendix A) that incorporates the task description, assigned role, retrieved examples, prior reasoning attempts, and shared memory feedback—enabling iterative self-correction and role specialization.
At each iteration $i$ , agent $a ^ { j }$ receives a context $\mathcal { C } _ { i } ^ { j }$ and generates a structured reasoning trace $T _ { i } ^ { j }$ and a response $R _ { i } ^ { j }$ . For the first iteration $( i = 0 )$ ), the context is initialized using the task query and relevant retrieved exemplars:
$$
\mathcal { A } \mathcal { C } _ { 0 } ^ { j } = \{ q \} \cup \mathcal { R } ( \mathcal { D } _ { E } ) .
$$
(BUILDCONTEXT)
For subsequent iterations $( i \geq 1 )$ ), the context evolves by incorporating its prior generation (history) and the shared memory:
$$
\begin{array} { r } { \mathcal { A } \mathcal { C } _ { i } ^ { j } = \{ q \} \cup \{ T _ { i - 1 } ^ { j } , R _ { i - 1 } ^ { j } \} \cup \mathcal { D } _ { S } . } \end{array}
$$
(BUILDCONTEXT)
Judge Agent $\mathcal { I }$ . The judge agent $\mathcal { I }$ evaluates intermediate outputs from each agent and returns structured feedback to guide refinement and memory updates. Given a query $q$ , a reasoning trace $T$ , and a response $R$ , it produces a feedback tuple $S = ( T _ { S } , s )$ , where $T _ { S }$ is a natural language explanation (e.g., critique, justification, correction), and $s$ is a scalar quality score. The interpretation of $s$ is task-dependent: for math problems, $s \in [ 0 , 1 ]$ reflects an LLM-estimated correctness probability; for code tasks, $s \in \{ 0 , 1 , \ldots , N _ { \mathrm { t e s t } } \}$ , where $N _ { \mathrm { t e s t } }$ denotes the total number of test cases including problem-provided samples and 10 synthesized test cases generated using AceCode-RM-32B [67]. To avoid compiler interaction latency and maintain symbolic traceability, test case outcomes are determined by simulating execution through LLM prompting within the judge agent $\mathcal { I }$ , following the CodeSim protocol [18]. This structured feedback enables agents to identify failures, receive localized corrections, and improve reasoning over iterations.
Verifier Agent $\nu$ . Due to linguistic complexity and varying answer specification formats, a response may be incorrect even when the underlying reasoning or open-ended response is valid. For instance, answer formats may require multiple-choice letters (e.g., “(A)” or “Choice B”), boxed numerical values (e.g., $ { \left. 4 2 \right. } ^ { } { \mathbf { \cdot } } )$ , or final answers in specific units (e.g., $\cdot 5 \mathrm { k m } ^ { \prime \prime }$ or $" 1 2 \% " \$ ). An additional round of answer extraction and formatting helps reduce such mispredictions [44]. This challenge is even more pronounced in code generation tasks, where predicted code may fail to execute or not pass all test cases. To mitigate this, $\mathbb { X }$ olver includes a Verifier Agent $\nu$ , which operates differently based on the output type. For math and QA problems, $\nu$ extracts the final reasoning $T _ { F }$ , response $R _ { F }$ , and answer $y$ from the response associated with the top-ranked entry BESTRESPONSE in $\mathcal { D } _ { S }$ , ensuring adherence to the expected output format. For executable code, $\mathbb { X }$ olver invokes an external debugger (LDB [70]), where $\nu$ interacts with a Python runtime to capture execution feedback and iteratively fix runtime errors.
Tools $\tau$ . Integrating natural language reasoning with tools like Python execution is a proven way to boost performance on complex reasoning tasks [37, 57]. We observe that even advanced reasoning models often make mistakes in intermediate steps, particularly when computations become non-trivial. To address this, each dynamic agent $a ^ { j }$ is explicitly instructed to use Python execution during reasoning when needed. While $\mathbb { X }$ olver currently limits $\tau$ to Python, our prompting strategy is tool-agnostic, allowing an interface for future extensions to richer toolsets [32, 33].
All agents are built using the underlying LLM. All prompts are 0-shot and provided in Appendix A
# 2.2 Memory Components
Episodic Memory $\mathcal { D } _ { E }$ . $\mathbb { X }$ olver maintains two forms of episodic (long-term) memory: (1) an external memory corpus $\mathcal { D } _ { E } ^ { \mathrm { e x t } } = \{ ( q ^ { \prime } , T ^ { \prime } , R ^ { \prime } ) \}$ , which consists of past problem instances $q ^ { \prime }$ , their corresponding reasoning traces $T ^ { \prime }$ (optional), and solution responses $R ^ { \prime }$ ; and (2) the internal parametric memory encoded in the weights of the agent-specific language model $\mathrm { L L M } _ { j }$ .
We define a general retrieval operator $\mathcal { R } ( \mathcal { D } _ { E } )$ that returns a set of $K$ examples relevant to the query $q$ . When $\mathcal { D } _ { E } ^ { \mathrm { e x t } }$ is available, retrieval is conducted using similarity-based search (e.g., BM25):
$$
\begin{array} { r } { \mathcal { R } ( \mathcal { D } _ { E } ) = \{ ( q _ { k } ^ { \prime } , T _ { k } ^ { \prime } , R _ { k } ^ { \prime } ) \} _ { k = 1 } ^ { K } \mathrm { R e t r i e v e } _ { j } ( q , \mathcal { D } _ { E } ^ { \mathrm { e x t } } ) . } \end{array}
$$
Otherwise, $\mathbb { X }$ olver falls back to internal self-retrieval by sampling from the agent model itself:
$$
\begin{array} { r } { \mathcal { R } ( \mathcal { D } _ { E } ) = \{ ( q _ { k } ^ { \prime } , T _ { k } ^ { \prime } , R _ { k } ^ { \prime } ) \} _ { k = 1 } ^ { K } \sim \mathrm { L L M } _ { j } ( q ) . } \end{array}
$$
In the case of an external episodic memory, $\mathcal { D } _ { E }$ can also be updated with UPDATEEPISODICMEMORY by adding the top-ranked reasoning and response from $\mathcal { D } _ { S }$ , paired with the problem $q$ , into the external corpus $\mathcal { D } _ { E } ^ { \mathrm { e x t } }$ . That is, ${ \mathcal { D } } _ { E } ^ { \mathrm { e x t } } { \mathcal { D } } _ { E } ^ { \mathrm { e x t } } \cup ( q , T , R )$ , where $( T , R , S , a )$ is the top-ranked entry in $\mathcal { D } _ { S }$ .
Intermediate Shared Memory $\mathcal { D } _ { S }$ . The shared memory $\mathcal { D } _ { S }$ maintains a fixed-size set of highquality intermediate reasoning, responses, and metadata generated by the dynamic agents during inference on the current query $q$ . For simplicity and to preserve the dynamic nature of the framework, we constrain $| \mathcal { D } _ { S } | = m$ , where $m$ is the number of dynamic agents in $\mathcal { A }$ . Initially, $\mathcal { D } _ { S } \emptyset$ . At each iteration $i$ , each agent $a _ { j } \in { \mathcal { A } }$ produces a reasoning trace $T _ { i } ^ { j }$ , response $R _ { i } ^ { j }$ , and receives structured feedback $S _ { i } ^ { j } = ( T _ { S } ^ { ( i , j ) } , \bar { s } _ { i , j } )$ from the judge agent $\mathcal { I }$ , where $T _ { S } ^ { ( i , j ) }$ is a natural language explanation and $s _ { i , j }$ is a scalar score reflecting the quality of the tuple $( T _ { i } ^ { j } , R _ { i } ^ { j } )$ . After collecting the new outputs
$$
\tau _ { i } ^ { j } = ( T _ { i } ^ { j } , R _ { i } ^ { j } , S _ { i } ^ { j } , a ^ { j } ) , \quad j = 1 , \dots , m ,
$$
(RUNAGENTS)
we form the candidate pool $\mathcal { M } = \mathcal { D } _ { S } \cup \{ \tau _ { i } ^ { 1 } , . . . , \tau _ { i } ^ { m } \}$ . We then update the fixed-size shared memory by keeping only the top- $\mathbf { \nabla } \cdot m$ tuples by score
$$
\mathcal { D } _ { S } \gets \mathrm { T o p K } \big ( \mathcal { M } , m ; \mathrm { k e y } ( e ) = s ( e ) \big ) , ,
$$
(UPDATESHAREDMEMORY)
where $s ( e )$ extracts the scalar score from $e = ( T , R , ( T _ { S } , s ) , a )$ .
This replacement mechanism ensures that $\mathcal { D } _ { S }$ always contains exactly $m$ entries with the highest observed scores across all iterations. By maintaining only the strongest reasoning-response-feedback tuples, the shared memory facilitates knowledge transfer between agents and across iterations, enabling collaborative improvement through exposure to diverse high-quality solutions.
# 2.3 Inference Protocol
Algorithm 1 summarizes the $\mathbb { X }$ olver inference protocol, which operates in three structured stages. Stage-1, which emulates initialization with prior experience, involves the planner constructing a team of agents $\mathcal { A }$ (lines 2–3). Stage-2, embodying symbolic experience accumulation and refinement, iterates for $\boldsymbol { \mathcal { T } }$ rounds (lines 4–10). In each round, all agents receive access to $\mathcal { D } _ { S }$ and $\mathcal { D } _ { E }$ , build their contexts, and generate structured trajectories and responses $\mathcal { D } _ { E }$ is only used for context construction at the first iteration). These are evaluated by the judge agent $\mathcal { I }$ , and $\mathcal { D } _ { S }$ is updated with the resulting feedback tuples (line 7). Upon convergence or after $\boldsymbol { \mathcal { T } }$ rounds, Stage
# Algorithm 1 Xolver Inference Protocol
1: Input: Query $q$ , Tools $\tau$ , Episodic Memory $\mathcal { D } _ { E }$
parameters $m , k , I$
2: Init: $\mathcal { D } _ { S } \emptyset$
3: $\mathcal { A } \gets \mathrm { P L A N N E R } ( q , m )$
4: for $i = 0$ to $\boldsymbol { \mathcal { T } }$ do
5: $\{ \mathcal { C } _ { i } \} _ { c = 1 } ^ { m } \gets \mathrm { \mathbf { B U I L D C O N T E X T } } ( \mathcal { A } , \mathcal { D } _ { E } , \mathcal { D } _ { S } , q , i )$
6: $\{ \tau _ { i } ^ { \jmath } \} _ { j = 1 } ^ { m } \gets \mathrm { R U N A G E N T S } ( A , \mathcal { C } _ { i } , \mathcal { T } , \mathcal { I } )$
7: $\mathcal { D } _ { S } \gets$ UPDATESHAREDMEMORY $( \mathcal { D } _ { S } , \{ \tau _ { i } ^ { j } \} )$
8: if CONVERGED $( \mathcal { D } _ { S } )$ then
9: break
10: end if
11: end for
12: $y \mathcal { V } ( \mathtt { B E S T R E S P O N S E } ( \mathcal { D } _ { S } ) )$
13: UPDATEEPISODICMEMORY $( \mathcal { D } _ { E } , q , \mathcal { D } _ { S } )$
14: Return y
# 3 Experiments
# 3.1 Evaluation Setup
Evaluation Benchmarks We evaluate $\mathbb { X }$ olver across five diverse and challenging benchmarks covering both mathematical and coding reasoning. For math, we use GSM8K [6], Math-500[15], and the AIME 2024 [34] and 2025 [35], comprising high-school level competition problems requiring multi-step symbolic reasoning. For coding, we use LiveCodeBench (v5) [20], a dynamic benchmark that ensures no data leakage by periodically releasing new problems. These benchmarks span arithmetic, algebra, number theory, geometry, combinatorics, and algorithmic problem solving.
Baselines and Metrics We compare Xolver against directly using leading reasoning models– (a) proprietary models: Gemini 2.5 (Pro and Flash Think) [7], Grok-3 Beta Think and Grok-3 Mini (Beta) Think [63], Claude 3.7 Sonnet Think [2], o1 [41], o3-mini, o3, and o4-mini [42]; (b) openweight LLMs, e.g., Qwen3-235B [48], QWQ-32B [49], and DeepSeek-R1 [8]; (c) math- and codespecialized models, e.g., AlphaOne [68], OpenMathReason [37], rStar-Math [12], rStar-Coder [30], OpenCodeReason [1], and Kimi-K1-1.6 [22]. We also compare with (d) agents or frameworks: SelfReflexion [52], agentic search based framework Search-o1 [28], specialized tool based framework OctoTools [33] which excels general purpose agent platforms outperforming AutoGen or LangChain, cross-problem baseline framework CheatSheet [55], and multi-agent code generation framework CodeSim [18], which leverage refinement, retrieval or online search, fine-grained tool augmentation in addition to online search, dynamic memory updates after solving new problems, and multi-agent reasoning techniques respectively. For agent-based baselines (d), we reproduce results using the same backbone LLMs as $\mathbb { X }$ olver for fair comparison; for model-based baselines (a–c), we report official results from their technical reports or corresponding benchmark leaderboards. As evaluation metric, we use accuracy using GPT-4o [40] for math problems, and $p a s s @ l$ for code tasks.
Inference Details We use both open-weight QWQ-32B [48] and proprietary o3-mini (medium and high) [42] as the backbone. To mitigate performance variance inherent in single-run evaluations, we report the average accuracy and $p a s s @ l$ metric, calculated by averaging 32 inference runs for competitive benchmarks LIVECODEBENCH and AIME $^ { , } 2 5$ , and 16 runs for AIME ’24, ensuring standard deviation within $\sim 1 \%$ (Appendix D.1). For simpler tasks GSM8K and MATH-500, we follow DeepSeek-v3 [29], using a single greedy-decoded generation. By default, we set temperature to 0.2, number of agents $m = 3$ , and max iterations $\mathcal { T } = 2$ . $\mathbb { X }$ olver iteration terminates either when the maximum number of iterations $\boldsymbol { \mathcal { T } }$ is reached, or when all entries in the shared memory $\mathcal { D } _ { s }$ converge—i.e., they achieve perfect scores of 1.0 (correct) for math tasks, or pass all test cases (both sample and synthesized) for code tasks. As the external retrieval corpus $\mathcal { D } _ { E } ^ { \mathrm { e x t } }$ in coding task, we collect a 9-million-token dataset of algorithmic code problems and their $^ { C + + }$ solutions with explanations from $\mathrm { \ G i t H u b } ^ { 2 }$ (details in Appendix C). For math, we use the OPENMATHREASON dataset [37] as $\mathcal { D } _ { E } ^ { \mathrm { e x t } }$ . We evaluate two variants of $\mathbb { X }$ olver: (i) Xolver with in-competition cross-problem experience $\mathbb { X }$ olver $\mathrm { ( + ) } { \dot { } }$ ), which dynamically updates the episodic memory after solving each problem to utilize accumulated knowledge across problems; and (ii) $\mathbb { X }$ olver $( - )$ , which keeps the episodic memory static, focusing solely on problem-specific experience. By default, we refer to $\mathbb { X }$ olver $( + )$ as our method if not specified otherwise.
# 3.2 Main Results
Table 1 evaluates $\mathbb { X }$ olver across diverse mathematical and coding reasoning benchmarks, highlighting its effectiveness compared to state-of-the-art LLMs, specialized models, and other frameworks.
Strong Gains Across Benchmarks Overall, $\mathbb { X }$ olver consistently delivers significant improvements over the backbone LLMs’ standard LongCoT prompting. Both the problem-specific $\mathbb { X }$ olver (–) and the cross-problem $\mathbb { X }$ olver $( + )$ variants outperform their respective backbone LLM (LongCoT) baselines across all datasets. For example, with the o3-mini-medium backbone, $\mathbb { X }$ olver $( + )$ improves from 75.8 to 93.8 on AIME’24, and from 66.3 to 79.6 on LiveCodeBench, while the QWQ-32B backbone sees gains from 78.1 to 89.9 on AIME’24 and from 63.4 to 76.2 on LiveCodeBench.
Surpassing Prior Agents Compared to previous frameworks such as Search-o1, OctoTools, and CheatSheet, $\mathbb { X }$ olver demonstrates consistent and significant gains. With o3-mini-medium, $\mathbb { X }$ olver $( + )$ improves over the best baseline by $+ 1 2 . 7$ points on AIME’25 and $+ 1 3 . 5$ points on LiveCodeBench, highlighting its superior reasoning capabilities by integrating diverse forms of experience.
In Comparison to Leading LLMs Despite using weaker backbones, $\mathbb { X }$ olver, specifically $( + )$ variant, matches or surpasses proprietary frontier LLMs like o3 and o4-mini-high on key benchmarks. With o3-mini-medium, $\mathbb { X }$ olver ( $^ +$ outperforms o4-mini-high on AIME’24 (93.8 vs. 93.4) and substantially exceeds it on LiveCodeBench (87.3 vs. 69.5), demonstrating that structured reasoning and dynamic memory can rival even the strongest closed-source models.
Table 1: Comparison of Xolver against SoTA reasoning models, specialized models, and other reasoning agents across mathematical and coding tasks. Best results are boldfaced and second-best results are underlined. T: Think models, LongCoT\*: standard prompting for reasoning models. "-" denotes either $\mathrm { { \ n / a } }$ (e.g., only math/code specialized models) or results not reported.
Backbone Agnostic Improvements from $\mathbb { X }$ olver are consistent across different backbone LLMs. Both o3-mini-medium and QWQ-32B benefit substantially from the framework, demonstrating its model-agnostic design. For example, on GSM8K, $\mathbb { X }$ olver $( + )$ achieves 97.1 (o3-mini-medium) and 98.0 (QWQ-32B), both surpassing baseline variants by significant margins.
Effectiveness of Dynamic Episodic Memory While both variants excel, the cross-problem variant $\mathbb { X }$ olver $( + )$ consistently outperforms the problem-specific version $\mathbb { X }$ olver (-) in all benchmarks. On average, episodic memory integration yields a $+ 3 . 5$ point improvement across both backbones and datasets where the largest gain is $+ 7 . 7$ points with o3-mini-medium on coding (LiveCodeBench).
Scales with Backbone LLM’s Strength $\mathbb { X }$ olver’s performance scales consistently with the strength of its backbone LLM. With o3-mini-high, it sets new state-of-the-art results across all benchmarks (98.1 on GSM8K, 94.4 on AIME’24, 93.7 on AIME’25, 99.8 on Math-500, and 91.6 on LiveCodeBench).
# 4 Ablation and Analyses
Ablations: Quantifying Component Impact In Figure 3, we present an ablation study quantifying the contribution of individual components in $\mathbb { X }$ olver to overall performance, measured by the average performance drop on math reasoning (Math Avg) and programming (LiveCodeBench) tasks.
Each component plays a necessary role, with the most significant degradation observed when removing Multi-iteration and Multi-Agent followed by Judge Agent, highlighting their central importance in complex reasoning and code synthesis. In contrast, removing components like Verifier/Debugger and Tool leads to comparatively smaller drops, suggesting a more auxiliary role in the overall system. Likewise self-retrieval can also work in-place of external retrieval with some drop in accuracy.
Impact of Agent Count and Iterations, and Emerging Benefits of Collaboration We analyze the effect of varying the number of agents and rea
Figure 3: Performance drop when removing each component from Xolver. Bars show average drop on Math (bottom) and LiveCodeBench (top). Verifier is critical for math tasks and cannot be removed, while Tool (Python) and test cases apply only to math and coding respectively.
soning iterations on Xolver’s performance. In a controlled setup, we fix one variable (e.g., 3 agents or 2 iterations) and incrementally increase the other. As shown in Figure 4, performance improves consistently on both AIME $^ { , } 2 5$ and LIVECODEBENCH with more agents or iterations, highlighting the advantage of collaborative and iterative problem solving.
To probe deeper, we conduct a budget-controlled experiment on the AIME $^ { , } 2 5$ dataset, where the total reasoning budget (i.e., number of agents $\times$ number of iterations) is fixed. While iterative reasoning remains a crucial factor for $\mathbb { X }$ olver’s performance, we find that increasing the number of agents—particularly beyond a minimum of three—yields additional, emergent improvements, leading to over a $4 \%$ performance
Figure 4: Impact of iterations and agents in $\mathbb { X }$ olver on AIME ’25 (QWQ-32B) and LIVECODEBENCH (o3-mini-medium).
gain. This suggests that agent diversity and parallelism complement iterative depth, together producing stronger collaborative problem-solving benefits than either alone.
Effect of Retrieval Strategies on $\mathbb { X }$ olver Performance. We evaluate the impact of different retrieval strategies on $\mathbb { X }$ olver by comparing three settings: (1) External Retrieval, where the model retrieves the top- $k$ (e.g., $k = 5$ ) most similar problems and their solutions from an external corpus using a BM25 retriever; (2) Self-Retrieval, where the model recalls the top- $k$ most similar problems and solutions from its own internal memory; and (3) No Retrieval, where neither external nor self-retrieval is used.
Figure 5: Impact of different retrievals in $\mathbb { X }$ olver.
As shown in Figure 5, performance on both AIME $^ { , } 2 5$ and LIVECODEBENCH follows the trend: External Retrieval $>$ Self-Retrieval $>$ No Retrieval, indicating that external retrieval significantly enhances $\mathbb { X }$ olver’s performance. We note that for code tasks, although the external retrieval corpus contains solutions written in $\mathrm { { C + + - a } }$ different language from the target Python—external retrieval still provides a substantial performance boost. Nonetheless, while self-retrieval results in a notable performance drop compared to external retrieval, it still outperforms the no-retrieval baseline with notable margins, serving as a viable alternative when external resources are unavailable.
Fine-grained Performance Analysis We perform a fine-grained analysis of Xolver’s performance across both MATH-500 and LIVECODEBENCH, as shown in Figure 6 and Figure 7.On MATH-500, Xolver (both o3-mini-medium and QWQ-32B) consistently outperforms CHEATSHEET across nearly all seven subject categories, despite the latter relying on costly per-problem
memory updates. The only exception is in Number Theory, where o3-mini-medium scores 99.2 compared to CHEATSHEET’s 99.5. As for QWQ-32B, $\mathbb { X }$ olver achieves substantial accuracy gains over CheatSheet across all categories, with improvements of $+ 9 . 0 \%$ in Prealgebra, $+ 8 . 5 \%$ in Algebra, $+ 1 1 . 0 \%$ in Number Theory, $+ 8 . 5 \%$ in Counting and Probability, $+ 8 . 8 \%$ in Geometry, $+ 1 0 . 0 \%$ in Intermediate Algebra, and $+ 7 . 5 \%$ in Precalculus. These consistent gains highlight $\mathbb { X }$ olver’s strong performance across both symbolic and numerical reasoning.
Figure 6: Fine-grained performance comparison in MATH-500.
On LiveCodeBench, $\mathbb { X }$ olver demonstrates even more pronounced gains. The o3-mini-medium variant achieves $9 5 . 6 \%$ , $9 0 . 4 \%$ , and $8 5 . 8 \%$ accuracy on Easy, Medium, and Hard problems respectively, significantly outperforming CodeSim by $+ 4 . 5 \%$ , $+ 1 1 . 9 \%$ , and a striking $+ 3 2 . 3 \%$ margin on hard examples. Even with a weaker QWQ32B backbone, $\mathbb { X }$ olver $( 9 5 . 2 \%$ , $8 7 . 5 \%$ , $7 0 . 0 \%$ ) surpasses all baselines and achieves similar gains. In contrast to CheatSheet and CodeSim, $\mathbb { X }$ olver leverages multi-agent collaborations and holistic experience learning. These consistent and backbone-agnostic gains across different reasoning tasks underscore $\mathbb { X }$ olver’s robustness and position it as a breakthrough in retrieval and tool-augmented, multi-agent and evolving reasoning systems.
Can a Self-Judge Replace a Judge Agent? We analyze the effect of different judging mechanisms on $\mathbb { X }$ olver’s performance by comparing two setups: (1) self-judging, where each dynamic agent evaluates its own response through self-reflection without altering its role, and (2) external judging, where a separate judge agent is used to assess the responses. We find that self-judging agents tend to be biased in favor of their own outputs, occasionally validating incorrect solutions. This self-bias leads to a noticeable drop in overall performance—specifically, a $9 . 9 \%$ decrease in coding tasks and a $3 . 8 8 \%$ decrease in math tasks, on average.
Cost Analysis and How Long Do Xolver Agents Think? We perform a detailed analysis of token usage in Figure 8, reporting input, reasoning, and output statistics for $\mathbb { X }$ olver (QWQ32B) across all datasets. Our LLM token usage has computational complexity of $O ( m \mathcal { I } )$ , where $m$ is the number of agents and $\boldsymbol { \mathcal { T } }$ is the number of reasoning iterations. However, the run
Figure 7: Performance comparison per difficulty levels in LiveCodeBench
Average Token Usage (Input, Think, Output) per Dataset
Figure 8: Avg numbers of token usage across datasets in Xolver $( + )$ .
time complexity remains $O ( \mathcal { T } )$ since the dynamic agents operate in parallel. This is significantly more efficient than the self-consistency [59], which typically require 32–64 generations per example, as well as the baseline CheatSheet framework, which incurs a memory update complexity of ${ \bar { \boldsymbol { O } } } ( n ^ { 2 } )$ —quadratic in the test dataset size—due to usefulness estimation over all previous examples after solving each new example. As a multi-agent system, $\mathbb { X }$ olver allocates a majority of its tokens to context sharing and inter-agent communication, while approximately $2 5 \%$ are spent on actual reasoning steps.
Nonetheless in Figure 8, we also compare the total token usage of $\mathbb { X }$ olver with a single agent reasoning framework Search-o1 using tiktoken for o3-mini-medium and AutoTokenizer for QWQ-32B for token count. As expected, $\mathbb { X }$ olver incurs higher token costs—approximately $1 . 5 \times$ that of Searcho1—due to its collaborative and iterative multi-agent reasoning. However, this moderate increase represents a highly efficient trade-off given the substantial performance improvements observed. As shown in Figure 6 and Figure 7, $\mathbb { X }$ olver achieves remarkable gains across both domains, including a $+ 3 2 . 3 \%$ absolute improvement on hard coding problems with o3-mini-medium and $9 . 0 5 \%$ accuracy boosts across all Math-500 categories with QWQ-32B. These findings demonstrate that $\mathbb { X }$ olver ’s slightly higher reasoning cost is well-justified by its superior, generalist performance across diverse problem-solving scenarios.
Does Data Shuffling Affect $\mathbb { X }$ olver $( + )$ Performance? Xolver $( + )$ updates its external memory incrementally after solving each new problem. To examine whether the order of test instances impacts performance, we conduct an ablation study by randomly shuffling the sequence of problems in each task. This helps determine if there is any dependency on the data order. Results in Appendix D.3 show that $\mathbb { X }$ olver exhibits minimal performance variation across different shuffles, with a standard deviation of approximately 1 within only 5 runs, indicating that its performance is largely stable regardless of data ordering.
Qualitative Examples In Appendix B, we present qualitative examples along with all the prompts of full-cycle $\mathbb { X }$ olver on both math and code reasoning tasks. These examples illustrate how $\mathbb { X }$ olver initiates reasoning from external or self-retrieved exemplars, engages in multi-agent collaboration, and incrementally accumulates experiences through inter-agent propagation and refinement. The full interaction trace highlights $\mathbb { X }$ olver’s ability to iteratively decompose, solve, and adapt solutions across reasoning steps, showcasing its capacity for dynamic knowledge construction and generalizable problem solving.
More Error Analysis in Math and Code In Figure 9, we present an error analysis across both math and code tasks that goes beyond simple accuracy or pass $\ @ 1$ metrics. While Xolver significantly improves reasoning
Wrong Reso. Wrong Calc. Other Wrong Ans. TLE RTE Syntax + Other AIME '25 LiveCodeBench o3-mini-medium 3.6% 3.0% 1.8% Total: 8.4% 3.9% 2.6% 2.1% 0.8%Total: 9.4% QWQ-32B 7.7% 7.3% 3.6% Total: 18.6% 5.9% 5.3% 3.1% 1.5% Total: 15.8% 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 Error Rate (%) Error Rate (%)
and generation capabilities in both domains, both (o3-mini-medium and QWQ-32B) backbone LLMs can still produce solutions that are syntactically correct yet semantically flawed, resulting in failed executions due to incorrect reasoning, incomplete logic, unoptimized implementations, or misaligned tool usage. In code tasks, failure modes include incorrect final code, time limit exceeded (TLE), runtime errors (RTE), and syntax issues. In math tasks, remaining errors are primarily due to flawed logical derivations or faulty intermediate calculations. Although Python-based tools are available, such calculation errors often occur when agents choose not to invoke these tools—highlighting that tool usage remains decoupled from the model’s core reasoning process (see Appendix A for our prompt design). These findings provide insights for future improvements by exposing the variety of failure modes across domains, and further emphasize the importance of robust self-verification and refinement mechanisms, as employed by Xolver.
Dynamics of Reasoning Patterns in Xolver Traces To understand how $\mathbb { X }$ olver adapts its reasoning process to perform complex reasoning, we analyze the dynamics of reasoning pattern frequencies across difficulty levels in LiveCodeBench, as shown in Table 2. Detailed description of how we collected the reasoning patterns is provided in the Appendix D.1. Our analysis reveals that $\mathbb { X }$ olver dynamically increases self-evaluation and exploratory strategies (e.g., trying new approaches) as problem difficulty grows. Correct solutions demonstrate a declining need for problem rephrasing and subgoal decomposition, indicating more direct and confident reasoning. In contrast, incorrect
Table 2: Changes in major reasoning pattern frequencies as problem difficulty increases in LiveCodeBench, comparing correct vs. incorrect solutions. Green and red indicate statistically significant increases or decreases $( p < 0 . 0 5 )$ . Underlined cells highlight patterns where $\mathbb { X }$ olver improves over OpenCodeReasoning, which otherwise shows a declining trend. Direction arrows denote: $\uparrow =$ increase, $\downarrow =$ decrease, $\downarrow \uparrow =$ mixed trend (decrease in correct, increase in incorrect). $\mathbb { X }$ olver increases use of self-evaluation and new approaches with task difficulty, and demonstrates targeted subgoal setup and problem rephrasing when solutions fail—reflecting its adaptive, collaborative reasoning.
solutions show increased subgoal setup and rephrasing attempts—suggesting that the system recognizes failure and attempts recovery through restructuring. Compared to OpenCodeReasoning, which shows stagnation or regression in key patterns (e.g., self-evaluation), $\mathbb { X }$ olver exhibits robust and adaptive reasoning behavior, supported by multi-agent collaboration and judge feedback. This behavior highlights the generality and flexibility of Xolver ’s reasoning model.
# 5 Case-Study: How Xolver Enhances Reasoning
To further understand the reasoning and problem-solving strategies behind our multi-agent, iterative framework Xolver, we conduct an in-depth analysis combining qualitative runtime inspection with controlled experiments. We begin by manually studying $\mathbb { X }$ olver’s agent interaction traces on AIME $^ { , } 2 5$ and LiveCodeBench. These case studies reveal that at each iteration, dynamic agents attempt to improve
Figure 10: Agents Accuracy and Agreement over iterations.
upon earlier failures by leveraging Judge agent feedback and by aligning with top-ranked outputs stored in the shared memory $\mathcal { D } _ { S }$ . This process results in progressively refined outputs, increased agent alignment, and eventual convergence toward correct solutions.
To verify this behavior systematically, we conduct a controlled experiment across both math and code tasks. We instantiate two dynamic agents with complementary strengths: a Coder agent and a Mathematician agent, each proficient in one domain but suboptimal in the other. We then measure their performance and agreement across iterations—defined as the percentage of problems in which both agents independently produce the same correct answer (for math) or code that passes the same test cases (for code). As shown in Figure 10, both agents demonstrate consistent accuracy improvements over time, accompanied by a rising agreement rate. This not only illustrates mutual influence and learning-by-alignment but also validates the emergence of collaborative synergy.
Crucially, we observe that the presence of the Judge agent plays a vital role in this convergence process. When the Judge agent is removed—as shown in our first ablation—performance degrades significantly. These findings collectively affirm that $\mathbb { X }$ olver’s iterative memory-sharing, feedbackdriven refinement, and role-specialized agents contribute to its strong reasoning performance across domains, making it a compelling framework for general-purpose, self-improving problem solving.
# 6 Related Work
Memory-Augmented and Retrieval-Augmented LLMs. Memory-augmented language models have evolved from static retrieval systems like RAG [25] and REALM [14] to dynamic approaches such as Reflexion [53], MemGPT [43], and Scratchpads [39]. However, these systems operate on isolated tasks, lack cross-problem experience accumulation, and employ single-agent architectures. $\mathbb { X }$ olver addresses these limitations through a novel dual-memory architecture combining episodic long-term memory with dynamic intermediate memory, enabling specialized agents to collectively build and refine experiential knowledge. While prior work has explored cross-trial information sharing [69, 53] and multi-source memory integration [66], these approaches remain confined to single-agent settings. Our framework creates a persistent knowledge base through multi-agent collaboration [10], allowing agents to accumulate expertise from solved problems and leverage collective experience for future tasks.
Multi-Agent Problem Solving. Multi-agent LLM systems address the limitations of single models by leveraging collaborative approaches for improved reliability and task specialization [13, 10]. From early frameworks like CAMEL [27] with fixed role assignments, the field progressed to dynamic role adjustment in AgentVerse [5] and code execution in AutoGen [62]. Recent advances include layered agent networks in DyLAN [31], multi-agent code generation and problem solving [17, 18] and multi-agent debate frameworks [9, 50, 54]. While these systems demonstrate effective collaboration, they operate on isolated problems without cross-task experience accumulation. $\mathbb { X }$ olver introduces dual-memory architecture, holistic experience integration, judge-mediated selection, and continuous episodic corpus expansion—transforming single-problem solvers into experience-aware agents.
LLM Reasoning Enhancement Techniques. Various techniques have emerged to enhance LLM reasoning capabilities beyond standard prompting. Chain-of-Thought [61] introduced step-by-step reasoning, Self-Consistency [58] explores multiple reasoning paths with majority voting, and Tree of Thoughts [64] enables exploration of reasoning branches—yet all remain limited to single-pass generation. Self-reflective approaches like Reflexion [53] enable iterative improvement but operate within single tasks, while retrieval-enhanced methods like CheatSheet [55] and Search-o1 [28] remain confined to single-agent architectures. These approaches share fundamental limitations: no cross-problem learning, no persistent memory, and no multi-agent collaboration. Xolver unifies these enhancements within a multi-agent framework where agents collaboratively refine solutions through judge-mediated iterations and leverage dual memory systems for cross-problem learning.
Tool-Augmented Reasoning. Tool integration extends LLM capabilities beyond language processing. Early systems like WebGPT [38] introduced single-tool integration, while PAL [11] enabled code execution for mathematical reasoning. Multi-tool frameworks evolved with ReAct [65] interleaving reasoning with actions, Chameleon [32] composing multiple tools, and OctoTools [33] standardizing tool planning—yet all remain limited to single-agent execution without iterative refinement or crossproblem learning. $\mathbb { X }$ olver transforms tool use into a collaborative, memory-enriched ecosystem where agents collectively execute tools, share outcomes, and accumulate successful strategies across problems—creating an adaptive framework that evolves with experience. | Despite impressive progress on complex reasoning, current large language
models (LLMs) typically operate in isolation - treating each problem as an
independent attempt, without accumulating or integrating experiential
knowledge. In contrast, expert problem solvers - such as Olympiad or
programming contest teams - leverage a rich tapestry of experiences: absorbing
mentorship from coaches, developing intuition from past problems, leveraging
knowledge of tool usage and library functionality, adapting strategies based on
the expertise and experiences of peers, continuously refining their reasoning
through trial and error, and learning from other related problems even during
competition. We introduce Xolver, a training-free multi-agent reasoning
framework that equips a black-box LLM with a persistent, evolving memory of
holistic experience. Xolver integrates diverse experience modalities, including
external and self-retrieval, tool use, collaborative interactions, agent-driven
evaluation, and iterative refinement. By learning from relevant strategies,
code fragments, and abstract reasoning patterns at inference time, Xolver
avoids generating solutions from scratch - marking a transition from isolated
inference toward experience-aware language agents. Built on both open-weight
and proprietary models, Xolver consistently outperforms specialized reasoning
agents. Even with lightweight backbones (e.g., QWQ-32B), it often surpasses
advanced models including Qwen3-235B, Gemini 2.5 Pro, o3, and o4-mini-high.
With o3-mini-high, it achieves new best results on GSM8K (98.1%), AIME'24
(94.4%), AIME'25 (93.7%), Math-500 (99.8%), and LiveCodeBench-V5 (91.6%) -
highlighting holistic experience learning as a key step toward generalist
agents capable of expert-level reasoning. Code and data are available at
https://kagnlp.github.io/xolver.github.io/. | [
"cs.CL",
"cs.AI"
] |
# 1. Introduction
As machine learning systems become increasingly pervasive in sensitive domains, such as medical diagnostics and userfacing recommendation engines, ensuring compliance with privacy regulations is paramount. The “right to be forgotten,” codified in regulations such as the EU’s General Data
Several architecture-based approaches tackle the problem differently. Forsaken (Ma et al., 2023) learns a mask over neurons to erase the influence of forgotten data. In generative modeling, diffusion and transformer-based methods now support object- or identity-style forgetting through finetuning or prompt editing (Zhang et al., 2024a). Panda et al. (2024) introduced a label-annealing strategy to iteratively erase high-level concepts. However, many of these approaches lack formal guarantees and are typically confined to specific architectures or datatypes. Overall, while machine unlearning is gaining traction, achieving efficient, generalizable, and certifiable forgetting remains a significant challenge.
In this study, we introduce Forget-Aligned Model Reconstruction (FAMR), a post-hoc forgetting framework that directly modifies a trained image classifier to erase specified targets—such as samples, classes, or visual styles—without retraining from scratch. The core idea is to combine a forgetting loss that drives the model’s outputs on the forget set toward a uniform (maximally uncertain) distribution, with an $\ell _ { 2 }$ anchor penalty that constrains deviations from the original parameters. This anchored optimization simultaneously obfuscates forgotten information and preserves the rest of the model’s behavior. Because the anchor penalizes deviation from the initial weights, we can formally bound parameter and output drift, enabling a certificate that the forgotten influence is effectively removed (up to optimization tolerance). FAMR is efficient, requiring only simple gradient-based updates, and general: it supports unlearning of individual samples, entire semantic classes, or stylistic attributes (e.g., background color or texture patterns). Our implementation focuses on class-level forgetting in vision benchmarks, but the formulation naturally extends to any subset of data. In summary, our contributions are as follows:
• We introduce a theoretically grounded anchored forgetting objective that combines a uniform-prediction loss on targeted data with an $L _ { 2 }$ penalty to the original model weights. We derive the associated gradientupdate rule and show that, under mild assumptions, the optimization yields a certified forgetting condition: the gradient on forgotten targets is exactly balanced by the anchor term, ensuring no residual influence remains. • We demonstrate that this framework naturally generalizes to multiple unlearning scenarios. By selecting the forgetting set $\tau$ to be individual samples, entire semantic classes, or style-based groups, • We empirically validate FAMR on standard image classification benchmarks, showing that it effectively removes targeted knowledge (samples, classes, or style cues) with minimal accuracy loss on retained data.
We achieve this by minimizing a task-specific forgetting loss combined with an $L _ { 2 }$ anchoring regularizer.
# 2.2. Forget-Aligned Optimization Objective
The general objective is:
$$
\mathcal { T } ( \theta ) = \mathcal { L } _ { \mathrm { f o r g e t } } ( \theta ) + \frac { \lambda } { 2 } \Vert \theta - \theta _ { 0 } \Vert _ { 2 } ^ { 2 } ,
$$
where $\lambda > 0$ controls the strength of the anchor.
2.2.1. (A) SAMPLE OR CLASS FORGETTING (UNIFORM KL LOSS)
To forget training samples or a full class, we enforce high uncertainty via uniform predictions:
$$
\mathcal { L } _ { \mathrm { f o r g e t } } ^ { \mathrm { K L } } ( \theta ) = \sum _ { ( x , y ) \in \mathcal { T } } \mathrm { K L } \left( \mathbf { u } \parallel p _ { \theta } \big ( \boldsymbol { y } \mid \boldsymbol { x } \big ) \right) ,
$$
where $\mathbf { u } = \left[ \textstyle { \frac { 1 } { C } } , \dots , { \frac { 1 } { C } } \right]$ is the uniform distribution over $C$ classes.
# 2.2.2. (B) STYLE FORGETTING (GRAM MATRIX LOSS)
To forget stylistic patterns, we define a perceptual feature extractor $\phi ( x )$ (e.g., activations from an intermediate CNN layer) and use the Gram matrix:
$$
G _ { \phi } ( x ) = \phi ( x ) \phi ( x ) ^ { \top } .
$$
The style loss penalizes retention of stylistic correlations:
$$
\mathcal { L } _ { \mathrm { f o r g e t } } ^ { \mathrm { s t y l e } } ( \theta ) = \sum _ { x \in \mathcal { T } } \| G _ { \phi } ( x ) - G _ { \mathrm { t a r g e t } } \| _ { F } ^ { 2 } ,
$$
where $G _ { \mathrm { t a r g e t } }$ is a neutral or baseline style (e.g., average across classes), and $\| \cdot \| _ { F }$ denotes the Frobenius norm.
# 2.2.3. (C) COMBINED FORGETTING LOSS
In general, the final forgetting loss combines uncertaintydriven and style-specific objectives:
# 2. Methodology
# 2.1. Problem Setup
Let $\boldsymbol { \mathcal { D } } = \{ ( x _ { i } , y _ { i } ) \} _ { i = 1 } ^ { N }$ be the training dataset used to fit a classifier $f _ { \theta _ { 0 } }$ with parameters $\theta _ { 0 }$ . The model produces softmax outputs $p _ { \theta } ( y \mid x ) = \operatorname { s o f t m a x } ( f _ { \theta } ( x ) )$ over $C$ class labels.
Given a forget set $\tau \subset \mathcal { D }$ , our goal is to compute new parameters $\theta ^ { * }$ such that:
1. $f _ { \theta ^ { \ast } } ( x )$ gives no confident predictions on $x \in \tau$ .
2. The model remains close to $f _ { \theta _ { 0 } }$ on ${ \mathcal { D } } \backslash { \mathcal { T } }$ .
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { f o r g e t } } ( \theta ) = \alpha \cdot \mathcal { L } _ { \mathrm { f o r g e t } } ^ { \mathrm { K L } } ( \theta ) + \beta \cdot \mathcal { L } _ { \mathrm { f o r g e t } } ^ { \mathrm { s t y l e } } ( \theta ) , } \end{array}
$$
where $\alpha , \beta \ge 0$ are task-specific weighting coefficients.
By varying the forget set $\tau$ and adapting the loss formulation ${ \mathcal { L } } _ { \mathrm { f o r g e t } } ( \theta )$ , FAMR accommodates diverse unlearning scenarios: (i) sample-level forgetting, via uniform prediction enforcement on individual instances; (ii) class- or conceptlevel forgetting, through KL divergence minimization; (iii) style-level forgetting, using perceptual Gram matrix losses.
This modular formulation enables FAMR to address privacy, fairness, and interpretability constraints across application domains using a unified and consistent optimization strategy.
# 2.3. Gradient-Based Update Algorithm
We optimize $\mathcal { I } ( \boldsymbol { \theta } )$ using gradient descent. Below is the update procedure:
Algorithm 1 Forget-Aligned Model Reconstruction (FAMR)
Require: Initial weights $\theta _ { 0 }$ , forget set $\tau$ , anchor coefficient $T$
$\lambda$ , learning rate $\eta$ , iterations
1: Initialize $\theta \theta _ { 0 }$
2: for $t = 1$ to $T$ do
3: Sample batch $( x , y ) \sim T$
4: Compute outputs $p _ { \theta } ( y \mid x ) = \operatorname { s o f t m a x } ( f _ { \theta } ( x ) )$
5: Compute forgetting gradient: $g _ { \mathrm { f o r g e t } } \nabla _ { \theta } \mathcal { L } _ { \mathrm { f o r g e t } } ( \theta )$
6: Compute anchor gradient: $g _ { \mathrm { a n c h o r } } \lambda ( \theta - \theta _ { 0 } )$
7: Update: $\theta \theta - \eta \cdot ( g _ { \mathrm { f o r g e t } } + g _ { \mathrm { a n c h o r } } )$
8: end for
9: Return Updated weights $\theta$
This lightweight gradient-based routine optimizes the anchored forgetting objective with minimal computational overhead, enabling efficient post-hoc unlearning in deep networks without retraining or architectural modifications.
# 3. Theoretical Analysis
We present a theoretical analysis of the FAMR objective, characterizing its behavior and demonstrating its approximation to ideal retraining.
# 3.1. Local Convergence and Stationarity
Assuming ${ \mathcal { L } } _ { \mathrm { f o r g e t } } ( \theta )$ is smooth and differentiable, and the anchor term $\frac { \lambda } { 2 } \bar { \| \theta - \theta _ { 0 } \| _ { 2 } } ^ { 2 }$ is strongly convex, the full objective $\mathcal { I } ( \boldsymbol { \theta } )$ is locally strongly convex around $\theta _ { 0 }$ . Gradient descent thus converges to a unique local minimum $\theta ^ { * }$ satisfying:
$$
\begin{array} { r } { \nabla \mathcal { L } _ { \mathrm { f o r g e t } } ( \theta ^ { * } ) + \lambda ( \theta ^ { * } - \theta _ { 0 } ) = 0 . } \end{array}
$$
This stationarity condition ensures the model is maximally uncertain on the forget set while minimally deviating from the original model.
# 3.2. Approximation to Ideal Retraining
Let $w ^ { \ast }$ denote the weights obtained by retraining from scratch on $\mathcal { D } \setminus \mathcal { T }$ . Influence-function theory provides a first-order approximation:
$$
\boldsymbol { w } ^ { * } \approx \boldsymbol { \theta } _ { 0 } - \boldsymbol { H } ^ { - 1 } \sum _ { ( \boldsymbol { x } , \boldsymbol { y } ) \in \mathcal { T } } \nabla \ell ( \boldsymbol { x } , \boldsymbol { y } ; \boldsymbol { \theta } _ { 0 } ) ,
$$
where $H$ is the Hessian of the loss over $\mathcal { D }$ . FAMR’s update solves:
$$
( H + \lambda I ) ( \theta ^ { * } - \theta _ { 0 } ) = - \sum _ { ( x , y ) \in \mathcal { T } } \nabla \ell ( x , y ; \theta _ { 0 } ) ,
$$
implying:
$$
\left. \theta ^ { * } - w ^ { * } \right. = \mathcal { O } \left( \frac { \lambda } { \lambda _ { \operatorname* { m i n } } ^ { 2 } ( H ) } \left. \sum \nabla \ell ( x , y ; \theta _ { 0 } ) \right. \right) .
$$
Hence, as $\lambda 0$ , $\theta ^ { * } \to w ^ { * }$ .
# 3.3. Certified Output Divergence Bound
Let $f _ { \theta ^ { * } }$ be the output of FAMR and $f _ { w ^ { \ast } }$ be the retrained model. If $f$ is Lipschitz with constant $L _ { f }$ , then for any input $x$ :
$$
\lVert f _ { \theta ^ { * } } ( x ) - f _ { w ^ { * } } ( x ) \rVert \leq L _ { f } \cdot \lVert \theta ^ { * } - w ^ { * } \rVert .
$$
Thus, output differences are tightly controlled by $\lambda$ , providing an approximate certificate of removal fidelity.
# 4. Experiments and Results
We evaluate FAMR on two standard image classification datasets: CIFAR-100 (Krizhevsky et al., 2009) and ImageNet-100 (Deng et al., 2009). For backbone architectures, we use four pretrained Vision Transformer (ViT) models—ViT-Tiny (ViT-Ti), ViT-Small (ViT-S), ViT-Base (ViT-B), and ViT-Large (ViT-L)—sourced from HuggingFace’s transformers and timm libraries. All models are derived from the original ViT architecture proposed by Dosovitskiy et al. (Dosovitskiy et al., 2020), and were pretrained on the full ImageNet-1K dataset using supervised learning. Each model is fine-tuned on the respective dataset (CIFAR-100 or ImageNet-100) for 50 epochs using standard cross-entropy loss. Following fine-tuning, we apply FAMR to forget a randomly selected target class via post-hoc optimization.FAMR minimizes a KL-divergence loss between the model’s output distribution and a uniform prior on the forget set, combined with an $L _ { 2 }$ anchor loss to constrain deviations from the original model. The optimization is performed for 10 epochs with a learning rate of $1 0 ^ { - 4 }$ and anchor strength $\rho = 0 . 1$ .
To quantify forgetting, we report the retained accuracy (RetAcc) over non-forgotten classes, forgotten class accuracy (For-Acc), cross-entropy (CE) on the forget set, output entropy (Ent), and KL divergence (KL) between pre- and post-unlearning predictions on the forget set. Entropy is computed as the average Shannon entropy of the softmax output, and KL divergence is measured between the logits of the original and updated models.
As shown in Tables 1 and 2, FAMR drives For-Acc to nearzero values across all ViT variants, while preserving high performance on retained classes. Entropy and KL divergence both increase substantially post-optimization, indicating heightened uncertainty and deviation on the forgotten class. Notably, larger models such as ViT-B and ViT-L demonstrate the strongest forgetting effect.
Table 1. FAMR Unlearning Results on CIFAR-100 using Vision Transformer Variants
Table 2. FAMR Unlearning Results on ImageNet-100 using Vision Transformer Variants
We analyze the temporal evolution of our forgetting process across different model architectures and datasets, as shown in Figure 1. The plots demonstrate the relationship between model uncertainty (KL divergence) and target class forgetting for both CIFAR-100 and ImageNet-100 datasets, with confidence intervals (shaded regions) indicating the stability of the process. Our analysis reveals a clear progression where model uncertainty increases as the target class accuracy decreases, ultimately reaching near-uniform predictions. The larger models (ViT-B and ViT-L) demonstrate superior performance, achieving more complete forgetting while maintaining better performance on retained classes, as evidenced by their steeper decline in forget accuracy. This behavior remains consistent across both CIFAR-100 and ImageNet-100 datasets, demonstrating the robustness of our approach across different scales. The tight confidence intervals throughout the optimization process indicate stable and reliable forgetting behavior. Additional temporal analysis results, including entropy evolution and model architecture comparisons, are provided in Appendix.
# Impact Statement
This work advances machine unlearning to enhance data privacy and model accountability in deployed ML systems. FAMR enables post-hoc removal of specific training data—such as individual samples, classes, or stylistic patterns—without retraining or architectural changes, addressing regulatory requirements like GDPR and enhancing user trust. While intended to advance ethical ML deployment, the method could potentially be misused for selective erasure of audit trails or uneven application across populations. We encourage responsible deployment with transparency and fairness. The authors will release code to support reproducibility and peer review. This work does not involve human subjects, personally identifiable data, or dual-use
Figure 1. Evolution of model uncertainty and forgetting process. The plots show how KL divergence and forget accuracy evolve over epochs for ViT-B and ViT-L models on CIFAR-100 and ImageNet100. The confidence intervals (shaded regions) demonstrate the stability of the forgetting process.
applications. | As machine learning systems increasingly rely on data subject to privacy
regulation, selectively unlearning specific information from trained models has
become essential. In image classification, this involves removing the influence
of particular training samples, semantic classes, or visual styles without full
retraining. We introduce \textbf{Forget-Aligned Model Reconstruction (FAMR)}, a
theoretically grounded and computationally efficient framework for post-hoc
unlearning in deep image classifiers. FAMR frames forgetting as a constrained
optimization problem that minimizes a uniform-prediction loss on the forget set
while anchoring model parameters to their original values via an $\ell_2$
penalty. A theoretical analysis links FAMR's solution to
influence-function-based retraining approximations, with bounds on parameter
and output deviation. Empirical results on class forgetting tasks using
CIFAR-10 and ImageNet-100 demonstrate FAMR's effectiveness, with strong
performance retention and minimal computational overhead. The framework
generalizes naturally to concept and style erasure, offering a scalable and
certifiable route to efficient post-hoc forgetting in vision models. | [
"cs.LG",
"cs.CV"
] |
# 1 INTRODUCTION
V IDEO Scene Parsing (VSP) is a fundamental problem in pixel in a video sequence. It includes key tasks such as Video Semantic Segmentation (VSS), Video Instance Segmentation (VIS), and Video Panoptic Segmentation (VPS). By bridging the gap between static image analysis [1] and dynamic scene understanding [2], VSP plays a vital role in both academic research and industrial applications. Academically, VSP poses unique challenges, such as ensuring temporal consistency across frames [3]–[5], effectively extracting spatiotemporal features [6], [7], and accurately tracking dynamic objects in complex environments [8]. Addressing these challenges not only advances the theoretical foundations of computer vision but also drives innovation in related domains like pattern recognition and machine learning. From an industrial perspective, VSP underpins a wide range of critical applications, including autonomous driving, intelligent surveillance, robotics, and video editing. The ability to understand and interpret dynamic visual scenes is essential for enhancing decision-making processes and enabling robust performance in real-world scenarios.
Historically, early efforts in VSP relied heavily on handcrafted features such as color histograms, texture descriptions, and optical flow [9]–[11], as well as classical machine learning models. Among these were clustering methods [12], graph-based approaches [13], support vector machines (SVMs) [14], random forests [15], and probabilistic graphical models like Markov random fields and conditional random fields [16], [17]. While these foundational techniques laid the groundwork for the field, their limited scalability and reliance on domain-specific feature engineering hindered their applicability to complex video data.
The advent of deep learning, particularly Fully Convolutional Networks (FCNs) [1], [18]–[20], marked a substantial paradigm shift in the field of VSP. FCNs, with their ability to learn hierarchical feature representations and predict pixel-level labels, have significantly enhanced the accuracy and efficiency of VSP tasks. Over the last decade, FCN-based methods [21]–[25] have emerged as the dominant approach, establishing new benchmarks and demonstrating their versatility across various VSP scenarios.
Building upon the advancements of deep learning, the rise of transformer architectures [26] has further revolutionized the landscape of computer vision [27]–[35]. Originally developed for natural language processing (NLP), transformers [26] introduced the self-attention mechanism, which excels at capturing longrange dependencies and contextual relationships. Inspired by their success in NLP, vision transformers (e.g., ViT [36], DETR [37]) have been adapted for visual tasks, thereby redefining the stateof-the-art in image and video segmentation. These transformerbased models leverage self-attention to model global interactions across spatial and temporal dimensions, overcoming the locality constraints of traditional Convolutional Neural Networks (CNNs) and paving the way for innovation in VSP.
In response to these technological advancements, the scope of VSP has broadened significantly to encompass increasingly sophisticated tasks. Video Tracking & Segmentation (VTS) represents a critical extension where the objective is not only to segment objects but also to maintain their identities consistently across frames [38], [39]. This task demands robust association strategies and the ability to handle occlusions, abrupt motion changes, and complex interactions, making it indispensable for applications such as multi-object tracking in crowded scenes and advanced video editing workflows.
Fig. 1. Structure of this survey. The second row corresponds to seven sections.
Another emerging frontier is Open-Vocabulary Video Segmentation (OVVS), which integrates the CLIP model [40] to transcend the limitations of fixed label sets in VSS. By leveraging multi-modal learning and natural language cues, open-vocabulary approaches [41]–[45] empower systems to segment objects beyond predefined categories, thereby accommodating the vast diversity of objects encountered in real-world videos. This paradigm shift is particularly relevant in dynamic environments where new or rare objects frequently appear, demanding models that are both adaptable and capable of zero-shot generalization.
In light of these advancements, our survey provides a systematic exploration of the multifaceted progress in VSP. Unlike existing surveys that often emphasize specific subfields or techniques, our work bridges the gap between convolutional and transformerbased methodologies while adopting a unified perspective that encompasses VSS, VIS, VPS, VTS, and OVVS. Previous surveys, such as [46], have primarily focused on Video Object Segmentation (VOS), offering limited coverage of semantic, instance, and panoptic segmentation—key components for a holistic understanding of VSP. Similarly, the work in [47] centers extensively on transformer architectures, often sidelining convolution-based approaches that remain foundational to this field. By highlighting these shortcomings, our survey not only synthesizes the entire spectrum of VSP techniques but also critically assesses the evolution of both convolution- and transformer-based methods.
By addressing both longstanding challenges, such as temporal consistency and dynamic scene interpretation, and emerging demands like tracking, segmentation, and open-vocabulary recognition, this survey offers a comprehensive overview of the current state-of-the-art while laying the groundwork for future research directions. The integration of these diverse tasks reflects the natural progression of VSP towards a more holistic understanding of dynamic environments, ultimately driving innovations that are critical for real-world applications.
# 2 BACKGROUND
In this section, we first provide a formalized definition for the three primary tasks in Video Scene Parsing (VSP): Video Semantic
Segmentation (VSS), Video Instance Segmentation (VIS), and Video Panoptic Segmentation (VPS). We also include definitions for Video Tracking & Segmentation (VTS), as well as OpenVocabulary Video Segmentation (OVVS), which are emerging tasks that expand the scope of VSP. This classification clarifies the scope and focus of VSS research $\ S$ . Subsequently, we present an overview of the history of VSP in $\ S 2 . 2$ . Finally, in $\ S$ , we introduce several related research areas that intersect with VSP.
# 2.1 Task Definition
To establish a framework for understanding the various tasks, let $X$ and $Y$ represent the input and output segmentation spaces, respectively. Deep learning-based approaches for video segmentation aim to learn an optimal mapping function $f ^ { * } : X \to Y$ , where the objective is to map video data to the corresponding segmentation labels. Visual examples summarizing the differences among VSS, VIS, VPS, VTS, and OVVS with $T \ = \ 4$ are presented in Fig. 2.
Video Semantic Segmentation (VSS). The task of VSS focuses on predicting pixel-wise segmentation masks for each frame in a video clip $\bar { V } \overset { } { \in } \mathbb { R } ^ { T \times H \times W \overset { \mathbf { \bullet } } { \times } 3 }$ , where $T$ represents the number of frames, $H$ is the height, $W$ is the width, and the last dimension represents color channels. These masks classify each pixel into predefined semantic categories such as road, sky, person, and so on, without distinguishing individual object instances. Notably, VSS does not require assigning unique IDs to objects or maintaining temporal consistency for object tracking.
Video Instance Segmentation (VIS). VIS builds upon VSS by incorporating instance-level masks associated with each object within the video. In this task, each object instance receives a unique ID, facilitating instance tracking across frames. The input video $V ~ \in ~ \mathbb { R } ^ { T \times H \times W \times 3 }$ is segmented into a set of object masks $\{ ( m _ { i } , c _ { i } ) \} _ { i = 1 } ^ { N }$ , where $m _ { i } \ \stackrel { \bf { \bar { \Pi } } } { \in } \ \{ 0 , 1 \} ^ { T \times H \times W }$ denotes the mask for the $i$ -th instance, and $c _ { i }$ represents its class label. VIS necessitates maintaining temporal consistency in tracking each instance throughout the video.
Video Panoptic Segmentation (VPS). VPS further merges the objectives of VSS and VIS by jointly predicting segmentation masks for both “stuff” (e.g., amorphous regions like roads or skies) and “thing” (i.e., countable object instances such as cars and people) classes. Each “thing” mask is attributed a unique ID for tracking purposes, while “stuff” masks do not require unique identifiers. VPS requires temporally consistent segmentation and tracking results for all pixels, culminating in a comprehensive segmentation output that captures both “things” and “stuff”.
Fig. 2. Illustration of different VSP tasks. The examples are sampled from the VIPSeg dataset [48]. For VSS, the same color indicates the same semantic class across the entire video frame, without distinguishing instances. For VIS and VPS, different object instances are represented by different colors, with VPS further including both foreground instances and background semantics. VTS maintains consistent instance colors across video frames to reflect temporal identity. OVVS supports segmenting novel object categories beyond the predefined label set, with category names overlaid on each segment for clarity.
Video Tracking & Segmentation (VTS). This task is an integrated approach where both pixel-level segmentation and temporal tracking of objects across frames are performed. Given a video clip input as $\mathbf { \bar { \Sigma } } _ { V } \in \mathbf { \Sigma } _ { \mathbb { R } ^ { T \times H \times W \times 3 } }$ , VTS predicts segmentation masks for each object and assigns unique instance IDs to ensure consistent tracking over time. The output consists of object masks represented as $\{ ( m _ { i } , c _ { i } , \mathrm { i d } _ { i } ) \} _ { i = 1 } ^ { N }$ , where each mask $\mathbf { \bar { \Lambda } } _ { m _ { i } } \in \{ 0 , 1 \} ^ { T \times \mathbf { \bar { H } } \times W }$ corresponds to the $i$ -th object, $c _ { i }$ denotes its class label, and $\mathrm { i d } _ { i }$ represents its unique identity. This formulation effectively addresses challenges such as object motion, occlusion, and appearance variations across frames.
Open-Vocabulary Video Segmentation (OVVS). OVVS broadens the traditional scope of VSS by removing the constraint of a fixed label set. For a given video clip V ∈ RT ×H×W ×3, the goal of OVVS is to generate pixel-level segmentation masks while assigning semantic labels derived from an open vocabulary, thereby accommodating both seen and unseen categories. This approach accommodates both seen and unseen categories, leveraging large-scale pre-trained models and cross-modal learning frameworks. As a result, OVVS significantly enhances the adaptability of video segmentation methods in real-world scenarios where new or rare object classes may emerge, allowing for more flexible and robust segmentation capabilities.
# 2.2 History
The origins of image segmentation can be traced back to early methods developed for object boundary detection [49], which subsequently catalyzed the development of a wide array of segmentation algorithms. Owing to the inherent similarities between image and video segmentation, many of these techniques have been extended to the video domain, spurring rapid advancements in video segmentation methodologies.
Initial attempts at VSP primarily relied on simple and efficient over-segmentation techniques [13], [50]–[53]. These methods segmented continuous video into multiple regions by detecting abrupt changes in pixel intensities or by grouping pixels based on similarity. Such segmentation provided a rudimentary partitioning of the video for subsequent post-processing. However, despite their ability to delineate regions of interest to some extent, these approaches lacked an effective mechanism for modeling the spatiotemporal information inherent in videos, making it difficult to directly produce accurate and consistent segmentation masks.
As machine learning and computer vision technologies advanced, researchers began to recognize the limitations of relying solely on low-level pixel features. This recognition led to a growing demand for high-level semantic cues and an understanding of spatiotemporal correlations to enhance parsing quality. Consequently, some scholars started incorporating sophisticated methods, such as optical flow techniques [54]–[57], graph models [13], [58], and graph cut-based methods [59] into the video segmentation process. These innovations aimed to harness motion information between consecutive frames to improve temporal stability and boundary consistency in the segmentation results.
The transformative success of deep CNNs in image segmentation [1], [18]–[20] fueled significant interest in extending these approaches to VSP. Early methods often involved a “frame-wise parsing followed by post-processing” strategy, where a trained image segmentation network parsed each individual frame. Subsequently, techniques like optical flow [60] or conditional random fields (CRF) [61] were applied to smooth the segmentation results across frames, addressing some of the temporal coherency issues encountered in earlier methods.
In recent years, the field of VSP has witnessed significant advancements, propelled by a range of innovative methodologies. Instance segmentation techniques have been successfully extended into the video domain [62], enabling more precise objectlevel understanding across frames. To alleviate reliance on largescale annotated datasets, unsupervised and self-supervised learning strategies have emerged as powerful alternatives [63]–[65], effectively leveraging unlabeled data to enhance representation learning, thus addressing one of the significant bottlenecks in the field. Moreover, in the pursuit of achieving real-time performance, researchers have developed efficient architectures to strike a balance between accuracy and computational cost [24], [66]– [69]. The incorporation of Transformer-based models [70], [71] has further enhanced the ability to capture long-range temporal dependencies, enabling models to better comprehend complex scene dynamics. Additionally, advancements in dynamic network designs, predictive feature learning mechanisms, and spatiotemporal memory networks have significantly improved the ability of models to handle temporal variations in complex video scenes.
Overall, although traditional VSP methods have achieved commendable results in specific contexts, they remain constrained by the intricacies of handcrafted feature engineering. The advent of deep learning techniques in recent years has ushered VSP into a new era, significantly enhancing its performance in complex environments. In the following sections, we provide a comprehensive introduction to the recent advancements in this domain.
# 2.3 Related Research Areas
Several research areas are closely related to VSP. Below is an overview of these closely related topics of VSP.
Image Semantic Segmentation. The success of image semantic segmentation [25], [72]–[76] has significantly accelerated the rapid development of the VSP field. Early VSS approaches [77], [78] primarily relied on applying image semantic segmentation methods to individual frames. However, more recent methods have systematically explored spatiotemporal consistency to improve both accuracy and efficiency. Despite these advances, image semantic segmentation remains a fundamental cornerstone for stateof-the-art VSS techniques.
Video Object Segmentation. Advancements in Video Object Segmentation (VOS), exemplified by seminal works such as [79]– [83], have significantly influenced VSP. These studies demonstrated that fine-tuning deep networks with minimal supervision and integrating spatiotemporal memory mechanisms can achieve robust, temporally consistent segmentation. Many methodologies developed in VOS have been directly adopted in VSP to enhance semantic coherence across frames, addressing complex challenges such as occlusions and rapid motion in dynamic scenes.
Video Object Detection. To extend object detection into the video domain, video object detectors have incorporated temporal cues into their conventional frameworks [84]–[89]. Both video object detection and instance-level video segmentation share core technical challenges, including the maintenance of temporal consistency, mitigation of motion blur, and handling of occlusions. By leveraging advanced temporal modeling techniques, these approaches effectively detect and segment objects in dynamic environments. Moreover, the integration of temporal information not only enhances detection accuracy but also establishes a strong foundation for VSP, where understanding scene dynamics and object interactions is essential.
# 3 METHODS: A SURVEY
# 3.1 Video Semantic Segmentation
Expanding the advancements of deep learning in image semantic segmentation into video analysis has emerged as a prominent research focus in computer vision [114]. While the most straightforward approach involves applying image segmentation models to each video frame individually, this strategy neglects the temporal continuity and coherence inherent in video data. To address this limitation, research has evolved in four key directions, each interlinked and often overlapping in methodology.
Flow-based Methods. Optical flow is a fundamental technique used to capture the apparent motion of objects across consecutive video frames, resulting from the relative motion between the camera and the observed scene [23]. Rooted in the assumption of brightness consistency, optical flow posits that the intensity of a pixel remains unchanged over time, such that any displacement in pixel position directly correlates with motion. By estimating flow vectors for each pixel, optical flow methods generate a dense motion field that encodes the temporal relationships between frames, thereby enabling enhanced modeling of dynamic phenomena. These techniques have found widespread application in diverse video analysis tasks, including object tracking, scene flow estimation, and video segmentation, where they effectively capture the spatiotemporal dependencies that characterize motion. A key strength of optical flow-based approaches lies in their ability to preserve temporal coherence across video sequences, ensuring stable and accurate segmentation even in highly dynamic scenes.
By leveraging the dense motion field generated by optical flow, advanced models propagate temporal information and refine spatial features across frames, achieving robust segmentation under diverse conditions. One prominent strategy is the use of feature propagation, where approaches like NetWarp [3] employ optical flow to guide feature alignment across consecutive frames, ensuring temporal consistency. Similar to this idea, predictive frameworks such as PEARL [60] combine optical flow with feature prediction to further improve segmentation accuracy by capturing future motion trends. Adaptive feature propagation has also been explored through architectures like the Dynamic Video Segmentation Network (DVSNet) [68], which incorporates dynamic update mechanisms for efficient segmentation.
Moreover, the use of advanced techniques to explicitly model spatiotemporal dependencies has led to significant improvements in segmentation robustness. Methods such as GCRF [90] leverage deep spatiotemporal conditional random fields, optimizing consistency through probabilistic pixel-level interactions. Recurrent architectures like STGRU [95] introduce gating mechanisms to selectively propagate optical flow information, allowing for adaptive handling of occlusions and non-linear motion. Efficiencyfocused approaches such as Accel [4] balance speed and accuracy by integrating low-resolution motion prediction with highresolution corrections, offering a scalable solution for real-time applications. Meanwhile, joint learning frameworks like EFC [97] simultaneously optimize optical flow and segmentation tasks using bidirectional propagation, significantly improving temporal coherence. Finally, efficient temporal consistency frameworks, exemplified by ETC [100], balance computational efficiency and segmentation quality through frame-wise inference and featurelevel temporal aggregation. Together, these methods highlight the versatility and effectiveness of optical flow in addressing the intricate spatiotemporal challenges of VSS, pushing the boundaries of accuracy, efficiency, and robustness in dynamic video analysis.
Attention-based Methods. Attention mechanisms have recently emerged as a popular approach in VSS, enabling models to prioritize critical spatial and temporal features dynamically. By assigning varying weights to different regions of the input, attention models selectively focus on the most relevant information, enhancing segmentation accuracy in complex and dynamic scenes [107]. Spatial attention mechanisms highlight important regions within individual frames, while temporal attention captures key moments across the video sequence. When combined with optical flow, attention mechanisms refine motion cues and improve the handling of occlusions, leading to more precise and temporally consistent segmentation. This synergy has proven effective in addressing the challenges posed by dynamic video data, offering a powerful tool for robust and adaptive segmentation.
Furthermore, it enables models to focus on the most relevant spatiotemporal features selectively and thereby improves segmentation accuracy and efficiency. By leveraging both spatial and temporal dependencies, these mechanisms address the challenges posed by dynamic and cluttered video scenes. For instance, TDNet [98] introduces a time-distributed network architecture that combines feature distillation and multi-scale temporal aggregation, significantly enhancing segmentation speed and precision through a lightweight temporal branch. Building upon this, CFFM [70] employs a feature pyramid for adaptive, multi-scale feature fusion, while $\mathrm { C F F M + + }$ [112] further refines this approach with a global temporal context extraction mechanism, optimizing both segmentation accuracy and computational efficiency. Similarly, MRCFA [71] utilizes cross-frame feature correlation analysis to capture complex temporal dependencies, improving the model’s ability to handle intricate motion patterns.
TABLE 1 Summary of essential characteristics for reviewed VSS methods.
In parallel, CIRKD [102] leverages knowledge distillation to transfer temporal information from low-resolution features to high-resolution ones, optimizing the utilization of temporal cues. MVNet [105] adopts a multi-view feature fusion approach, enabling precise segmentation across multiple perspectives, while SSLTM [65] introduces a spatiotemporal context modeling module to model short- and long-term temporal dependencies jointly. Furthermore, MPVSS [106] employs an adaptive mask propagation strategy to balance accuracy and efficiency by utilizing keyframe segmentation results to guide non-keyframe segmentation. Finally, VPSeg [109] enhances segmentation by incorporating vanishing point (VP) priors, using both sparse-to-dense feature mining and VP-guided motion fusion to handle dynamic and static motion contexts. Collectively, these attention-based methods demonstrate the power of selectively focusing on relevant spatiotemporal features, offering robust solutions for video segmentation tasks in complex, dynamic environments.
Real-time Methods. Real-time optimization methods have emerged as a critical focus in VSS, addressing the need for highspeed processing in latency-sensitive applications without compromising accuracy. These methods achieve efficiency through lightweight architectures, adaptive scheduling, and feature reuse. For instance, Clockwork [66] employs stage-wise clock signals to selectively control computation in a fully convolutional network, using cached results to avoid redundant processing. Similarly, LVS [69] integrates feature propagation modules with adaptive schedulers, ensuring low-latency segmentation.
Building on this foundation, dynamic methods such as DVSNet [68] excel in optimizing resource allocation and synchronizing segmentation with optical flow networks, thereby accelerating processing while maintaining accuracy. In contrast, attentionbased approaches like TDNet [98], ETC [100], CFFM [70], and MPVSS [106] enhance efficiency and real-time performance by employing innovative techniques focused on low-resolution feature management and mask propagation. TV3S [113] takes a distinctive approach by employing the Mamba state space model to independently process spatial patches, integrating a selective gating mechanism, shift operations, and a hierarchical structure. This design enables TV3S to balance accuracy and efficiency, delivering robust performance in VSS tasks. These innovations collectively demonstrate how real-time optimization in VSS can seamlessly integrate efficiency and precision, enabling robust segmentation for dynamic, resource-constrained environments.
Semi-/Weakly-supervised Methods. Semi-supervised and weakly-supervised methods have become pivotal in VSS, offering practical solutions to reduce dependence on extensively labeled datasets while ensuring high segmentation accuracy. By leveraging unlabeled or sparsely labeled video data, these methods utilize temporal coherence and spatiotemporal correlations to extract meaningful supervision signals [22], [104], [111]. A representative example is Naive-Student [63], which introduces an iterative semisupervised learning strategy where pseudo-labels are generated for unlabeled data and refined with annotated data, bypassing the need for complex label propagation architectures.
Building on this foundation, DepthMix [64] innovatively incorporates unlabeled data into training by leveraging depth mixing techniques, combining depth supervision and distillation to maximize the utility of unlabeled samples. Additionally, SSLTM [65] utilizes an attention-based mechanism to effectively model complex spatiotemporal relationships, enhancing the integration of unlabeled data into the learning process and improving segmentation performance. Together, these approaches demonstrate the potential of semi- and weakly-supervised methods to balance annotation efficiency and segmentation accuracy, paving the way for scalable and robust VSS in real-world scenarios.
TABLE 2 Summary of essential characteristics for reviewed VIS methods.
# 3.2 Video Instance Segmentation
To tackle the challenge of simultaneously detecting, segmenting, and tracking instances in videos, the VIS framework, introduced by [62], was proposed. VIS integrates these tasks to enable a unified solution for instance-level video analysis. For VIS, the research has focused on three main directions, which frequently overlap to a considerable extent.
Tracking-based Methods. Tracking-based VIS methods have seen significant advancements in recent years, leading to enhanced accuracy and temporal consistency in VIS tasks [116], [117], [121], [123], [124], [128], [130]. One notable approach is MaskTrack R-CNN [62], a two-stage object detection and instance segmentation algorithm. In the first stage, it employs a Region Proposal Network (RPN) [7] to generate object candidate boxes. The second stage utilizes three parallel branches for object classification, bounding box regression, and instance segmentation, respectively, effectively achieving precise object detection and segmentation in dynamic scenes.
Building upon the concept of multi-object tracking, MultiObject Tracking and Segmentation (MOTS) [38] incorporates pixel-level segmentation capabilities into tracking. By generating segmentation masks for each detection box, MOTS enables the precise segmentation and tracking of individual objects, enhancing the robustness of multi-object tracking in complex environments. Besides, the TraDeS model [127] introduced a joint detection and tracking framework that leverages tracking cues to assist detection. By inferring tracking offsets through a cost volume, TraDeS improves both object detection and segmentation accuracy, facilitating better handling of occlusions and motion dynamics. The
StemSeg [115] adopts a proposal-free, single-stage framework that treats the video as a 3D spatiotemporal volume, allowing for temporally consistent instance segmentation without explicit tracking. Following a different design, IFC [122] proposes an efficient perclip architecture that processes frames with a CNN backbone, and exchanges temporal information through a two-stage interframe communication encoder, followed by a lightweight decoder with class and mask heads. More recently, the STC model [134] proposed a spatiotemporal consistency framework that integrates spatiotemporal features to enhance the accuracy and consistency of VIS. By modeling temporal dependencies alongside spatial features, STC ensures better continuity across video frames, making it particularly effective in capturing dynamic and evolving motion patterns in video sequences. Collectively, these methods highlight the progression of tracking-based detection approaches, which continue to push the boundaries of precision, consistency, and efficiency in dynamic video analysis.
Semi-/Weakly-supervised Methods. Semi-supervised and weakly-supervised methods have emerged as critical strategies in VIS, providing efficient solutions to mitigate the reliance on largescale labeled datasets while maintaining high segmentation accuracy. These approaches capitalize on the availability of unlabeled or sparsely labeled video data, leveraging temporal coherence and spatiotemporal relationships to generate meaningful supervisory signals. By integrating these weak supervision cues, semi-/weaklysupervised models can effectively learn to detect, segment, and track instances across video frames, even with limited annotations. A prominent semi-supervised approach is SemiTrack [120], a onestage method that facilitates instance tracking through training on both images and unlabeled video data. This model utilizes the inherent temporal consistency in video sequences to improve the learning process and enhance tracking performance.
In contrast, fIRN [125] presents a weakly-supervised instance segmentation method that harnesses motion and temporal consistency signals within the video to refine segmentation accuracy. By exploiting the spatiotemporal cues inherent in video data, fIRN addresses the challenge of achieving accurate instance segmentation with minimal supervision. Additionally, VideoCutLER [141] proposes a simple unsupervised VIS approach, which consists of three primary steps: first, performing unsupervised segmentation on individual frames; second, synthesizing the segmentation results into a video for training; and finally, training a video segmentation model on the synthesized data. This method offers a novel pathway for unsupervised learning, making it particularly useful when labeled data is scarce or unavailable. Furthermore, MOTSNet [118] introduces an innovative pipeline for automatic training data generation, simultaneously improving the existing MOTS methods. By automating the generation of training data, MOTSNet significantly reduces the need for manual annotation, thereby enabling the development of more efficient and scalable models. This contribution enhances the ability of MOTS to handle complex multi-object scenarios, offering improved performance in dynamic environments. Collectively, these methods showcase the increasing reliance on weak and semi-supervised learning paradigms, pushing the boundaries of VIS by achieving robust performance with limited supervision.
TABLE 3 Summary of essential characteristics for reviewed VPS methods.
Attention-based Methods. In recent years, attention-based architectures have redefined the landscape of VIS by leveraging attention mechanisms to cohesively integrate spatial and temporal cues [133], [135], [138]. VisTR [126] pioneers this paradigm by employing self-attention to aggregate features of the same instance across frames, subsequently applying a 3D convolutional head to predict mask sequences end to end. This unified formulation obviates the need for separate detection and tracking stages, yielding tightly coupled spatiotemporal representations, though at the expense of quadratic attention cost for long sequences. Building on this, VISOLO [131] introduces a grid-based representation enriched by a dual collaborative module: a memorymatching component that computes affinities between current and historical grid cells, and a temporal-aggregation unit that fuses past information to bolster frame-level classification and segmentation performance. In contrast, MinVIS posits that the learned query embeddings inherently capture temporal consistency, eliminating the necessity for explicit video-centric training. By simply linking instances via cosine similarity of queries, MinVIS [133] achieves comparable association accuracy with substantially reduced training complexity and accelerated inference.
Subsequent advances have sought to refine the balance between per-frame precision and long-range coherence. SeqFormer [137] advocates for decoupling temporal aggregation from spatial attention by employing standalone instance queries that independently interact with each frame’s features, thereby preserving single-frame fidelity while capturing time series dynamics. TeViT [139] further enhances this strategy by constructing a multi-scale feature pyramid via a transformer backbone, randomly initialized instance queries then attending to these scales to directly predict video instances, enabling flexible cross-scale temporal modeling. Beyond architectural innovations, GenVIS [140] adopts a novel multi-clip training regime, sampling diverse video segments per iteration to more faithfully emulate real-world temporal variability and improve generalization. CTVIS [142] introduces a contrastive learning framework that aligns training and inference through long-sequence sampling, a memory bank of historical embeddings, and momentum-averaged query updates, substantially reinforcing temporal stability and association robustness. Finally, DVIS [143] demonstrates that decomposing VIS into separate detection, segmentation, and association streams can both elevate accuracy and dramatically lower resource consumption, highlighting the value of task-specific specialization within an attention-driven ecosystem. Collectively, these methods underscore the potency of attention for dynamic scene understanding, charting a course toward ever more efficient and coherent VIS in increasingly complex and diverse real-world environments.
# 3.3 Video Panoptic Segmentation
VPS [148] is a more comprehensive video segmentation task that integrates the characteristics of VSS and VIS. The goal of VPS is to assign a semantic label to every pixel in a video while simultaneously distinguishing and tracking foreground object instances and providing semantic annotations for background regions. Existing approaches to VPS can generally be categorized into three main directions.
Query-based Methods. Query-based VPS approaches formulate segmentation and tracking as a query interaction problem, enabling unified, end-to-end solutions without reliance on handcrafted tracking mechanisms [150]. A prominent example, Video K-Net [152], extends the K-Net paradigm by introducing a set of learnable convolutional kernels that jointly represent semantic categories and object instances. These kernels dynamically interact with pixel-level features, allowing them to perform segmentation and implicitly maintain temporal consistency across frames. By learning such representations end-to-end, Video K-Net elegantly unifies semantic segmentation, instance segmentation, and tracking within a single framework. Building upon this query-centric perspective, Tube-Link [153] proposes a temporal linking strategy that segments videos into short clips, each processed by a Transformer to generate spatiotemporal tube masks. Cross-clip associations are then established through inter-query attention, capturing long-range dependencies without the need for external tracking networks. Complementing these, PolyphonicFormer [154] introduces a unified query learning framework that effectively harnesses the mutual reinforcement between panoptic segmentation and depth information. By leveraging the synergistic interplay among semantic segmentation, instance delineation, and depth cues, PolyphonicFormer facilitates a more holistic and robust understanding of complex scenes. Collectively, these methods exemplify the versatility and strength of query-based representations in addressing the multifaceted challenges of VPS.
TABLE 4 Summary of essential characteristics for reviewed VTS methods.
Depth-aware Methods. Depth-aware methods integrate the challenges of both depth estimation and semantic instance segmentation, addressing the complex task of reconstructing 3D point clouds from 2D image sequences while simultaneously providing both semantic and instance-level labels. One prominent approach, ViP-DeepLab [149], tackles the inverse projection problem in vision by effectively leveraging monocular depth estimation to infer 3D structures, thereby enhancing VPS performance. By combining these two sub-tasks, ViP-DeepLab generates a unified framework that not only recovers spatial depth information but also assigns accurate semantic and instance labels to each 3D point in the scene. Complementing this, PolyphonicFormer [154] introduces a novel query-based learning framework, which effectively harnesses the mutual enhancement between panoptic segmentation and depth information. This approach facilitates more robust scene understanding by exploiting the synergistic relationship between semantic segmentation, instance delineation, and depth cues. Through these advancements, both methods push the boundaries of depth-aware segmentation, offering a comprehensive solution for high-fidelity video scene analysis.
Dual-branch Methods. Dual-branch methods perform panoptic segmentation by decoupling the task into two parallel branches: one dedicated to semantic segmentation for stuff classes and the other focused on instance segmentation for thing classes. The final panoptic prediction is obtained by fusing the outputs from both branches in a unified representation. Panoptic-DeepLab [151] exemplifies this architecture by extending it to the VPS setting. Specifically, it reconstructs a wide $2 2 0 ^ { \circ }$ field of view through the stitching of images captured from five synchronized cameras. Using known camera parameters, the method projects 2D pixels into a shared 3D space, enabling segmentation to be performed within a unified geometric context. By integrating spatial cues across multiple viewpoints, Panoptic-DeepLab achieves consistent segmentation across the scene, effectively handling complex layouts and occlusions in wide-angle video environments.
# 3.4 Video Tracking & Segmentation
VTS is a critical task in computer vision that combines two essential processes: identifying and tracking objects over time, and segmenting these objects from the background. The goal of VTS is to detect and follow the movement of objects across frames while simultaneously segmenting these objects from the surrounding environment, providing precise pixel-wise annotations for both foreground and background regions. This task is pivotal in a range of applications, from autonomous driving to surveillance and action recognition. Existing approaches to VTS can generally be categorized into three main directions.
Point-based Methods. Traditional VTS methods struggle with distinguishing foreground and background features due to the limitations of convolutional receptive fields, which affects tracking accuracy. The method in [157] overcomes this by treating both the foreground and surrounding regions as 2D point clouds, learning from four data modalities—offset, color, category, and position—leading to more precise segmentation and tracking. Similarly, [165] propagates positive and negative points across video frames, refining segmentation masks through iterative interactions with the SAM model [167], discarding occluded points and adding new visible ones to maintain tracking accuracy. Lastly, [162] combines segmentation masks with bounding boxes by using Transformer layers with self-attention to pinpoint object edges, enabling precise localization and unified tracking. These methods, through their innovative use of points for feature extraction, tracking, and refinement, highlight how leveraging pointbased approaches can significantly enhance MOTS performance.
Two-stage Methods. Two-stage methods in VIS first generate region proposals and then perform object classification, segmentation, and association based on these proposals. This pipeline separates object localization from refinement tasks, allowing for more precise instance-level predictions and temporal tracking. A representative example is TrackR-CNN [38], which augments a ResNet-101 backbone with 3D convolutions to model spatiotemporal features. These features are processed by a Region Proposal Network (RPN), and the model further incorporates an association head that predicts embedding vectors for matching instances across frames via Euclidean distance. Bertasius et al. [116] extend the Mask R-CNN framework by introducing a mask propagation branch that learns instance-specific features and temporally propagates them, enabling more accurate segmentation across time. Bras´o and Leal-Taix´e [159] propose a graph-based formulation where nodes and edges, representing object hypotheses and their associations, are updated through neural message passing under flow conservation constraints. Their framework classifies association edges and predicts instance masks using CNN-based embeddings. More recently, SAM-Track [164] introduces a unified tracking framework supporting both interactive and automatic modes. In interactive mode, the model leverages SAM [167] and
Grounding DINO [168] to enhance reference frame segmentation and semantics. In automatic mode, the system autonomously identifies and segments new instances, reducing identity drift and improving robustness in unconstrained scenarios.
Efficient Methods. The ASB method [158] addresses local tracking issues and the combinatorial explosion of detection space in classical global methods by formulating the problem using assignment-based optimization. It calculates the top-k best detection assignments between frames using the Hungarian-Murty algorithm and a custom cost function, which combines IoU, appearance, and distance features. Dynamic programming is then applied to find the global optimal solution efficiently. Additionally, the method jointly learns tracking and deep network parameters and improves long-distance assignment by constructing a cost matrix to recover links across multiple interrupted detections. [39] introduces DEVA, which decouples segmentation and tracking tasks. It uses task-specific image-level segmentation models for single-frame segmentation and a generic bidirectional temporal propagation model to generalize across different tasks, enabling the propagation of segmentation results across video frames.
# 3.5 Open-Vocabulary Video Segmentation
OVVS is a cutting-edge task in computer vision that combines semantic segmentation, temporal tracking, and language-based recognition to segment and track objects in videos using natural language prompts. This approach surpasses traditional fixedcategory methods by leveraging vision-language models to generalize to novel object classes, ensuring precise pixel-level segmentation across frames while maintaining consistent tracking despite challenges like occlusion and rapid motion. With applications spanning video editing, augmented reality, and autonomous driving, current methods rely on large-scale pre-training and advanced fusion architectures, although handling diverse natural language expressions and dynamic real-world conditions remains a significant challenge. In this context, the following papers introduce some innovative frameworks that push the boundaries of OVVS and offer compelling solutions to these ongoing challenges.
OVFormer [144] addresses the domain gap between visionlanguage model features and instance queries through unified embedding alignment. Exploiting video-level training and semionline inference, OVFormer harnesses temporal consistency to deliver efficient and accurate open-vocabulary VIS. For this task, ${ \mathrm { O V } } 2 { \mathrm { S e g } } +$ [146] employs a universal proposal generator to detect objects without category bias, a memory-induced tracking module that refines instance features across frames for stable association, and an open-vocabulary classifier by leveraging pre-trained visuallanguage models. CLIP-VIS [169] proposes an encoder-decoder framework that adapts the frozen CLIP model for open-vocabulary VIS by introducing class-agnostic mask generation, temporal topK enhanced query matching, and weighted open-vocabulary classification. Meanwhile, ODISE [155] integrates pre-trained text-toimage diffusion models with a discriminative network. Through techniques such as implicit caption generation and diffusion-based mask generation and classification, ODISE achieves comprehensive open-vocabulary VPS. In parallel, OV2VSS [170] presents a spatiotemporal fusion module to capture inter-frame relationships, complemented by a random frame augmentation module that enhances semantic understanding. A dedicated video-text encoding module reinforces textual information processing, culminating in effective open-vocabulary VSS.
# 4 DATASETS AND METRICS
In this section, we provide a systematic overview of the datasets $\ S$ and evaluation metrics $\ S$ commonly used in VSP. By analyzing these fundamental resources in depth, we aim to establish a solid foundation for subsequent methodological comparisons and assessments, while highlighting the challenges and opportunities in data acquisition and performance evaluation within the field.
# 4.1 Video Scene Parsing Datasets
# 4.1.1 VSS Datasets
In the dynamic and rapidly evolving field of VSS, the importance of robust and diverse datasets cannot be overstated. These datasets facilitate groundbreaking research and directed advancements in algorithms and applications. Below, we provide a comprehensive overview of several key datasets that have greatly influenced the VSS landscape, each offering distinct features and annotations tailored for specific research areas.
CamVid [93] is the first video dataset to offer semantic labels for object categories, pioneering a critical resource for studies in autonomous driving. Captured from the perspective of a driving vehicle, this dataset provides a total of ground truth semantic labels, encompassing over 10 minutes of high-quality footage at $3 0 \mathrm { { H z } }$ across five continuous videos. Semantic label images are provided at 1Hz for four of the videos, and at $1 5 \mathrm { H z }$ for one video, resulting in a total of 701 labeled frames. The dataset is carefully partitioned into training, validation, and test sets, with frame distributions of 367, 101, and 233, respectively.
NYUDv2 [99] is a notable dataset for research focused on indoor scenes, comprising video sequences captured by the Microsoft Kinect camera. Designed primarily for image description research, it includes 1,449 densely annotated aligned RGB and depth images covering 464 novel scenes from three cities, as well as 407,024 unlabeled frames. Each object within the dataset is assigned both class and instance identifiers, thereby enabling a thorough examination of indoor environments and facilitating the development of advanced segmentation techniques in confined settings.
Cityscapes [94] is a large and diverse dataset of stereo video sequences recorded on the streets of 50 different cities. It includes 5,000 images with high-quality pixel-level annotations and an additional 20,000 images with coarse annotations to support various methods that leverage large amounts of weakly labeled data. Cityscapes defines 30 visual classes for annotation, categorized into eight comprehensive groups: flat, construction, nature, vehicle, sky, object, human, and void.
KITTI [96] started as a multimodal dataset, comprising calibrated and synchronized images, radar scans, high-precision GPS information, and IMU acceleration data, which are essential for research towards autonomous vehicles. In 2018, the dataset was updated to include semantic segmentation ground truth, aligning the data format and metrics to be consistent with those of Cityscapes [94], although the image resolution is different, set at $3 7 5 \times 1 2 4 2$ . The dataset features 200 training images and 200 test images, establishing itself as a benchmark in autonomous driving and related research domains.
ACDC [110] is a valuable resource for understanding performance in adverse environmental conditions. It includes 8,012 images, half of which (4,006) are evenly distributed across four common adverse conditions: fog, night, rain, and snow. Each image under these adverse conditions comes with high-quality pixel-level panoramic annotations and a corresponding normalcondition image of the same scene. Additionally, a binary mask is provided to distinguish between clear and uncertain semantic content within the image regions. A total of 1,503 corresponding normal-condition images have panoramic annotations, bringing the total number of annotated images to 5,509.
TABLE 5 Statistics of video segmentation datasets.
VSPW [101] marks a significant advancement as the first largescale VSS dataset covering a multitude of diverse scenes. It includes 3,536 videos and 251,632 frames of semantic segmentation images, covering 124 semantic categories, significantly surpassing previous VSS datasets in annotation quantity. Unlike earlier datasets that primarily focused on street scenes, VSPW spans over 200 distinct video scenes, greatly enhancing the diversity of the dataset. This dataset also provides annotations at a remarkable frame rate of 15 frames per second, ensuring an extensive and densely annotated resource, with over $9 6 \%$ of the video data available in resolutions ranging from 720p to 4K.
MVSeg [105] contributes valuable insights into multi-spectral image analysis by featuring 738 calibrated RGB and thermal video pairs along with 3,545 fine-grained pixel-level semantic annotations across 26 categories. This dataset captures a variety of challenging urban scenes during both day and night, thereby bridging critical gaps in the understanding and application of multi-spectral imaging techniques.
# 4.1.2 VIS Datasets
• YouTube-VIS [62] is extended from YouTube-VOS, featuring enhanced mask annotations. It consists of 2,883 high-resolution YouTube videos, including 2,238 training videos, 302 validation videos, and 343 test videos. The dataset features a label set of 40 object categories and offers 131k high-quality instance masks, making it a valuable resource for research in this domain.
KITTI MOTS [38] offers a comprehensive dataset annotated from 21 videos in the KITTI training set. With a total of 8,008 frames across 21 scenes, it provides annotations for 26,899 cars and 11,420 pedestrians. The dataset is organized into two main subsets: the training set comprises 12 videos with 5,027 frames, annotated with 18,831 cars, 1,509 manually annotated cars, 8,073 pedestrians, and 1,312 manually annotated pedestrians. The test set consists of 9 videos with 2,981 frames, containing 8,068 cars, 593 manually annotated cars, 3,347 pedestrians, and 647 manually annotated pedestrians.
MOTSChallenge [38] is derived by selecting annotations from 4 out of the 7 videos in the MOTChallenge 2017 dataset. This dataset consists of 2,862 frames annotated with 26,894 pedestrians, out of which 3,930 are manually annotated, presenting a robust benchmark to evaluate tracking algorithms.
• OVIS [129] consists of 901 videos averaging 12.77 seconds in length, covering 25 distinct object categories. This dataset features $2 9 6 \mathrm { k }$ masks and 5,223 unique instances, thereby facilitating extensive research into occlusion handling.
# 4.1.3 VPS Datasets
Cityscapes-VPS [151] carefully annotated frames, distributed into 2,400 frames allocated for training, 300 frames designated for validation, and 300 frames reserved for testing. Each 30- frame video sequence includes annotations for 6 frames, separated by a 5-frame interval, facilitating segmentation tasks across 19 distinct semantic classes.
VIPER-VPS [151] leverages the synthetic VIPER dataset extracted from the GTA-V game engine. It includes 254K frames of driving scenes centered around the protagonist, with pixellevel semantic and instance segmentation annotations for 10 thing classes and 13 stuff classes at a resolution of $1 0 8 0 \times 1 9 2 0$ .
KITTI-STEP [171] consists of 50 videos, totaling 18,181 frames. It includes annotations for 2 thing classes and 17 stuff classes, with a total of 126,529 annotated masks.
MOTChallenge-STEP [171] consists of 4 videos, totaling 2,075 frames. The dataset includes annotations for 1 thing class and 6 stuff classes, with a total of 17,232 annotated masks.
VIPSeg [48] comprises 3,536 videos with a total of 84,750 frames. It covers 232 scenes and includes 124 categories, comprising 58 thing classes and 66 stuff classes. The dataset includes a total of 926,213 instance masks.
WOD: PVPS [151] is derived from WOD with further processing of annotations, providing a dataset with consistent panoramic segmentation annotations across multiple cameras and over time. It contains a total of 2,860 videos and 100,000 frames, including 8 tracking classes and 28 semantic classes.
# 4.1.4 VTS Datasets
APOLLO MOTS [157] is built on the ApolloScape dataset, which contains 22,480 frames with VIS labels. Focused on cars due to the lower number of pedestrians, it provides a challenging MOTS dataset for both 2D and 3D tracking. The dataset is split into training, validation, and testing sets $( 3 0 \%$ , $20 \%$ , $5 0 \%$ ), ensuring consistent tracking difficulty. APOLLO MOTS features twice as many tracks and car annotations as KITTI MOTS, with an average car density of 5.65 cars per frame, much higher than KITTI MOTS. It also contains 2.5 times more crowded cars, making tracking more complex.
HiEve [161] is a large-scale dataset designed for human-centric video analysis in complex events. It offers comprehensive annotations for human motions, poses, and actions, with a focus on crowds and complex scenarios. The dataset includes over 1 million pose annotations, more than 56,000 action instances in complex events, and one of the largest collections of longduration trajectories, averaging over 480 frames per trajectory. DAVIS16 [91] consists of 50 sequences in total, split into 30 training and 20 validation sequences. The total number of frames across all sequences is 3455, with an average of 69.1 frames per sequence. Each sequence contains one object on average, and the dataset covers 50 objects across all sequences. • DAVIS17 [92] features a larger, more complex dataset with 150 sequences, 10,459 annotated frames, and 376 objects. It includes multiple objects per scene, with increased complexity due to more distractors, smaller objects, occlusions, and fast motion.
The detailed information of the datasets used in this study is systematically summarized and presented in Tab. 5.
# 4.2 Metrics
In this chapter, we present a comprehensive review of several widely adopted metrics alongside their corresponding computational methodologies. Through a rigorous examination of both their theoretical foundations and practical implementations, we aim to establish a robust framework that not only enhances our understanding but also underpins the subsequent analyses.
Intersection over Union (IoU). IoU, also referred to as the Jaccard Index, is a fundamental metric used to evaluate segmentation performance. It quantifies the degree of overlap between the predicted segmentation mask and the corresponding ground truth. The IoU is mathematically defined as:
$$
\mathrm { I o U } = { \frac { \mathrm { T P } } { \mathrm { T P } + \mathrm { F P } + \mathrm { F N } } } ,
$$
where TP, FP, and FN are the true positives, false positives, and false negatives, respectively. An IoU value closer to one reflects a greater overlap between the predicted mask and the ground truth, indicating improved segmentation accuracy. This metric is particularly valuable as it penalizes both over-segmentation and under-segmentation errors, thereby providing a comprehensive evaluation of model performance.
Mean Intersection over Union (mIoU). The metric mIoU is a popular extension of IoU that calculates the average IoU across all $C$ classes:
$$
\mathrm { \ m I o U } = \frac { 1 } { C } \sum _ { i = 1 } ^ { C } \mathrm { I o U } _ { i } .
$$
A higher mIoU indicates improved accuracy and consistency of overlaps with the ground truth across all classes. By averaging class-wise IoU metrics, mIoU effectively balances performance among different categories, providing a holistic measure of segmentation quality that is essential for multi-class evaluations.
Video Consistency (VC). VC assesses the temporal consistency of segmentation models across consecutive frames. Let $C$ denote the total number of frames in a video and $n$ be the number of consecutive frames considered for consistency evaluation. The ground truth for the $i$ -th frame is represented as $S _ { i }$ and the predicted label as $S _ { i } ^ { \prime }$ . The VC metric over $n$ consecutive frames, denoted as $\mathrm { V C } _ { n }$ , is calculated as follows:
$$
\mathrm { V C } _ { n } = \frac { 1 } { C - n + 1 } \sum _ { i = 1 } ^ { C - n + 1 } \frac { ( \bigcap _ { j = 0 } ^ { n - 1 } S _ { i + j } ) \cap ( \bigcap _ { j = 0 } ^ { n - 1 } S _ { i - j } ) } { \bigcap _ { j = 0 } ^ { n - 1 } S _ { i + j } }
$$
To derive a comprehensive metric, the mean VC $( \mathrm { m V C } _ { n , } )$ ) across all videos in the dataset can be computed as:
$$
\mathrm { m } \mathrm { V C } _ { n } = \frac { 1 } { N } \sum _ { k = 1 } ^ { N } \mathrm { V C } _ { n } ^ { ( k ) } ,
$$
where $\mathsf { V C } _ { n } ^ { ( k ) }$ is the $\mathsf { V C } _ { n }$ value for the $k$ -th video and $N$ is the total number of videos analyzed. A higher $\mathrm { V C } _ { n }$ (and thus $\mathrm { m V C } _ { n }$ ) indicates stronger temporal consistency in the segmentation predictions across consecutive frames, reflecting a model’s ability to maintain coherent semantic labels over time.
Frames per Second (FPS). FPS is another critical performance indicator that defines the processing rate of frames. It is derived as follows:
$$
\mathrm { c o s t T i m e } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } t _ { i } ,
$$
where $t _ { i }$ denotes the time to process the $i$ -th frame, and $N$ is the total number of frames measured. The FPS is given by:
$$
\mathrm { F P S } = { \frac { 1 } { \mathrm { c o s t T i m e } } } .
$$
A higher FPS value indicates faster overall performance, as more frames can be processed per second, which is vital for applications requiring real-time performance.
Max Latency. Max Latency captures the worst-case processing time by identifying the slowest frame in the sequence:
$$
\mathrm { M a x \ L a t e n c y } = \operatorname* { m a x } _ { 1 \leq i \leq N } \bigl \{ t _ { i } \bigr \} .
$$
This metric is especially relevant for real-time applications where even a single large delay can impact the overall user experience.
Average Precision (AP). AP extends the image-based AP metric into the temporal dimension, providing a comprehensive measure for video evaluations. The AP for video is mathematically defined as:
$$
\mathrm { A P } = { \frac { 1 } { | T | } } \sum _ { t \in T } \mathrm { A P } _ { t } ,
$$
where $\mathrm { A P } _ { t }$ denotes the average precision at a specific IoU threshold $t$ , and $| T |$ represents the total number of IoU thresholds considered.
Average Recall (AR). AR measures the maximum recall rate achievable when a fixed number of segmented instances are provided per video. It is defined as:
$$
\mathrm { A R } = \frac { 1 } { \vert V \vert } \sum _ { v \in V } \frac { \mathrm { T P } _ { v } ( K ) } { \mathbf { G } \mathrm { T } _ { v } } ,
$$
where $\mathrm { T P } _ { v } ( K )$ denotes the number of true positive predictions among the top $K$ segmented instances for video $v$ , $\mathbf { G } \mathrm { T } _ { v }$ represents the total number of ground truth segments in that video, and $| V |$ is the total number of videos evaluated. A higher AR value signifies that the model effectively captures a larger proportion of the true instances, even when the prediction scope is limited.
Segmentation and Tracking Quality (STQ) : STQ synthesizes two sub-metrics: Association Quality (AQ) and Segmentation Quality (SQ). The Association Quality is defined as:
$$
\mathrm { A Q } = \frac { 1 } { | G | } \sum _ { z _ { g } \in G } \frac { 1 } { | g i d ( z _ { g } ) | } \sum _ { | z _ { f } \cap z _ { g } | \neq \emptyset } \mathrm { T P A } ( z _ { f } , z _ { g } )
$$
where $G$ represents the set of ground truth instances, $g i d ( z _ { g } )$ denotes the set of predicted instances associated with the ground truth instance $z _ { g }$ , $\mathrm { T P A } ( z _ { f } , z _ { g } )$ measures the true positive association between a predicted instance $z _ { f }$ and $z _ { g }$ , and $\mathrm { I o U } _ { i d } \big ( z _ { f } , z _ { g } \big )$ is the identity-aware IoU. The Segmentation Quality is given by:
$$
\mathrm { S Q } = \frac { 1 } { | C | } \sum _ { c \in C } \frac { | f _ { \mathrm { s e m } } ( c ) \cap g _ { \mathrm { s e m } } ( c ) | } { | f _ { \mathrm { s e m } } ( c ) \cup g _ { \mathrm { s e m } } ( c ) | } ,
$$
where $C$ denotes the set of classes, and $f _ { \mathrm { s e m } } ( c )$ and $g _ { \mathrm { s e m } } ( c )$ are the predicted and ground truth segmentation masks for class $c _ { i }$ , respectively. Finally, STQ is obtained by combining AQ and SQ as:
$$
\mathrm { S T Q } = \sqrt { \mathrm { A Q } \times \mathrm { S Q } } ,
$$
thus integrating both association and segmentation performance into a singular evaluation metric. This metric provides valuable insights into the efficacy of models in VIS challenges.
Multi-Object Tracking and Segmentation Precision (MOTSP) It quantifies the precision of segmentation masks within a multiobject tracking context. It is computed as:
$$
\mathrm { M O T S P } = \frac { \sum _ { t } \sum _ { ( i , j ) \in \mathcal { M } _ { t } } \mathrm { I o U } ( i , j ) } { \sum _ { t } | \mathcal { M } _ { t } | } ,
$$
where $\mathcal { M } _ { t }$ denotes the set of matched pairs between predicted and ground truth masks in frame $t$ , and $\mathrm { I o U } ( i , j )$ is the IoU between the $i$ -th predicted mask and the $j$ -th ground truth mask. A higher MOTSP value indicates greater overall segmentation precision in the multi-object tracking scenario.
Identity Switches (IDS). IDS measures the consistency of identity assignment in multi-object tracking and segmentation. It quantifies the number of times a ground truth object is matched to a predicted mask with a different identity than in the previous frame. Specifically, for a ground truth mask $m \in M$ , if it is matched in the current frame and also has a matched predecessor mask in the previous frame $( i . e . , p r e d ( m ) \neq \emptyset )$ , and if the identity of these two matched predictions differs, then it is counted as an identity switch. Formally, IDS is defined as:
$$
\begin{array} { r } { \mathrm { I D S } = \{ m \in M \mid c ^ { - 1 } ( m ) \neq \emptyset \land p r e d ( m ) \neq \emptyset \qquad } \\ { \land i d _ { c ^ { - 1 } ( m ) } \neq i d _ { c ^ { - 1 } ( p r e d ( m ) ) } \} . } \end{array}
$$
Here, $c ^ { - 1 } ( m )$ denotes the predicted mask matched to the ground truth mask $m$ ; pred $\mathbf { \Psi } _ { \mathrm { ' } } ^ { \prime } ( m )$ is the matched predecessor of $m$ from previous frames; and $i d _ { c ^ { - 1 } ( m ) }$ , $i d _ { c ^ { - 1 } ( p r e d ( m ) ) }$ represent the tracking IDs of the matched predicted masks in the current and predecessor frames, respectively. A lower IDS value indicates more stable and consistent identity tracking performance.
Multi-Object Tracking and Segmentation Accuracy (MOTSA). It assesses the overall accuracy of tracking and segmentation, taking into account correct matches, erroneous detections, and identity (ID) switches. It is defined as:
$$
\mathrm { M O T S A } = \frac { \sum _ { t } \left( \left| T P _ { t } \right| - \left| F P _ { t } \right| - \left| I D S _ { t } \right| \right) } { \sum _ { t } \left| G T _ { t } \right| } ,
$$
where $T P _ { t }$ denotes the number of true positives (correct matches) in frame $t$ ; $F P _ { t }$ represents the false positives (erroneous detections) in frame $t$ ; $I D S _ { t }$ indicates the number of $\mathrm { I D }$ switches in frame $t$ ; and $G T _ { t }$ is the total number of ground truth objects in frame $t$ .
Soft Multi-Object Tracking and Segmentation Accuracy (sMOTSA). The metric sMOTSA is a softened version of MOTSA that incorporates IoU values to weight true positives, thereby providing a more refined assessment of segmentation quality. It is defined as:
$$
\mathsf { s M O T S A } = \frac { \sum _ { t } \left( \sum _ { \left( i , j \right) \in \mathcal { M } _ { t } } \mathrm { I o U } ( i , j ) - \left| \mathrm { F P } _ { t } \right| - \left| \mathrm { I D S } _ { t } \right| \right) } { \sum _ { t } \left| \mathrm { G T } _ { t } \right| } ,
$$
where $\mathcal { M } _ { t }$ denotes the set of matched pairs between predicted and ground truth masks in frame $t$ ; $\mathrm { I o U } ( i , j )$ represents the IoU of the matched pair $( i , j ) ; \operatorname { F P } _ { t }$ is the number of false positives in frame $t$ ; $\mathrm { I D S } _ { t }$ indicates the number of ID switches in frame $t$ ; and $\mathrm { G T } _ { t }$ is the total number of ground-truth objects in frame $t$ . This weighting mechanism enables the sMOTSA metric to more accurately reflect the quality of the segmentation results.
Video Panoptic Quality (VPQ). It extends the concept of panoptic quality into the video domain by jointly evaluating segmentation and tracking performance across video segments. The computation process involves three steps. First, a video segment is selected. Next, for each video segment, the IoU is calculated for every pair of ground truth and predicted trajectories. True positives (TP), false positives (FP), and false negatives (FN) are determined based on a predefined IoU threshold (where TP is defined for matches with $\mathrm { I o U } > 0 . 5$ , FP for predictions without corresponding ground truth, and FN for ground truths that are missed by predictions). Finally, VPQ is defined as:
$$
\mathrm { V P Q } _ { k } = \frac { 1 } { N _ { c l a s s e s } } \sum _ { c } \frac { \sum _ { ( u , \hat { u } ) \in T _ { P } ^ { c } } \mathrm { I o U } ( u , \hat { u } ) } { | T _ { P } ^ { c } | + \frac { 1 } { 2 } | F _ { P } ^ { c } | + \frac { 1 } { 2 } | F _ { N } ^ { c } | } ,
$$
where $N _ { c l a s s e s }$ denotes the total number of classes, $T _ { P } ^ { c }$ is the set of true positives for class $c$ , $F _ { P } ^ { c }$ represents the false positives, and $F _ { N } ^ { c }$ indicates the false negatives for class $c$ . Notably, when $k = 0$ , VPQ reduces to the standard Panoptic Quality (PQ).
TABLE 6 Quantitative VSS results on the Cityscapes val set [94] in terms of mIoUclass and FPS.
# 5 PERFORMANCE COMPARISON
In this section, we present a comprehensive performance comparison of the previously discussed VSP methods, with the results succinctly summarized in a table. For clarity and brevity, we focus exclusively on those approaches—namely VSS, VIS, and VPS—that are underpinned by rigorous and standardized performance evaluations.
# 5.1 VSS Benchmark
# 5.1.1 Evaluation Metrics
The mIoU metric is the most widely adopted evaluation metric for VSS. Among existing benchmarks, Cityscapes [94] remains the most commonly utilized dataset for VSS. It comprises 19 classes grouped into 8 high-level categories, where categories denote broader semantic concepts (e.g., vehicles), and classes refer to more fine-grained distinctions (e.g., bicycle, car). In our evaluation, we use per-class IoU $\mathrm { ( m I o U _ { c l a s s } }$ , hereafter referred to as IoU) to assess segmentation accuracy, while FPS and maximum latency are employed to measure inference efficiency.
TABLE 7 Benchmark results for VSS on the VSPW validation dataset [101]. The results are with mIoU and mVC metrics. Methods in gray use open-vocabulary supervision.
# 5.1.2 Results
Tab. 6 summarizes the results of sixteen VSS approaches on Cityscapes [94] val set. VPSeg [109] achieves the highest segmentation accuracy with an mIoU of 82.5, while BLO [107] obtains the best inference speed with 30.8 FPS. Moreover, we conducted experiments on the VSPW [101] dataset using several representative VSS methods; the experimental results are summarized in Tab. 7. TubeFormer [172] achieves the best performance on all mIoU (63.2) and $\mathrm { \ m V C _ { 8 } }$ (92.1). TV3S achieves the best performance on $\mathrm { \ m V { C _ { 8 } } }$ .
# 5.2 VIS Benchmark
# 5.2.1 Evaluation Metrics
For VIS performance evaluation, precision and recall metrics are mostly employed. Specifically, we use the metrics mentioned in [62], which are average precision (AP) and average recall (AR) based on a spatiotemporal IoU metric. CTVIS [142] achieves the best performance in terms of AP50 (78.2), AP75 (59.1), AR1 (51.9), AR10 (63.2), and overall AP (55.1), setting the new stateof-the-art on the YouTube-VIS validation set.
TABLE 8 Evaluation of VIS methods on the YouTube-VIS validation set [62] in terms of AP and AR metrics. Methods in gray use open-vocabulary supervision.
# 5.2.2 Results
Tab. 8 summarizes the results of twenty-one VIS approaches on YouTube-VIS-2019 [62] val set.
# 5.3 VPS Benchmark
# 5.3.1 Evaluation Metrics
For VPS tasks, the most commonly employed metrics are VPS and STQ. We have compiled the performance of mainstream VPS models on three leading datasets—Cityscapes-VPS, KITTI-STEP, and VIPSeg.
# 5.3.2 Results
Tab. 9 summarizes the results of nine VIS approaches on Cityscapes-VPS, KITTI-STEP, and VIPSeg. PolyphonicFormer [154] achieves the best results on Cityscapes-VPS. Video KNet [152] achieves the best results on the KITTI-STEP dataset. TubeLink [153] achieves the best results on the VIPSeg dataset.
TABLE 9 Benchmark results on VPS validation datasets. The results are evaluated with VPQ and STQ metrics.
TABLE 10 Evaluation of VIS methods on the KITTI-MOTS validation set [38] using sMOTSA, MOTSA, and IDS metrics.
# 5.4 VTS Benchmark
# 5.4.1 Evaluation Metrics
For VTS tasks, the most commonly employed metrics are sMOTSA, MOTSA and IDS. We have compiled the performance of mainstream VTS models on the KITTI-MOTS validation set.
# 5.4.2 Results
Tab. 10 summarizes the results of six VIS methods on the KITTIMOTS validation set using sMOTSA, MOTSA, and IDS for both cars and pedestrians. PointTrack [157] achieves the best results across all metrics for both object categories, with the highest sMOTSA and MOTSA scores and the lowest number of identity switches (IDS).
# 5.5 OVVS Benchmark
We present the results for open-vocabulary VSS in Tab. 7 and for open-vocabulary VIS in Tab. 8. As can be observed, OVVS still has a long way to go to catch up with fully supervised VSP methods. Nevertheless, its ability to segment new categories opens the door to numerous practical applications. With the continued development of large multimodal foundation models, significant improvements in OVVS accuracy are anticipated.
# 6 FUTURE DIRECTIONS
Open-World Video Scene Parsing. Real-world environments are inherently dynamic, unstructured, and open-ended, continuously presenting novel objects, scenes, and events that defy the constraints of any predefined semantic taxonomy. This openworld nature poses a formidable challenge to VSP systems, which traditionally rely on a closed-world assumption with a fixed set of annotated categories. Consequently, state-of-the-art VSP models, despite their impressive performance in benchmark settings, often exhibit brittle generalization, semantic rigidity, and a tendency to misclassify or ignore unseen categories when deployed in unconstrained, real-world scenarios. To address these limitations, a growing body of research [177]–[183] has begun to tackle the open-world VSP problem, aiming to endow models with the ability to recognize known categories while simultaneously discovering, segmenting, and adapting to novel or evolving classes over time. These methods leverage a range of strategies, including open-set recognition, continual learning, prompt-based adaptation, and foundation models trained on large-scale web data. Encouragingly, these efforts are not only improving the robustness and adaptability of VSP systems but also establishing a solid foundation for building models capable of generalizing to novel categories and sustaining performance in open-world, real-world environments.
Unified Video Scene Parsing. Most video segmentation tasks face a common set of challenges, including occlusion, deformation, and the handling of long-term dependencies. Many approaches converge on similar solution frameworks to address these issues. One particularly promising research direction involves unifying multiple video segmentation tasks within a single model that operates across diverse datasets, thereby achieving robust and generalizable segmentation performance under various conditions. Recent investigations [184]–[191] have increasingly focused on the development of unified architectures, which hold significant practical value—especially in applications such as robotics and autonomous driving.
Multimodal Fusion for Segmentation. As video understanding increasingly requires precise scene decomposition, integrating heterogeneous modalities—such as RGB, depth, text, motion, and audio—has become a key strategy for enhancing segmentation accuracy and robustness. Multi-modal fusion methods leverage complementary signals to resolve ambiguous boundaries, suppress modality-specific noise, and recover fine-grained structures often missed by unimodal approaches. Recent advances span crossmodal attention and unified encoder–decoder architectures that align and aggregate features across pixel and region levels, enabling robust instance and foreground–background separation in dynamic scenes [192]–[199]. In domains such as autonomous driving, human–robot interaction, and video editing, such strategies improve semantic coherence and spatial precision. Future research should focus on scalable, modality-adaptive fusion frameworks and self-supervised objectives that promote cross-modal consistency under real-world constraints.
Visual Reasoning for Segmentation. An emerging direction in video segmentation is to integrate explicit visual reasoning, enabling models to capture temporal consistency, infer occluded objects, and interpret inter-object interactions beyond raw pixel cues. By incorporating reasoning modules or leveraging vision–language models, segmentation systems can address challenges like occlusion, complex motion, and ambiguous boundaries through richer scene understanding. This paradigm shift supports downstream tasks such as action recognition and predictive scene parsing by grounding masks in relational context. Recent work [200]–[203] explores approaches including spatiotemporal tokens, graph-based modules, and physics-inspired predictors, aiming to bridge low-level perception with high-level video understanding.
Generative Segmentation. With the advancement of generative models, a remarkable ability to capture high-resolution features within images has emerged. Recent studies [145], [204]–[207] have begun to exploit the inherent fine-grained representations of generative models to address image segmentation challenges. By leveraging the multi-scale and high-resolution features embedded in these models, this approach not only enables a more precise recovery of local structural details in complex scenes but also offers an effective remedy for issues such as occlusion and blurred boundaries, providing a fresh perspective to conventional segmentation strategies. Currently, this direction is rapidly gaining attention as a frontier research hotspot in the field of image segmentation, with promising applications in areas such as medical image diagnosis, autonomous driving, and remote sensing analysis. It underscores the substantial potential and practical value of generative models in advancing segmentation tasks.
Efficient Video Understanding. With the surge of video data and growing demand for real-time scene analysis, efficient VSP has become a key research focus. Recent efforts [208]–[213] explore lightweight architectures, temporal fusion, and multiscale feature extraction to enhance spatiotemporal modeling while maintaining low latency. By leveraging motion estimation and multi-frame cues, these methods better capture rapid transitions and subtle dynamics, addressing challenges like occlusion, blur, and background clutter. Efficient VSP holds promise for applications in autonomous driving, surveillance, virtual reality, and interactive media, offering a path toward scalable, real-time video understanding.
Large Language Model-based Segmentation. With the rise of large-scale pre-trained and foundation models, segmentation methods built upon these architectures have achieved significant breakthroughs. Recent works [214]–[221] exploit their rich contextual understanding and fine-grained representations to enable accurate target delineation in complex scenes. Through multi-scale feature fusion and deep semantic modeling, these methods exhibit strong robustness to occlusion and boundary ambiguity, while improving generalization in few-shot and cross-domain settings. Such approaches show great potential in domains like medical imaging, remote sensing, and autonomous driving. | Video Scene Parsing (VSP) has emerged as a cornerstone in computer vision,
facilitating the simultaneous segmentation, recognition, and tracking of
diverse visual entities in dynamic scenes. In this survey, we present a
holistic review of recent advances in VSP, covering a wide array of vision
tasks, including Video Semantic Segmentation (VSS), Video Instance Segmentation
(VIS), Video Panoptic Segmentation (VPS), as well as Video Tracking and
Segmentation (VTS), and Open-Vocabulary Video Segmentation (OVVS). We
systematically analyze the evolution from traditional hand-crafted features to
modern deep learning paradigms -- spanning from fully convolutional networks to
the latest transformer-based architectures -- and assess their effectiveness in
capturing both local and global temporal contexts. Furthermore, our review
critically discusses the technical challenges, ranging from maintaining
temporal consistency to handling complex scene dynamics, and offers a
comprehensive comparative study of datasets and evaluation metrics that have
shaped current benchmarking standards. By distilling the key contributions and
shortcomings of state-of-the-art methodologies, this survey highlights emerging
trends and prospective research directions that promise to further elevate the
robustness and adaptability of VSP in real-world applications. | [
"cs.CV"
] |
# 1 Introduction
architectural choices can otherwise influence the interpretation of experimental data.
The Thermodynamic Kolmogorov-Arnold Model (T-KAM) provides a solution to the problems posed by arbitrarily deep, uninformed neural networks by focusing on finite sums of continuous functions at the core of generative models.
Based on the latent space energy-based prior model introduced by Pang et al. [2020], we use the Kolmogorov–Arnold Representation Theorem, (KART by Givental et al. [2009]), to structure Maximum Likelihood Estimation (MLE) for dense Kolmogorov–Arnold Networks, (KANs presented by Liu et al. [2024]), where both the depth and width of layers are precisely determined by the data and latent dimensionalities.
While T-KAM can be extended to open-ended architectures, our initial focus adheres to KART in order to introduce structural bias and inform scaling alongside formulaic laws, such as those proposed by Kaplan et al. [2020]. This standardization may also benefit scientific modelling, where
Adherence to KART requires the use of KANs, which are well-suited to the recent hardware advances introduced by Zetta [2024]. This might support the broader adoption of T-KAM, enabling accesible design, principled scaling, and reduced dependence on automated tuning or heuristics.
Moreover, by expressing the lower-dimensional latent space with single-variable functions, our method improves latent space interpretability for domain experts who may not be well-versed in machine learning.
Firstly, T-KAM embodies an empirical Bayes approach, where latent priors are initialized from a reference distribution and updated throughout training. The latent distribution can follow either a conventional EBM or an exponential tilting operation, where each latent feature is initialized from a univariate reference prior. The former offers flexibility, while the latter imposes useful structure. We argue that careful selection of reference priors can improve training efficiency by better aligning the prior with the posterior before training commences, (Fig. 1a).
Additionally, we argue that the function-based nature of KART enables inductive biases to be leveraged by the model, (Fig. 1b). The flexibility in selecting basis functions or imposing functional constraints during training remains a compelling direction within the KAN research community. For instance, Fourier bases can introduce periodic biases, while Wavelet bases might be suited for spatial or temporal biases, as introduced by Xu et al. [2024] and Bozorgasl and Chen [2024], respectively. In this work, we use generic bases to maintain representational flexibility; however, domain-specific bases are expected to better align latent encodings with the data space.
By removing redundant expressivity in generic bases and EBM priors, otherwise required to capture the vastness of unspecific data manifolds, domain-specific bases can focus representation on physically meaningful features. Future directions may involve shaping these constituent functions to enforce governing equations while integrating observational data, similar to the physicsinformed methods of Raissi et al. [2018].
After training, the interpretable structure of KANs allows the recovery of latent priors via visualization or symbolic regression, as per the approach of Liu et al. [2024], (Fig. 2). For smallerscale models, this can facilitate scientific discovery, support downstream tasks, and contribute to mechanistic interpretability research (reviewed by Bereska and Gavves [2024]).
(a) Generated FMNIST, ( Xiao et al. [2017]), after just 2,000 parameter updates using MLE / IS adhering to KART’s structure. Uniform, lognormal, and Gaussian priors are contrasted using RBF bases by Li [2024].
Figure 2: Three exponentially-tilted priors, extracted after training on MNIST (Deng [2012]) for 2,000 parameter updates using MLE / IS, while adhering to KART with RBFs. Lognormal is untrained due to posterior collapse.
These ideas define the baseline model, which adheres strictly to KART. In Sec. 2.4, we introduce practical strategies for scaling the univariate prior while relaxing KART’s constraints to address its potential inflexibility. We show how inter-dimensional relationships can be incorporated into the prior via a mixture distribution, enabling compatibility with ITS through a component-wise approach that preserves fast inference post-training.
(b) Darcy flow, ( Li et al. [2021]), after 12,000
parameter updates using MLE / IS while adhering to
KART. RBFs by Li [2024] are contrasted against FFTs by Xu et al. [2024] with lognormal priors.
Figure 1: Generative modeling using KART with different priors and basis functions. The middle image grids are coloured differently for clarity.
We also explore replacing IS with the Unadjusted Langevin Algorithm (ULA), using the ideas of Brooks et al. [2011] and Roberts and Stramer [2002], and adopting the sampling strategy proposed in Pang et al. [2020]. Although ULA is more expensive than IS, it remains efficient due to the low dimensionality of the latent space. ULA is particularly well-suited to smooth, unimodal distributions and provides more exploration, which is valuable when the prior is highdimensional and substantially misaligned with the posterior, which can compromise the reliability of the IS estimator.
In Sec.3, we propose an additional training criterion based on population-based Markov Chain Monte Carlo (pMCMC) methods, specifically Parallel Tempering (PT bySwendsen and Wang [1986]), to improve sampling performance in the presence of multimodal posteriors. While this method is more demanding, it targets the convergence issues often observed in EBMs.
Prior work addressing poor MCMC convergence in EBMs has typically relied on diffusion models, such as the hierarchical EBM by Cui and Han [2024] and the diffusion-assisted EBM by Zhang et al. [2023]. In contrast, our approach involves posterior annealing rather than denoising. Specifically, it decomposes the posterior into a sequence of power posteriors by Friel and Pettitt [2008], maintaining prior interpretability and enabling fast inference post-training. Although this may reduce expressivity compared to hierarchical EBMs, it ensures that inductive biases are easy to embed into the latent prior and retain during training.
Our criterion is derived from the discretized Thermodynamic Integral and the Steppingstone Estimator (SE), as described by Calderhead and Girolami [2009] and Annis et al. [2019]. This addresses concerns raised by Zhang et al. [2023] regarding the limited impact of PT on learning dynamics and parameter updates.
We evaluate these scalable extensions in Sec. 4.3, finding that the mixture prior outperforms a deep prior that is architecturally similar to the EBM used by Pang et al. [2020], while also offering greater computational efficiency. In contrast, the use of pMCMC and SE is not justified in our setting, given the significant computational overhead and limited gains. However, this assessment may change as better posterior sampling strategies become feasible with advances in hardware, such as those proposed by Zetta [2024].
To the best of our knowledge, this is the first application of a representation theorem to a probabilistic domain without compromising its deterministic structure. We introduce a practical and expressive framework for generative modelling, naturally suited to multimodal (multiple data type) tasks, as demonstrated by Yuan et al. [2024], and extendable to interpretable classification and text-to-text generation, following the principles of Pang and Wu [2021].
T-KAM also addresses a core limitation of the EBM prior framework proposed by Pang et al. [2020] by enabling inexpensive, yet high-quality inference through ITS, avoiding the computational overhead of iterative MCMC methods. Future work may further eliminate MCMC from training by incorporating amortized inference and variational methods, as guided by Kingma and Welling [2022]. This supports our claim that TKAM can potentially support a sustainable life cycle in data-driven science and engineering, justified against previous findings by Wu et al. [2022] in Sec. A.3.
We present T-KAM as an initial step to inspire further adaptations of KART in machine learning. This work aims to lay the groundwork for more exploration amongst the research community, with the far-reaching hope of eventually advancing towards a broader conceptualization: “The Kolmogorov-Arnold Representation Theorem Is All You Need.“
Figure 3: SVHN by Netzer et al. [2011], trained using the different scaling strategies for just 8,000 parameter updates. The priors are realized with Chebshev bases by SS et al. [2024] from Guassian reference distributions.
# 2 The Thermodynamic Kolmogorov-Arnold Model
This section introduces the model for Marginal Likelihood Estimation (MLE) along with a summary of the sampling procedures. More detailed sampling theory is provided in Sec. A.7.
# 2.1 Kolmogorov-Arnold theorem
The Kolmogorov–Arnold Representation theorem (KART) presented by Givental et al. [2009] forms the basis of T-KAM. A structured proof of the theorem has been provided by Dzhenzher and Skopenkov [2022]. For any integer $n _ { z } > 1$ , there are continuous univariate functions $\Phi _ { q } : \mathbb { R } \mathbb { R }$ and $\psi _ { q , p } : [ 0 , 1 ] \to \mathbb { R }$ , such that any continuous multivariate function, $g : [ 0 , 1 ] ^ { n _ { z } } \mathbb { R }$ , can be represented as a superposition of those functions:
$$
g ( u _ { 1 } , \dots , u _ { n _ { z } } ) = \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { q } \left( \sum _ { p = 1 } ^ { n _ { z } } \psi _ { q , p } ( u _ { p } ) \right) ,
$$
# 2.2 Univariate energy-based prior
In T-KAM, $u _ { p } \sim \mathcal { U } ( u _ { p } ; 0 , 1 )$ is treated as a uniform random variable. The functions $\psi _ { q , p }$ are interpreted as inverse cumulative distribution functions (CDFs) corresponding to independent univariate priors. These priors are defined over the probability space $\left( \mathcal { Z } , \bar { B } \left( \mathcal { Z } \right) , \pi \right)$ , where $\mathcal { Z } \subset \mathbb { R }$ denotes the latent space, $\boldsymbol { B } \left( \mathcal { Z } \right)$ is the Borel $\sigma$ - algebra on $\mathcal { Z }$ , and $\pi$ is a probability measure.
$$
\psi _ { q , p } ( u _ { p } ) = F ^ { - 1 } ( \pi _ { q , p } ) ( u _ { p } ) ,
$$
where $F ^ { - 1 } ( \pi _ { q , p } ) : [ 0 , 1 ] \to \mathcal { Z }$ is the generalized inverse CDF associated with a parameterized latent distribution, $\pi _ { q , p }$ . This is valid under standard regularity assumptions on $F$ , since it is a measurable, monotone, increasing function, and thus $F ^ { - 1 } ( u ) = \operatorname* { i n f } { \{ z \in \mathcal { Z } : F ( z ) \geq u \} }$ is a measurable mapping between probability spaces:
$$
( [ 0 , 1 ] , \mathcal { B } ( [ 0 , 1 ] ) , \mu ) \overset { F ^ { - 1 } } { \longrightarrow } ( \mathcal { Z } , \mathcal { B } ( \mathcal { Z } ) , \pi ) ,
$$
where $\mu$ is the Lebesgue measure on [0, 1]. This induces a deterministic pushforward measure via a Markov kernel from the uniform space to the latent space:
$$
K ( u _ { p } , A ) = \mathbf { 1 } _ { A } ( \psi _ { q , p } ( u _ { p } ) ) = \delta _ { \psi _ { q , p } ( u _ { p } ) } ( A ) ,
$$
where $\delta _ { z }$ denotes the Dirac measure centered at $z$ . Consequently, the transformation $u _ { p } \mapsto \psi _ { q , p } ( u _ { p } )$ defines a valid sampling procedure for latent variables. We parameterize the latent distribution $\pi _ { q , p }$ via exponential tilting of a base prior $\pi _ { 0 }$ :
$$
\pi _ { q , p } ( z ) = \frac { \exp ( f _ { q , p } ( z ) ) } { Z _ { q , p } } \pi _ { 0 } ( z ) ,
$$
where $f _ { \boldsymbol { q } , \boldsymbol { p } } ( \boldsymbol { z } )$ is a learned energy function and $Z _ { q , p }$ is the normalizing constant (partition function) that ensures $\pi _ { q , p }$ defines a valid probability.
The inspiration for this exponential tilting form can be traced to the energy-based prior introduced by Pang et al. [2020], adapted to the univariate case. The base prior $\pi _ { 0 }$ serves as a reference measure that guides the form of the learned distribution $\pi _ { q , p }$ , allowing for flexible reweighting via the energy function fq,p.
For $\pi _ { q , p }$ to define a valid probability distribution, it must satisfy countable additivity while being non-negative and normalized. This is guaranteed if $f _ { \boldsymbol { q } , \boldsymbol { p } } : \mathcal { Z } \mathbb { R }$ is Borel-measurable and integrable with respect to the reference density $\pi _ { 0 }$ , ensuring the finiteness of the partition function:
$$
Z _ { q , p } = \int _ { \mathcal { Z } } \exp ( f _ { q , p } ( z ) ) d \pi _ { 0 } ( z ) < \infty .
$$
In practice, $f _ { q , p }$ can be parameterized in various ways, made possible by the recent development of Kolmogorov–Arnold Networks (KANs). For instance, smoothing splines, such as those used by Liu et al. [2024] or Bohra et al. [2020]. Splines are typically defined on bounded domains, allowing the support of $\pi _ { q , p }$ to be explicitly controlled by the user.
The partition function $Z _ { q , p }$ is often intractable in energy-based models. However, in this univariate setting, where the energy function is smooth and defined on a bounded domain $[ z _ { \mathrm { m i n } } , z _ { \mathrm { m a x } } ]$ , it can be accurately approximated using Gauss–Kronrod quadrature Laurie [1997]:
$$
\begin{array} { c l c r } { { \displaystyle Z _ { q , p } = \int _ { z _ { \mathrm { m i n } } } ^ { z _ { \mathrm { m a x } } } \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) d z } } \\ { { \approx \displaystyle \sum _ { i = 1 } ^ { N _ { \mathrm { q u a d } } } w _ { i } , G ( z _ { i } ^ { \mathrm { n o d e } } ) , } } \end{array}
$$
where $\begin{array} { r l r } { G ( z ) } & { { } = } & { \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) } \end{array}$ , and $\{ z _ { i } ^ { \mathrm { n o d e } } , w _ { i } \} _ { i = 1 } ^ { N _ { \mathrm { q u a d } } }$ are the quadrature nodes and weights for the interval $[ z _ { \mathrm { m i n } } , z _ { \mathrm { m a x } } ]$ . The smoothness of $G$ , together with its bounded support, makes this approximation highly accurate.
With an valid form for the normalized CDF, inverse transform sampling (ITS) can be used to draw samples from $\pi _ { q , p }$ as deterministic transformations of uniform noise. This preserves the deterministic structure of KART while enabling flexible, learnable latent distributions. Inverse transform sampling remains effective even for functions that are not Riemann integrable, provided they are integrable in the Lebesgue sense, typical for the functions in KAN literature.
As a result, the model retains a coherent probabilistic interpretation of KART that is consistent with the energy-based framework proposed by Pang et al. [2020], which is well suited to maximum likelihood training.
However, this approach is significantly limited, as it ignores inter-dimensional dependencies in the latent space and results in an axis-aligned joint distribution. We address this inflexibility in Sec. 2.4.2 by relaxing our adherence to KART.
# 2.3 Kolmogorov-Arnold Network generator
Having sampled the intermediary latent variable with ITS such that $z _ { q , p } = F ^ { - \mathrm { i } } ( \pi _ { q , p } ) ( u _ { p } )$ , the generator, $\mathcal { G } : \mathcal { Z } \mathcal { X }$ , is formulated by:
$$
z _ { q , p } ^ { ( s ) } \sim \pi _ { q , p } ( z ) , \quad \mathrm { a n d } \quad \epsilon _ { o } ^ { ( s ) } \sim \mathcal { N } ( 0 , \sigma _ { \epsilon } ^ { 2 } ) ,
$$
$$
\tilde { x } _ { o } ^ { ( s ) } = \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { o , q } \left( \sum _ { p = 1 } ^ { n _ { z } } z _ { q , p } ^ { ( s ) } \right) + \epsilon _ { o } ^ { ( s ) } ,
$$
$$
\tilde { \mathbf { x } } ^ { ( s ) } = \left( \begin{array} { c } { \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { 1 , q } \left( \sum _ { p = 1 } ^ { n _ { z } } z _ { q , p } ^ { ( s ) } \right) } \\ { \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { 2 , q } \left( \sum _ { p = 1 } ^ { n _ { z } } z _ { q , p } ^ { ( s ) } \right) } \\ { \vdots } \\ { \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { n _ { x } , q } \left( \sum _ { p = 1 } ^ { n _ { z } } z _ { q , p } ^ { ( s ) } \right) } \end{array} \right) + \boldsymbol { \epsilon } ^ { ( s ) } ,
$$
where $\tilde { \mathbf { x } } ^ { ( s ) } = \{ \tilde { x } _ { o } ^ { ( s ) } \} _ { o = 1 } ^ { n _ { x } }$ represents a generated data sample corresponding to a single latent sample indexed by $s$ , $n _ { x }$ is used to specify the dimension of $\tilde { \pmb { x } } ^ { ( s ) }$ , and the noise is a Gaussian sample, ϵ(s) (ϵ; 0, σ2 I) with hyperparameter σϵ. The functions, $\Phi _ { o , q } : \mathbb { R } \mathbb { R }$ , are realizable as basis or KAN functions similar to $f _ { \boldsymbol { q } , \boldsymbol { p } } : \mathcal { Z } \mathbb { R }$ .
# 2.4 Scaling T-KAM
The trainable components of the prior and inner functions are shared across the features of each input $\mathbf { \boldsymbol { x } } ^ { ( b ) }$ , while the outer generator functions remain separate for each output dimension. Specifically, the learnable functions are defined as:
From this point forward, we formulate our derivations over a multivariate probability space $( \mathcal { Z } , B ( \mathcal { Z } ) , P )$ , where $\mathcal { Z }$ no longer represents a subinterval of the real line, but rather a subspace of $\mathbb { R } ^ { ( 2 n _ { z } + 1 ) \times n _ { z } }$ . The measure $P$ defines a product distribution over the latent variables:
$$
P ( z \mid f ) = \pi _ { 1 , 1 } \otimes \cdot \cdot \cdot \otimes \pi _ { 2 n _ { z } + 1 , n _ { z } } ,
$$
where each component $\pi _ { j , k }$ is defined via exponential tilting, as introduced in Sec. 2.2. The full latent variable $z$ is a matrix-valued random variable composed of these independent elements:
$$
z = \left( \begin{array} { c } { { ( z _ { 1 , 1 } , z _ { 1 , 2 } , \ldots , z _ { 1 , n _ { z } } ) } } \\ { { ( z _ { 2 , 1 } , z _ { 2 , 2 } , \ldots , z _ { 2 , n _ { z } } ) } } \\ { { \vdots } } \\ { { \vdots } } \\ { { ( z _ { 2 n _ { z } + 1 , 1 } , z _ { 2 n _ { z } + 1 , 2 } , \ldots , z _ { 2 n _ { z } + 1 , n _ { z } } ) } } \end{array} \right) .
$$
This generalizes the latent representation to a high-dimensional space.
# 2.4.1 Generator
The generator functions, $\Phi _ { o , q }$ , can be considered a separate generator network allowing for more flexible realizations, such as convolutional layers when adherence to Eq. 1 is not required. Deepening this generator, as guided by Liu et al. [2024] for the Kolmogorov-Arnold Network (KAN), does not replace KART as the innermost foundation of T-KAM:
$$
\begin{array} { c } { \displaystyle \boldsymbol { x } _ { o } ^ { ( s ) } = \sum _ { l _ { n } = 1 } ^ { L _ { n } } \boldsymbol { h } _ { o , l _ { n } } \left( \ldots h _ { l _ { 3 } , l _ { 2 } } \left( \sum _ { l _ { 1 } = 1 } ^ { L _ { 1 } } h _ { l _ { 2 } , l _ { 1 } } \right. \right. } \\ { \displaystyle \left. \left. \left( \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Phi _ { l _ { 1 } , q } \left( \sum _ { p = 1 } ^ { n _ { z } } z _ { q , p } ^ { ( s ) } \right) \right) \right) \right) + \boldsymbol { \epsilon } _ { o } ^ { ( s ) } . } \end{array}
$$
Here, each $h$ represents a hidden layer introduced between the outer sum of Eq. 1, and the output feature space to allow for more expressivity when required. In fact, each successive instance of $h$ can be interpreted as a distinct outer sum of Eq. 1, where the inner sum corresponds to the previous layer. This is valid provided that:
$$
\begin{array} { r l } & { \pmb { f } = \left( \begin{array} { c } { \left( f _ { 1 , 1 } , \ldots , f _ { 1 , n _ { z } } \right) } \\ { \left( f _ { 2 , 1 } , \ldots , f _ { 2 , n _ { z } } \right) } \\ { \vdots } \\ { \left( f _ { 2 n _ { z } + 1 , 1 } , \ldots , f _ { 2 n _ { z } + 1 , n _ { z } } \right) } \end{array} \right) , } \\ & { \pmb { \Phi } = \left( \begin{array} { c } { \left( \Phi _ { 1 , 1 } , \ldots , \Phi _ { 1 , 2 n _ { z } + 1 } \right) } \\ { \left( \Phi _ { 2 , 1 } , \ldots , \Phi _ { 2 , 2 n _ { z } + 1 } \right) } \\ { \vdots } \\ { \left( \Phi _ { n , 1 } , \ldots , \Phi _ { n , 2 n _ { z } + 1 } \right) } \end{array} \right) , } \end{array}
$$
where $n _ { x }$ denotes the dimensionality of the observed data.
$$
\begin{array} { c } { { L _ { 1 } = 2 \left( 2 n _ { z } + 1 \right) + 1 , } } \\ { { L _ { 2 } = 2 L _ { 1 } + 1 , } } \\ { { \ } } \\ { { \vdots \ } } \\ { { L _ { n } = 2 L _ { n - 1 } + 1 . } } \end{array}
$$
Thus, a smooth transition between the latent space and data space can be achieved by progressively doubling layer widths, eliminating the need for arbitrary choices while maintaining relaxed compliance with KART.
Strict adherence to KART can be achieved for hidden layer $h _ { l _ { k } , l _ { k - 1 } }$ by applying either a sigmoid or affine transformation after layer $h _ { l _ { k - 2 } , l _ { k - 3 } }$ to shift its range back to $[ 0 , 1 ] ^ { L _ { k - 1 } }$ . However, using sigmoid may lead to vanishing gradients and we contend that function domains are less critical to KART than its structural properties and should not be prioritized over maintaining gradient flow and avoiding saturation.
Adopting a KAN-based generator also allows for training with L1 regularization, followed by pruning with the method proposed by Liu et al. [2024]. This enables further model compression beyond the baseline structure defined by KART.
# 2.4.2 Mixture prior
As noted in Sec. 2.2, the independence of the priors $\pi _ { q , p }$ limits expressivity. Moreover, the summation over $p$ in Eq. 7 accumulates the variances of sampled $z _ { q , p }$ , which can lead to overly diffuse representations. To address this, particularly for more complex generative tasks such as those in Sec. 4.3, we relax the strict adherence to KART by reformulating the prior as a mixture distribution. This involves moving the summation over $p$ out of the generator and absorbing it into the prior, yielding a mixture model:
$$
\sum _ { p = 1 } ^ { n _ { z } } \pi _ { q , p } ( z ) = \sum _ { p = 1 } ^ { n _ { z } } \alpha _ { q , p } \frac { \exp ( f _ { q , p } ( z ) ) } { Z _ { q , p } } \pi _ { 0 } ( z ) ,
$$
$$
\sum _ { p = 1 } ^ { n _ { z } } \alpha _ { q , p } = 1 , \quad \alpha _ { q , p } \geq 0 .
$$
Here, $\alpha _ { q , p }$ denote mixture weights, practically enforced via a softmax function. The resulting distribution defines a mixture of energy-based priors. Sampling becomes more efficient than in the previous formulation, as it proceeds component-wise using ITS for discrete variables, following Devroye [2006]. However, this violates strict adherence to KART, since $F$ now corresponds to a discrete rather than continuous CDF.
To encourage balanced weighting and reduce model complexity, L1 regularization can be applied to $\alpha _ { q , p }$ and small values can be pruned. The mixture proportions are learned by adding $\log \alpha _ { q , p }$ to the gradients of the prior in Sec. 2.5.1.
Component selection follows a categorical distribution over $p = 1 , \ldots , n _ { z }$ , determined by $\alpha _ { q , p }$ . Let $u _ { \mathrm { m i x } } \sim \mathcal { U } ( u _ { \mathrm { m i x } } ; \ 0 , 1 )$ . Guided by Devroye [2006] regarding how ITS applied to discrete variables, the selected component $p ^ { * }$ is given by:
$$
p ^ { * } = \operatorname* { m i n } \left\{ j \mid \sum _ { p = 1 } ^ { j } \alpha _ { q , p } \geq u _ { \operatorname* { m i x } } \right\} .
$$
This ensures that each component $p ^ { * }$ is selected with probability $\alpha _ { q , p ^ { * } }$ . ITS can then be used to sample from the chosen component, (see 2.6.1).
# 2.4.3 KAN prior
If required, we may further deepen the energy functions $f _ { q , p }$ using the KAN approach. In fact, each $f _ { q , p }$ can itself be a realization of Eq. 1 when its domain is restricted to $[ 0 , 1 ]$ . This extension introduces new challenges:
1. It reduces the interpretability of the prior, complicating the recovery and visualization of trained priors.
2. It compromises the component-wise or univariate structure of the prior, undermining the validity of ITS and KART.
Instead, we propose removing the summation over $p$ in Eq. 7 and reparameterizing the prior using a multilayer KAN energy function, which can be deepened as needed:
$$
P \left( z \mid f \right) = \frac { \exp \left( \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \Psi _ { q , p } \left( \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z ) \right) \right) } { Z } \pi _ { 0 } ( z ) ,
$$
where $z \in \mathcal { Z } \subset \mathbb { R } ^ { n _ { z } }$ , and is constrained to $[ 0 , 1 ] ^ { n _ { z } }$ when strict adherence to KART is required. Sampling from this prior can be carried out using the Unadjusted Langevin Algorithm (ULA), as described in Sec. 3.4.
# 2.5 Maximum likelihood training
With $N _ { x }$ training examples in a batch, the optimization objective is the minimization of the negative log-marginal likelihood:
$$
\operatorname* { m i n } _ { f , \Phi } { \mathcal { L } } ( { \pmb x } ^ { ( 1 ) } , \ldots , { \pmb x } ^ { ( N _ { x } ) } ) = \operatorname* { m i n } _ { f , \Phi } \left[ - \sum _ { b = 1 } ^ { N _ { x } } \log \left( P ( { \pmb x } ^ { ( b ) } \mid f , \Phi ) \right) \right] .
$$
The conditional dependency on the collection, $f , \Phi$ , represents all learnable elements of T-KAM. For each sample of $\pmb { x } ^ { ( b ) }$ , the log-marginal likelihood in Eq. 15 can be separated out into two logdistributions by first introducing the latent vector, $z$ , otherwise absent upon marginalization over the
latent distribution. This is shown under Sec. A.6.2, and the resulting expectation exhibits a lack of dependence on sampling, as proven under Sec. A.6.3:
$$
\nabla _ { f , \Phi } \left[ \log P ( \boldsymbol { x } ^ { ( b ) } \mid \boldsymbol { f } , \Phi ) \right] = \mathbb { E } _ { P ( \boldsymbol { z } \mid \boldsymbol { x } ^ { ( b ) } , \boldsymbol { f } , \Phi ) } \left[ \nabla _ { f } \left[ \log P \left( \boldsymbol { z } \mid \boldsymbol { f } \right) \right] + \nabla _ { \Phi } \left[ \log P \left( \boldsymbol { x } ^ { ( b ) } \mid \boldsymbol { z } , \Phi \right) \right] \right] ,
$$
# 2.5.1 Log-prior
Given the independence of the prior’s parameters, the log-prior can be modeled as a sum over the univariate cases:
$$
\displaystyle \log \Big ( P \left( z \mid f \right) \Big ) = \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } \log \left( \pi _ { q , p } ( z _ { q , p } ) \right) = \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) + \log \left( \pi _ { 0 } ( z _ { q , p } ) \right) - \log \left( Z _ { q , p } \right)
$$
Tractable normalization is available through Eq. 6, however it may be more efficient to discard normalization in the learning gradient. In Sec. A.6.4, we derive the contrastive divergence learning gradient for T-KAM, which faster to train and more typical of energy-based modeling:
$$
\begin{array} { r } { \mathbb { E } _ { P ( z | x , f , \Phi ) } \Bigg [ \nabla _ { f } \left[ \log \ P \left( z \mid f \right) \right] \Bigg ] = \mathbb { E } _ { P ( z | x , f , \Phi ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \Bigg ] } \\ { - \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \Bigg ] . } \end{array}
$$
Here, $z$ is sampled from $P ( z \mid f )$ and $P ( \boldsymbol { z } \mid \boldsymbol { x } ^ { ( b ) } , f , \Phi )$ and reflects a univariate element of the prior model, (vectorized in Eq. 9). The expectation with respect to the posterior can be approximated using Importance Sampling (IS), outlined under Secs. 2.6.2 & A.7.2, or a standard Monte Carlo estimator when using Langevin sampling methods. The prior expectation may always be approximated with a Monte Carlo estimator taken across samples:
$$
\mathbb { E } _ { P ( z | f ) } \left[ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \right] \approx \frac { 1 } { N _ { s } } \sum _ { s = 1 } ^ { N _ { s } } \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \quad \mathrm { w h e r e } \quad z _ { q , p } ^ { ( s ) } \sim \pi _ { q , p } ( z ) .
$$
The gradient of Eq. 18 can therefore be viewed as a element-wise correction to the prior, incorporating information from the observations to align the prior with the posterior, thereby following the philosophy of empirical Bayes. The gradient can then be evaluated via autodifferentiation.
# 2.5.2 Log-likelihood
Depending on the task the log-likelihood can be modeled in multiple ways. For example, a Gaussian model might be adopted for images, and an autoregressive model for text. It can also be realized into a multimodal generative modeling framework using the principles outlined by Yuan et al. [2024].
$$
\begin{array} { r } { \log \left( P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \right) = \left\{ \begin{array} { l l } { - \frac { \left\| \pmb { x } ^ { ( b ) } - \tilde { \pmb { x } } \right\| _ { 2 } ^ { 2 } } { 2 \sigma _ { \mathrm { l i h o o } } ^ { 2 } } + \mathrm { c o n s t } , } \\ { \sum _ { a = 1 } ^ { n _ { x } } \sum _ { c = 1 } ^ { K } x _ { o , c } ^ { ( b ) } \log \tilde { x } _ { o , c } + \mathrm { c o n s t } , } \\ { \sum _ { t = 1 } ^ { T } \log \left( P ( \pmb { x } ^ { ( b , t ) } \mid x ^ { ( b , 1 ) } , \dots , x ^ { ( b , t - 1 ) } , z , \pmb { \Phi } ) \right) , } \end{array} \right. } \end{array}
$$
Here, $\tilde { \pmb { x } }$ is a generated sample from Eq. 7 corresponding to a single batch item, $\mathbf { \boldsymbol { x } } ^ { ( b ) }$ , $\sigma _ { \mathrm { { l l h o o d } } }$ is a hyperparameter subject to assumptions regarding the variance of the data, and $K$ denotes the number of discrete classes. The learning gradient attributed to the log-likelihood can also be evaluated using autodifferentiation, after first approximating the posterior expectation with IS or MC estimators.
# 2.6 Sampling procedures
There are two cases in T-KAM where samples are drawn from distributions that do not offer a means of sampling supported by off-the-shelf frameworks. Differentiability does not need to be maintained given the lack of gradient dependence on sampling, proven under Sec. A.6.3.
1. In Eqs. 7 & 19, $z _ { q , p }$ is drawn from $\pi _ { q , p }$ with inverse transform sampling (ITS).
2. The log-marginal likelihood and its learning gradient presented in Eqs. 15 & 16, involve expectations with respect to the posterior $P ( \boldsymbol { z } \mid \boldsymbol { x } ^ { ( b ) } , f , \Phi )$ , which is potentially multimodal and complex. We propose Importance Sampling (IS) as an efficient method for estimating its expectation. IS provides a quick approximation of the posterior expectation without requiring extensive exploration. In contrast, PT with ULA, (introduced in Sec. 3.4) is more stochastic and exploratory, often yielding higher-quality samples from the posterior.
A more comprehensive overview of the sampling methods is presented in Sec. A.7. The main body solely presents the summarized procedures.
# 2.6.1 Inverse transform sampling
To sample from a component of the latent distribution, $\pi _ { q , p }$ , we use inverse transform sampling (ITS), made possible given the unique characteristics of T-KAM. Specifically, the univariate, finitegrid structure of the constituent functions enables quadrature/interpolating integral approximations.
1. CDF inversion: Sampling occurs using the inverse cumulative distribution function (CDF): The CDF for a single component, $F _ { \pi _ { q , p } }$ , is:
$$
F _ { \pi _ { q , p } } ( z ^ { \prime } ) = \frac { \int _ { z _ { \operatorname* { m i n } } } ^ { z ^ { \prime } } \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) d z } { Z _ { q , p } } ,
$$
$$
Z _ { q , p } = \int _ { z _ { \mathrm { m i n } } } ^ { z _ { \mathrm { m a x } } } \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) d z .
$$
These two integrals can be conveniently and efficiently approximated using Gauss-Kronrod quadrature, (Laurie [1997]), given that $f _ { q , p }$ is univariate and defined on a bounded grid:
$$
G ( z ) = \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) ,
$$
$$
\int _ { z _ { \mathrm { m i n } } } ^ { z ^ { \prime } } G ( z ) d z \approx \sum _ { i = 1 } ^ { N _ { \mathrm { q u a d } } } w _ { i } G ( z _ { i } ^ { \mathrm { n o d e } } ) ,
$$
where $z _ { i } ^ { \mathrm { n o d e } }$ and $w _ { i }$ are the Gauss-Legendre quadrature nodes and weights for the interval $[ z _ { \mathrm { m i n } } , z ^ { \prime } ]$ . A random variable is uniformly drawn and shared across q, $u _ { p } \sim \mathcal { U } ( u _ { p } ; 0 , 1 )$ . The first interval where $F _ { \pi _ { q , p } } ( z )$ is greater than $u _ { p }$ scaled by $Z _ { q , p }$ is identified:
$$
k ^ { * } = \operatorname* { m i n } \Bigg \{ j \ \vert \ \sum _ { i = 1 } ^ { j } w _ { i } G ( z _ { i } ^ { \mathrm { n o d e } } ) \geq Z _ { q , p } \cdot u _ { p } \Bigg \} .
$$
2. Linear interpolation: The sample $z _ { p }$ is interpolated within the subset, $[ z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } , z _ { k ^ { * } } ^ { \mathrm { n o d e } } ]$ :
$$
\begin{array} { r } { z _ { p } \approx z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } + \left( z _ { k ^ { * } } ^ { \mathrm { n o d e } } - z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } \right) \cdot \delta } \\ { \delta = \frac { u _ { p } - F _ { \pi _ { q , p } } ( z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } ) } { F _ { \pi _ { q , p } } ( z _ { k ^ { * } } ^ { \mathrm { n o d e } } ) - F _ { \pi _ { q , p } } ( z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } ) } . } \end{array}
$$
The procedure ensures unbiased, high-quality, and efficient generation of samples, $z _ { q , p }$ , distributed according to $\pi _ { q , p }$ . It can be vectorized to generate samples from all prior elements.
# 2.6.2 Importance Sampling
The following procedure is elaborated in more detail under Sec. A.7.2. To approximate expectations with respect to the target, $P ( \boldsymbol { z } | \boldsymbol { x } ^ { ( b ) } , \bar { \boldsymbol { f } } , \Phi )$ , we use Importance Sampling (IS).
1. Proposal distribution: $N _ { s }$ independent samples, z(s) Ns , }s=1 are drawn from the proposal distribution, ${ \bar { Q } } ( z ) ~ = ~ P ( z ~ | ~ f )$ , which the target distribution, $P ( z \mid x ^ { ( b ) } , f , \Phi )$ must be absolutely continuous with respect to. Sampling from $Q$ is tractable with the procedure outlined in Sec. 2.6.1.
2. Importance weights: The normalized importance weights are evaluated:
$$
w _ { \mathrm { n o r m } } \bigl ( z ^ { ( s ) } \bigr ) = \frac { w \bigl ( z ^ { ( s ) } \bigr ) } { \sum _ { r = 1 } ^ { N _ { s } } w \bigl ( z ^ { ( r ) } \bigr ) }
$$
$$
\begin{array} { r l } & { = \operatorname { s o f t m a x } _ { s } \Big ( \log ( P ( \pmb { x } ^ { ( b ) } \mid z ^ { ( s ) } , \pmb { \Phi } ) ) \Big ) . } \end{array}
$$
3. Effective Sample Size: To prevent weight degeneracy, resampling is conducted when the Effective Sample Size (ESS) falls below a threshold specified by a hyperparameter, $\gamma$ :
$$
E S S = \frac { 1 } { \sum _ { s = 1 } ^ { N _ { s } } w _ { \mathrm { n o r m } } ( z ^ { ( s ) } ) } < \gamma \cdot N _ { s } .
$$
Otherwise, the weights are taken without alteration, (i.e. the next step is skipped). When the weights are uniformly spread the ESS evaluates to $N _ { s }$ . High-variance weights with a low
ESS indicate that many samples contribute minimally, resulting in inefficient sampling. Resampling focuses learning on samples that are more representative of the posterior distribution, thereby maximizing the usefulness of each parameter update.
4. Resample: If required, the population, {z(s), wnorm(z(s))}sN=s1, is resampled from. There are multiple ways to accomplish this, as compared by Douc et al. [2005]. Systematic, stratified, and residual resampling are provided in the codebase under Sec. A.1, parallelized on the CPU. In this study, we adopted residual resampling, (explained under Sec. A.7.3).
5. Expectation estimation: The posterior expectation of an arbitrary function $\rho ( z )$ , (e.g., the log-likelihood), is estimated with a Monte Carlo estimator:
$$
\begin{array} { r l } & { \mathbb { E } _ { P ( z | { \pmb x } ^ { ( b ) } , { \pmb f } , \Phi ) } \big [ \rho ( z ) \big ] } \\ & { \approx \displaystyle \sum _ { s = 1 } ^ { N _ { s } } \rho ( z _ { \mathrm { r e s a m p l e d } } ^ { ( s ) } ) \cdot w _ { \mathrm { n o r m } } \big ( \bar { z } _ { \mathrm { r e s a m p l e d } } ^ { ( s ) } \big ) . } \end{array}
$$
This estimator is used for computing the logmarginal likelihood and its gradient in Eqs. 15 and 16. However, unlike ULA (discussed in Sec. 3.4 and demonstrated in Sec. 4.3), IS fails to support posterior exploration when the prior is poorly aligned, making it unsuitable for generalpurpose tasks beyond those similar to the datasets presented in Secs. 4.1 and 4.2.
# 3 Steppingstone Estimator
This section explores the Steppingstone Estimator (SE) and Thermodynamic Integration (TI) as an alternative means of training T-KAM and estimating the marginal likelihood.
Figure 4: Demonstration of power posterior evolution with $t$ . A simple Gaussian prior is sculpted into forms that resemble a more complex, multi-modal posterior as $t$ is incremented from 0 to 1.
# 3.1 The Thermodynamic Integral
We inform our approach to tempering using the Thermodynamic Integral. In place of the log-marginal likelihood outlined in Eq. 15, this exploits the following representation derived in Sec. A.6.5:
$$
\log \left( P ( \mathbf { x } ^ { ( b ) } \mid f , \Phi ) \right) = \int _ { 0 } ^ { 1 } \mathbb { E } _ { P ( z | \mathbf { x } ^ { ( b ) } , f , \Phi , t ) } \left[ \log \left( P \left( x ^ { ( b ) } \mid z , \Phi \right) \right) \right] d t
$$
Here, $P ( z \mid x ^ { ( b ) } , f , \Phi , t )$ has been introduced as the power posterior, which can be considered a representation of the original posterior distribution, with a likelihood tempered by a temperature parameter, $t$ . This is introduced by Friel and Pettitt [2008] and organized for our purposes as follows:
$$
P ( z \mid x ^ { ( b ) } , f , \Phi , t ) = { \frac { P \left( x ^ { ( b ) } \mid z , \Phi \right) ^ { t } \cdot P ( z \mid f ) } { Z _ { t } } } ,
$$
$$
Z _ { t } = \mathbb { E } _ { P ( z \mid f ) } \left[ P \left( x ^ { ( b ) } \mid z , \Phi \right) ^ { t } \right] = \int _ { z } P \left( x ^ { ( b ) } \mid z , \Phi \right) ^ { t } d P ( z \mid f )
$$
Given the form of Eq. 30, setting $t = 0$ results in the power posterior distribution assuming the form of the prior distribution. As $t$ increments, the posterior gradually adopts a shape more closely resembling the true Bayesian posterior measure. At $t = 1$ , the posterior recovers its original form and converges to the original posterior distribution.
The effect of tempering is illustrated in Fig. 4, starting with a Gaussian prior that is entirely uninformed of $\pmb { x } ^ { ( b ) }$ . As $t$ increases from 0 to 1, the contribution of the likelihood term in Eq. 30 progressively grows, ultimately leading to the posterior at $t = 1$ . This final posterior can be interpreted as the prior updated by information from the samples of $\boldsymbol { x } ^ { ( b ) }$ .
# 3.2 Discretizing the Thermodynamic Integral
The exact calculation of the integral in Eq. 29 is feasible given that the bounds are $t = 0$ , corresponding to the expectation being taken with respect to the prior, and $t = 1$ , where it is taken with respect to the posterior. However, the integral can also be expressed exactly without collapsing the temperatures, offering greater flexibility in how the integral evolves through the temperature schedule, $\{ t _ { k } \} _ { k = 0 } ^ { N _ { t } }$ . In particular, the temperature schedule is discretized as:
$$
\pmb { t } = ( t _ { 0 } , t _ { 1 } , \dots t _ { N _ { t } } ) , \qquad t _ { k } \in [ 0 , 1 ]
$$
Here, $t _ { k }$ denotes the tempering at the $k$ -th index of the schedule, and $N _ { \mathrm { t } }$ is the number of temperatures. As derived by Calderhead and Girolami [2009], Eq. 29 can then be evaluated using:
$$
\log \Big ( P ( \mathbf { x } ^ { ( b ) } \mid f , \Phi ) \Big ) = \frac { 1 } { 2 } \sum _ { k = 1 } ^ { N _ { t } } \Delta t _ { k } ( E _ { k - 1 } + E _ { k } ) + \frac { 1 } { 2 } \sum _ { k = 1 } ^ { N _ { t } } D _ { \mathrm { K L } } ( P _ { t _ { k - 1 } } | | P _ { t _ { k } } ) - D _ { \mathrm { K L } } ( P _ { t _ { k } } | | P _ { t _ { k - 1 } } )
$$
Where:
$$
\begin{array} { r l r } & { } & { \Delta t _ { k } = t _ { k } - t _ { k - 1 } , \qquad E _ { k } = { \mathbb E } _ { P ( \boldsymbol { z } | \boldsymbol { x } ^ { ( b ) } , \boldsymbol { f } , \boldsymbol { \Phi } , t _ { k } ) } \left[ \log \left( P \left( \boldsymbol { x } ^ { ( b ) } \mid \boldsymbol { z } , \boldsymbol { \Phi } \right) \right) \right] , } \\ & { } & { D _ { \mathrm { K L } } ( P _ { t _ { k - 1 } } | | P _ { t _ { k } } ) = { \mathbb E } _ { P ( \boldsymbol { z } | \boldsymbol { x } ^ { ( b ) } , \boldsymbol { f } , \boldsymbol { \Phi } , t _ { k - 1 } ) } \left[ \log \left( \frac { P \left( \boldsymbol { z } \mid \boldsymbol { x } ^ { ( b ) } , \boldsymbol { f } , \boldsymbol { \Phi } , t _ { k - 1 } \right) } { P \left( \boldsymbol { z } \mid \boldsymbol { x } ^ { ( b ) } , \boldsymbol { f } , \boldsymbol { \Phi } , t _ { k } \right) } \right) \right] } \end{array}
$$
This reflects the trapezium rule for numerical integration, supported by bias correction to provide an estimate of the Riemann integral that is especially robust to error. The learning gradient derived from the discretized Thermodynamic Integral is unchanged from the MLE gradient in Eq. 16, as demonstrated under Sec. A.6.6.
# 3.3 Steppingstone Estimator
In Sec. A.6.6, the Steppingstone estimator (SE) introduced by Annis et al. [2019] is derived from the discretized Thermodynamic Integral, thus establishing a connection between the two:
$$
\mathrm { o g } \left( P ( \boldsymbol { x } ^ { ( b ) } \mid f , \Phi ) \right) = \sum _ { k = 1 } ^ { N _ { t } } \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) = \sum _ { k = 1 } ^ { N _ { t } } { \Big ( } \log ( Z _ { t _ { k } } ) - \log ( Z _ { t _ { k - 1 } } ) { \Big ) } = \underbrace { \log ( Z _ { t _ { N _ { t } } } ) } _ { = \mathrm { M L E } } - \underbrace { \log ( Z _ { t _ { 0 } } ) } _ { = 0 \mathrm { t z } _ { 4 } * 6 } .
$$
Following the work of Annis et al. [2019], the log-partition ratios are estimated with Monte Carlo estimators using the intermediary samples of ULA:
$$
\begin{array} { c } { { \displaystyle \log \Big ( P ( { \pmb x } ^ { ( b ) } \mid { \pmb f } , { \Phi } ) \Big ) = \log ( { \pmb Z } _ { t _ { N _ { t } } } ) ^ { \mathrm { s s } } = \displaystyle \sum _ { k = 1 } ^ { N _ { t } } \log \Bigg ( { \frac { { \pmb Z } _ { t _ { k } } } { { \pmb Z } _ { t _ { k - 1 } } } } \Bigg ) } } \\ { { \approx \displaystyle \frac { 1 } { N _ { s } } \displaystyle \sum _ { k = 1 } ^ { N _ { t } - 1 } \sum _ { s = 1 } ^ { N _ { s } } \big ( t _ { k + 1 } - t _ { k } \big ) \left[ \log P \left( { \pmb x } ^ { ( b ) } \mid { \pmb z } ^ { ( s , t _ { k } ) } , { \pmb \Phi } \right) \right] , \quad \mathrm { w h e r e } \quad { \pmb z } ^ { ( s , t _ { k } ) } \sim P ( { \pmb z } \mid { \pmb x } ^ { ( b ) } , { \pmb f } , { \pmb \Phi } , t _ { k - 1 } ) . } } \end{array}
$$
SE and its variants, such as Annealed Importance Sampling (AIS) by Neal [1998], are commonly used to estimate a model’s evidence without a tractable way to learn the prior, as they lack dependence on prior gradients beyond sampling. However, by recognizing the direct connection between the SE
and MLE learning gradients, as proven in Sec. A.6.6, one can simply add the contrastive divergence learning gradient from Eq. 18 to the SE estimator.
This is valid because the SE gradient matches the MLE gradient, and the likelihood gradient is independent of the prior gradient. The posterior expectation of contrastive divergence in Eq. 18 can be estimated with a Monte Carlo estimator using the final samples of ULA.
# 3.4 Unadjusted Langevin Algorithm (ULA) and Replica Exchange
In their implementation of Thermodynamic Integration (TI), Calderhead and Girolami [2009] estimated the Thermodynamic Integral using the discretization in Sec. 3.2, using population-based MCMC and Parallel Tempering (PT) to simultaneously sample from all power posteriors, as described by Swendsen and Wang [1986], Marinari and Parisi [1992], and Hukushima and Nemoto [1996].
Specifically, parallel chains were maintained at each temperature, (or Replica), using MetropolisHastings (MH by Metropolis and Ulam [1949]), to drive local moves for each power posterior. They also proposed a geometric path between the prior and posterior, enabling global swaps between adjacent temperatures. This approach allowed higher temperatures to leverage the efficient mixing of lower temperatures, thereby facilitating a more thorough exploration of the posterior landscape.
We adopt a similar strategy but follow Pang et al. [2020] in using the Unadjusted Langevin Algorithm (ULA) without MH correction, detailed by Brooks et al. [2011]. Specifically, we update samples using the forward Euler–Maruyama discretization of the Langevin SDE, as proposed by Roberts and Stramer [2002], with the transition kernel described in Sec. A.7.4:
$$
z ^ { ( i + 1 , t _ { k } ) } = z ^ { ( i , t _ { k } ) } + \eta \nabla _ { z ^ { ( i , t _ { k } ) } } \log \gamma \left( z ^ { ( i , t _ { k } ) } \right) + \sqrt { 2 \eta } \epsilon ^ { ( i , t _ { k } ) } ,
$$
where $\eta$ is the step size, $\epsilon ^ { ( i , t _ { k } ) } \sim \mathcal { N } ( \mathbf { 0 } , I )$ , and $\gamma _ { t _ { k } }$ denotes the target power posterior:
$$
\begin{array} { r } { \operatorname { o g } \gamma \left( z ^ { ( i , t _ { k } ) } \right) = \log P ( z ^ { ( i , t _ { k } ) } \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) \propto t _ { k } \log P ( x ^ { ( b ) } \mid z ^ { ( i , t _ { k } ) } , \Phi ) + \log P ( z ^ { ( i , t _ { k } ) } \mid f ) . } \end{array}
$$
Unlike MCMC methods that solely rely on MH proposals, ULA accounts for the local geometry of the posterior, allowing more efficient exploration. In contrast, MH neglects this geometry, leading to high rejection rates, particularly problematic for the complex likelihood models used in T-KAM (Eq. 20), and requiring many iterations to sufficiently evolve the chain.
While ULA is computationally efficient and enables more flexible chain dynamics, it does not preserve detailed balance or reversibility. Thes can be recovered by incorporating MH corrections via the Metropolis-adjusted Langevin Algorithm (MALA by Rossky et al. [1978]). The codebase optionally supports MALA with locally adaptive step-size tuning, (autoMALA by Biron-Lattes et al. [2024]), to assist when a single step size is not sufficient for all power posteriors. The algorithms are discussed under Secs. A.7.4 and A.7.5.
Nevertheless, ULA remains the preferred method in our work, motivated by its demonstrated efficiency and effectiveness by Pang et al. [2020], and further supported by the findings of Nijkamp et al. [2019b] and Nijkamp et al. [2019a], with regards to how short-run, biased MCMC methods can still be highly effective for sampling from EBMs.
ULA is best suited for smooth, unimodal distributions, but annealing with power posteriors might improve sampling from multimodal distributions. Mixing can be further improved by periodically enabling global swaps between adjacent temperatures, subject to the following acceptance criteria:
$$
r = \frac { P \left( \pmb { x } ^ { ( b ) } \mid \pmb { z } ^ { ( i , t _ { k + 1 } ) } , \pmb { \Phi } \right) ^ { t _ { k } } P \left( \pmb { x } ^ { ( b ) } \mid \pmb { z } ^ { ( i , t _ { k } ) } , \pmb { \Phi } \right) ^ { t _ { k + 1 } } } { P \left( \pmb { x } ^ { ( b ) } \mid \pmb { z } ^ { ( i , t _ { k } ) } , \pmb { \Phi } \right) ^ { t _ { k } } P \left( \pmb { x } ^ { ( b ) } \mid \pmb { z } ^ { ( i , t _ { k + 1 } ) } , \pmb { \Phi } \right) ^ { t _ { k + 1 } } }
$$
The proposed swap is accepted with probability: $\operatorname* { m i n } \left( 1 , r \right)$ , such that $z ^ { ( i , t _ { k + 1 } ) } z ^ { ( i , t _ { k } ) }$ .
1.0 1.0 1.0 Integrand, Ek 0.8 Integrand, Ek Integrand, Ek Area Area Area
0.6 Discretisation:p= 0.35 Discretisation: p = 1 Discretisation:p = 4 0.4 0.4 0.4 0.2 0.2 0.0 0.0 0.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 $1 . 0$ Temperature tk Temperature tk Temperature $t _ { k }$
(a) $p = 0 . 3 5$ schedule has more (b) $p = 1$ schedule is uniformly (c) $p = 4$ schedule has more bins bins clustered towards $t = 1$ . distributed between the bounds. clustered towards $t = 0$ .
Figure 5: Plots depicting how the power-law schedule clusters integral approximation error. Evaluation points can be skewed towards a particular bound by choosing $p$ . Increasing $p$ provides more tempering, concentrating more evaluation points on smoother likelihoods. Tempering helps to mitigate practical challenges associated with the non-smoothness of the univariate functions in Eq. 1.
# 3.5 Cyclic annealing
Calderhead and Girolami [2009] demonstrated that uniformly scheduling the discretized temperatures in Eq. 31 results in unequal contributions of adjacent power posteriors to the Thermodynamic Integral. The success of TI and SE, relies on selecting each $t _ { k }$ to ensure sufficient overlap of adjacent power posteriors.
This overlap ensures a well-conditioned RadonNikodym derivative between adjacent power posteriors, as introduced in Sec. A.7.2, which influences the Monte Carlo estimator in Eq. 35, given that SE is an application of importance sampling, as shown by Annis et al. [2019].
This overlap is quantified by the KL divergence term in Eq. 32. Maintaining small KL divergences between adjacent power posteriors can be achieved by clustering temperatures in regions where $P ( z \mid \dot { \mathbf { \delta } } \mathbf { x } ^ { ( b ) } , \mathbf { \delta } \mathbf { f } , \dot { \Phi _ { \mathbf { \delta } } } { t _ { k } } )$ changes rapidly. To accomplish this, we follow Calderhead and Girolami [2009] in adopting a power-law relationship to schedule the temperatures of Eq. 31:
$$
t _ { k } = \left( \frac { k } { N _ { t } } \right) ^ { p } , \quad k = 0 , 1 , \hdots , N _ { t } ,
$$
The effect of varying $p$ on the schedule between 0 and 1 is visualized in Fig. 6. When $p > 1$ , spacing becomes denser near $t = 0$ , while $p < 1$ creates denser spacing near $t = 1$ . The impact of varying $p$ on the discretized Thermodynamic Integral of Eq. 32 is illustrated in Fig. 5. The plots demonstrate that for $p > 1$ , the trapezoidal bins are clustered closer to $t = 0$ , concentrating the discrete approximation error towards $t = 1$ .
The study by Calderhead and Girolami [2009] derives an analytic term for an optimal tempering distribution in the context of linear regression models, where the power posterior in Eq. 30 is used to evaluate the plausibility of model parameters in relation to the data, rather than reflecting a latent variable posterior. However, their findings may not hold in the context of T-KAM and generative modeling, making $p$ and the temperature schedule difficult variables to tune.
Instead, we schedule $p$ cyclically across all parameter updates of T-KAM’s training procedure, similar to the previous work of Fu et al. [2019]. The initial and final $p$ ’s and the number of cycles are treated as tunable hyperparameters:
$$
\begin{array} { c } { \delta = \mathrm { l i n s p a c e } \left( 0 , 2 \pi ( N _ { \mathrm { c y c l e s } } + 0 . 5 ) ; \mathrm { l e n } = N _ { \mathrm { u p d a t e s } } \right) } \\ { p ( i ) = p _ { \mathrm { i n i t } } + \displaystyle \frac { p _ { \mathrm { e n d } } - p _ { \mathrm { i n i t } } } { 2 } \left( 1 - \cos \delta \right) } \end{array}
$$
Figure 6: Power-law schedule.
where $N _ { \mathrm { u p d a t e s } }$ denotes the total number of parameter updates, $\mathbf { \chi } _ { i }$ indexes the current training iteration, $N _ { \mathrm { c y c l e s } }$ is the number of cycles, and $\delta$ is a vector of length $N _ { \mathrm { u p d a t e s } }$ .
Initializing $p > 1$ focuses learning on smoother likelihoods during the early stages of the cycle, where global features emerge between adjacent power posteriors. As training progresses, T-KAM will have learned the global features sufficiently well, reducing the change of measure between adjacent power posteriors of a $p > 1$ schedule. At this stage, $p ( i )$ will have transitioned to $p \leq 1$ , thereby shifting T-KAM’s focus towards capturing more detailed features of the posterior landscape, where training is expected to enhance its ability to generate finer details.
Guidance on selecting optimal values for the tempering hyperparameters is provided in Tab. 3.
# 4 Experiments
Gaussian Radial Basis Functions (RBF KANs), as presented by Li [2024], were used instead of the cubic B-spline bases proposed by Liu et al. [2024]. RBFs are more efficient for GPU implementation, and Poggio and Girosi [1990] previously demonstrated that radial functions provide an optimal solution to the regularized approximation/interpolation problem.
Fourier bases, (FFTs by Xu et al. [2024]), were also considered in Secs. 4.1 and 4.2, and Chebyshev bases, (Chebyshev by SS et al. [2024]), were considered in Sec. 4.3. However, FFTs were found to be non-smooth and potentially discontinuous, as illustrated by the prior distributions plotted in Sec. A.9. This may have compromised adherence to KART and future work might explore incorporating regularization to improve their smoothness.
To quantify the fidelity of generated data in a more reliable manner than previous studies in the literature, we used effectively unbiased metrics based on the principles outlined by Chong and Forsyth [2020]. By applying linear regression to extrapolate the values of standard metrics to an infinitely sized sample set, Monte Carlo error was mitigated, (otherwise associated with finite sample sizes).
For Sec. 4.3, we gathered $\overline { { \mathrm { F I D } } } _ { \infty }$ and $\overline { { \mathrm { K I D } } } _ { \infty }$ , which are effectively unbiased estimators of the Fréchet Inception Distance and Kernel Inception Distance, introduced by Heusel et al. [2018] and Bi´nkowski et al. [2021] respectively. The set of sample sizes used for the linear regression were: 1000, 1200, 1400, 1600, 1800, 2000, 2200, 2400, 2600, 2800, 3000.
However, we recognise that quantifying the fidelity of generated data is inherently challenging and not an exact science. While the metrics used in this study have limitations, they align with those commonly found in the literature. Our conclusions are also informed by our subjective perception of the images. Furthermore, no metrics were applied in Secs. 4.1 & 4.2, as vision metrics rely on the Inception V3 model trained on ImageNet (Szegedy et al. [2015]), potentially introducing bias against other datasets.
More image samples are provided in Sec. A.8.
# 4.1 MNIST & FMNIST with MLE / IS and adherence to KART
The MNIST and FMNIST datasets, introduced by Deng [2012] and Xiao et al. [2017] respectively, are grayscale datasets that can be modeled with the first likelihood model from Eq. 20. The first collection of benchmarks presented in Sec. A.2 were obtained using the hyperparameters and architecture specified for this experiment in Tab. 4. T-KAM was trained for 10 epochs with 20,000 training examples, an importance sample size of $N _ { s } = 1 0 0$ , and a batch size of $N _ { x } = 1 0 0$ , (corresponding to 2,000 parameter updates).
Figure 7: Generated NIST images using three different exponentially-tilted priors, initialized from $\mathcal { N } ( z ; \ \mathbf { 0 } , \mathbf { 1 } )$ , Lognormal $( z ; { \mathbf 0 } , { \mathbf 1 } )$ , and $\mathcal { U } ( z ; \mathbf { 0 } , \mathbf { i } )$ . Lognormal is inverted for clarity.
Lognormal priors introduced skew, leading to posterior mode collapse. T-KAM learned to generate an average of all MNIST or FMNIST classes, and failed to learn a meaningful latent prior. In contrast, Gaussian priors produced the sharpest images for FMNIST, and uniform priors yielded the sharpest MNIST digits.
# 4.2 2D Darcy flow pressures with MLE / IS and adherence to KART
The dataset used in this experiment was sourced from Li et al. [2021]. The Darcy flow equation is a second-order elliptic partial differential equation (PDE) that models various physical phenomena, including fluid flow through porous media and heat conduction. At steady state, its twodimensional form on the unit square with Dirichlet boundary conditions is given by:
$$
\begin{array} { r l } { - \nabla \cdot ( k ( z ) , \nabla u ( z ) ) = f ( z ) , } & { z \in [ 0 , 1 ] ^ { 2 } , } \\ { u ( z ) = 0 , } & { z \in \partial [ 0 , 1 ] ^ { 2 } , } \end{array}
$$
where $k ( z )$ is the 2D permeability field, $u ( z )$ represents the 2D pressure field, and $f ( z )$ is a forcing function evaluated at $z$ .
Generating new flow-pressure samples is a challenging task in machine learning. One approach is to use input permeability samples while employing the Fourier Neural Operator (FNO), (presented by Li et al. [2021]), to learn the solution operator mapping between permeability and flow pressure. However, in this study, we use the purely generative task to illustrate how weak inductive biases can influence what T-KAM learns.
T-KAM was trained to generate new Darcy flow pressures using a batch size and importance sample size of $\bar { N _ { x } } = 5 0 , N _ { s } = 5 0$ . The training spanned 600 epochs with 1,000 training examples, resulting in 12,000 parameter updates. All other hyperparameters are provided in Tab. 5.
Figure 8: Generated Darcy flow pressures with exponentially-tilted priors initialized from standard Gaussian, lognormal, and uniform distributions. Radial Basis Functions, (RBFs by Li [2024]), are contrasted against Fourier bases, (FFTs by Xu et al. [2024]). RBF is colored differently for clarity.
While RBF bases led to oversmoothing, FFTs better captured large-scale variations. However, all priors produced similar generations, despite constraining different latent representations, plotted in Sec. A.9.
# 4.3 CIFAR-10 & SVHN with convolutional neural network and scaling strategies
T-KAM, when formulated with KART, can be interpreted as a dense or spline-based network. However, for spatial data, convolutional neural networks (CNNs), introduced by LeCun et al. [2015], are more suitable. In addition to their computational efficiency through weight sharing, CNNs offer a key advantage over dense networks due to their spatial inductive bias: sequences of convolutional layers implement shift-equivariant transformations that are well-suited for images.
T-KAM can be adapted to image generation by moving away from KART, and implementing the generator as a CNN.
To verify that T-KAM exhibits behaviour comparable to the model proposed by Pang et al. [2020] under similar posterior sampling strategies, we trained T-KAM on the CIFAR-10 and SVHN datasets, (introduced by Krizhevsky [2009] and Netzer et al. [2011], respectively), using a KAN prior and the CNN-based generator adopted in their approach.
This allowed us to compare the mixture prior with ITS against the deep prior with ULA (i.e., the approach of Pang et al. [2020]), and to assess whether SE can outperform MLE.
Due to limited computational resources during this study, (detailed in Sec. A.2), training was shortened from 70,000 to 8,000 parameter updates. Due to their parameter efficiency, Chebyshev bases were used for the prior, as introduced by SS et al. [2024]. The overall architecture mirrored that of Pang et al. [2020], though it is important to note that dense KANs do not directly replicate the behaviour of standard dense layers. Additionally, we used a single optimizer to train both models, rather than using separate optimizers for the prior and generator networks. We considered these discrepancies to be acceptable, given the comparative nature of this experiment.
We compared the following two approaches for scaling the prior:
1. the mixture distribution in Eq. 12, which uses ITS to generate samples from the prior
2. the deep prior in Eq. 14, which requires ULA for sampling from the prior.
Posterior sampling was conducted with ULA, and a Gaussian reference was used, ${ \boldsymbol \pi } _ { 0 } = { \mathcal N } ( z ; { \mathbf 0 } , { \mathbf 1 } )$ . SE and Parallel Tempering were also evaluated to verify their potential as a training strategy.
T-KAM was trained for 20 epochs with a batch size of $N _ { x } = 1 0 0$ on 40,000 training examples. All remaining hyperparameters are provided in Tab. 6, and architectural details for the mixture and deep prior models, as well as the CNN generators, are included in Sec. A.5.1.
For brevity, only the SVHN images are visualized in the main body. Samples of CIFAR-10 are provided under Sec. A.8.5
Figure 10: Generated samples from T-KAM trained on SVHN using a deep prior.
Table 1: Effectively unbiased FID and KID scores (lower is better) for SVHN.
The mixture prior with ITS outperformed the deep prior with ULA, while also being more computationally efficient and enabling faster inference. This advantage is likely due to ITS providing exact samples from the prior, whereas ULA produces only approximate, biased samples.
SE did not outperform MLE and cannot be justified in this setting given its computational overhead. Alternative approaches, such as different annealing schedules or the use of autoMALA with bias correction and temperature-adaptive step size tuning, could improve performance, but they are not worth the added complexity, hyperparameter tuning, and extended training time.
Table 2: Effectively unbiased FID and KID scores (lower is better) for CIFAR-10.
# 5 Future work
# 5.1 Hardware acceleration
Computational benchmarks are provided in Sec. A.2. The univariate and localized nature of spline interpolations complicates their parallelization on GPUs, as noted by Yang and Wang [2024]. Additionally, the search operations in Eqs. 13, 24, and 77 were parallelized on the CPU across samples, even though the remainder of T-KAM was executed on the GPU. This design choice stems from the fact that GPUs are generally ill-suited for parallelized search algorithms, which often rely on conditional logic. Parallelizing conditionals can lead to thread divergence, making CPUs more efficient for these operations.
Figure 9: Generated samples from T-KAM trained on SVHN using a mixture prior.
While ITS enables fast inference and scales when the prior is a mixture model, T-KAM’s training with ULA does not scale efficiently on GPUs due to the iterative, gradient-based nature of ULA.
Nonetheless, the unique advantages of T-KAM justify continued research and development. Notably, Zetta [2024] introduces the Learnable Function Unit (LFU), a processor designed for efficient evaluation and differentiation of univariate, nonlinear functions in parallel, while offering precise data flow control and high throughput.
The LFU presents a promising avenue for more efficient implementations of ULA, and improved posterior sampling, potentially including bias correction and adaptive step size tuning with autoMALA, (Sec. A.7.5). Even without annealing, replacing ULA with autoMALA may yield a more robust probabilistic model with stronger theoretical guarantees.
Overall, T-KAM may prove more scalable than conventional generative models, owing to its reliance on finite representations of continuous functions and its compatibility with the LFU.
# 5.2 Text-to-text, classification, and multi-modal generation
Pang and Wu [2021] provided a robust framework for high-quality text-to-text generation, classification, and semi-supervised learning, while Yuan et al. [2024] demonstrated how TKAM might be adapted into a multi-modal generative model. This holds promise for future investigations and industry adoption, especially when the hardware of Zetta [2024] becomes available. However, the symbol-vector coupling approach of Pang and Wu [2021] must first be adapted to remain compatible with ITS.
# 5.3 Improved sampling for multimodal distributions
Despite its theoretical guarantees, annealing is difficult to justify given its computational cost and limited practical benefit. Future work could explore adaptations of the hierarchical EBMs proposed by Cui and Han [2024], without sacrificing the interpretability or efficiency of our model.
# References
Jeffrey Annis, Nathan J. Evans, Brent J. Miller, and Thomas J. Palmeri. Thermodynamic integration and steppingstone sampling methods for estimating bayes factors: A tutorial. Journal of Mathematical Psychology, 89:67–86, April 2019. doi: 10.1016/j. jmp.2019.01.005. URL https://osf.io/jpnb4. Epub 2019 Feb 13.
Leonard Bereska and Efstratios Gavves. Mechanistic interpretability for ai safety – a review, 2024. URL https://arxiv.org/abs/2404.14082.
Mikołaj Bin´kowski, Danica J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans, 2021.
Miguel Biron-Lattes, Nikola Surjanovic, Saifuddin Syed, Trevor Campbell, and Alexandre Bouchard
Côté. automala: Locally adaptive metropolisadjusted langevin algorithm, 2024. URL https: //arxiv.org/abs/2310.16782.
Pakshal Bohra, Joaquim Campos, Harshit Gupta, Shayan Aziznejad, and Michael Unser. Learning activation functions in deep (spline) neural networks. IEEE Open Journal of Signal Processing, 1:295– 309, 2020. doi: 10.1109/OJSP.2020.3039379.
Zavareh Bozorgasl and Hao Chen. Wav-kan: Wavelet kolmogorov-arnold networks, 2024. URL https: //arxiv.org/abs/2405.12832.
Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. Chapman and Hall/CRC, May 2011. ISBN 9780429138508. doi: 10.1201/b10905. URL http://dx.doi.org/10.1201/b10905.
Ben Calderhead and Mark Girolami. Estimating bayes factors via thermodynamic integration and population mcmc. Computational Statistics & Data Analysis, 53(12):4028–4045, 2009. ISSN 0167- 9473. doi: https://doi.org/10.1016/j.csda.2009.07. 025. URL https://www.sciencedirect.com/ science/article/pii/S0167947309002722.
Min Jin Chong and David Forsyth. Effectively unbiased fid and inception score and where to find them, 2020.
Jiali Cui and Tian Han. Learning latent space hierarchical ebm diffusion models, 2024. URL https: //arxiv.org/abs/2405.13910.
Li Deng. The mnist database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
Luc Devroye. Chapter 4 nonuniform random variate generation. In Shane G. Henderson and Barry L. Nelson, editors, Simulation, volume 13 of Handbooks in Operations Research and Management Science, pages 83–121. Elsevier, 2006. doi: https://doi.org/10.1016/S0927-0507(06)13004-2. URL https://www.sciencedirect.com/ science/article/pii/S0927050706130042.
Randal Douc, Olivier Cappé, and Eric Moulines. Comparison of resampling schemes for particle filtering, 2005. URL https://arxiv.org/abs/cs/ 0507025.
Simon Duane, A.D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid monte carlo. Physics Letters B, 195(2):216–222, 1987. ISSN 0370-2693. doi: https://doi.org/10.1016/0370-2693(87)91197-X. URL https://www.sciencedirect.com/ science/article/pii/037026938791197X.
S. Dzhenzher and A. Skopenkov. A structured proof of kolmogorov’s superposition theorem, 2022. URL https://arxiv.org/abs/2105.00408.
N. Friel and A. N. Pettitt. Marginal likelihood estimation via power posteriors. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 70(3):589–607, 2008. ISSN 13697412, 14679868. URL http://www.jstor. org/stable/20203843.
Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. Cyclical annealing schedule: A simple approach to mitigating kl vanishing, 2019. URL https://arxiv.org/ abs/1903.10145.
Alexander B. Givental, Boris A. Khesin, Jerrold E. Marsden, Alexander N. Varchenko, Victor A. Vassiliev, Oleg Ya. Viro, and Vladimir M. Zakalyukin, editors. On the representation of functions of several variables as a superposition of functions of a smaller number of variables, pages 25–46. Springer Berlin Heidelberg, Berlin, Heidelberg, 2009. ISBN 978-3-642-01742-1. doi: 10.1007/ 978-3-642-01742-1_5. URL https://doi.org/ 10.1007/978-3-642-01742-1_5.
W. K. Hastings. Monte carlo sampling methods using markov chains and their applications. Biometrika, 57(1):97–109, 04 1970. ISSN 0006-3444. doi: 10.1093/biomet/57.1.97. URL https://doi.org/ 10.1093/biomet/57.1.97.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium, 2018.
Koji Hukushima and Koji Nemoto. Exchange monte carlo method and application to spin glass simulations. Journal of the Physical Society of Japan, 65 (6):1604–1608, June 1996. ISSN 1347-4073. doi: 10.1143/jpsj.65.1604. URL http://dx.doi.org/ 10.1143/JPSJ.65.1604.
JuliaCI. Benchmarktools.jl, 2024. URL https: //juliaci.github.io/BenchmarkTools.jl/ stable/. Accessed on August 20, 2024.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. URL https://arxiv.org/abs/2001. 08361.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes, 2022. URL https://arxiv. org/abs/1312.6114. arXiv:1312.6114.
T. Kloek and H. K. van Dijk. Bayesian estimates of equation system parameters: An application of integration by monte carlo. Econometrica, 46(1): 1–19, 1978. ISSN 00129682, 14680262. URL http://www.jstor.org/stable/1913641.
Alex Krizhevsky. Learning multiple layers of features from tiny images. pages 32–33, 2009. URL https://www.cs.toronto.edu/ \~kriz/learning-features-2009-TR.pdf.
R. Larson and B.H. Edwards. Calculus of a Single Variable. Cengage Learning, 2008. ISBN 9780547209982. URL https://books.google. co.uk/books?id=gR7nGg5_9xcC.
Dirk Laurie. Calculation of gauss-kronrod quadrature rules. Math. Comput., 66:1133–1145, 07 1997. doi: 10.1090/S0025-5718-97-00861-2.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, May 28 2015. ISSN 1476-4687. doi: 10.1038/nature14539.
Ziyao Li. Kolmogorov-arnold networks are radial basis function networks, 2024. URL https://arxiv. org/abs/2405.06721.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations, 2021. URL https://arxiv.org/abs/ 2010.08895.
Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljaˇci´c, Thomas Y. Hou, and Max Tegmark. Kan: Kolmogorov-arnold networks, 2024. URL https://arxiv.org/abs/2404.19756.
E. Marinari and G. Parisi. Simulated tempering: A new monte carlo scheme. Europhysics Letters, 19(6):451, jul 1992. doi: 10.1209/0295-5075/ 19/6/002. URL https://dx.doi.org/10.1209/ 0295-5075/19/6/002.
Nicholas Metropolis and S. Ulam. The monte carlo method. Journal of the American Statistical Association, 44(247):335–341, 1949. ISSN 01621459, 1537274X. URL http://www.jstor. org/stable/2280232.
Radford M. Neal. Annealed importance sampling, 1998. URL https://arxiv.org/abs/physics/ 9803008.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011. URL http://ufldl.stanford.edu/ housenumbers/nips2011_housenumbers.pdf.
Erik Nijkamp, Mitch Hill, Tian Han, Song-Chun Zhu, and Ying Nian Wu. On the anatomy of mcmc-based maximum likelihood learning of energy-based models, 2019a. URL https://arxiv.org/abs/1903. 12370.
Erik Nijkamp, Mitch Hill, Song-Chun Zhu, and Ying Nian Wu. Learning non-convergent nonpersistent short-run mcmc toward energy-based model, 2019b. URL https://arxiv.org/abs/ 1904.09770.
Bo Pang and Ying Nian Wu. Latent space energybased model of symbol-vector coupling for text generation and classification, 2021. URL https: //arxiv.org/abs/2108.11556.
Gloria Chang, Fiona Aga Behram, James Huang, Charles Bai, Michael Gschwind, Anurag Gupta, Myle Ott, Anastasia Melnikov, Salvatore Candido, David Brooks, Geeta Chauhan, Benjamin Lee, Hsien-Hsin S. Lee, Bugra Akyildiz, Maximilian Balandat, Joe Spisak, Ravi Jain, Mike Rabbat, and Kim Hazelwood. Sustainable ai: Environmental implications, challenges and opportunities, 2022. URL https://arxiv.org/abs/2111.00364.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashionmnist: a novel image dataset for benchmarking machine learning algorithms, 2017. URL https: //arxiv.org/abs/1708.07747.
Jinfeng Xu, Zheyu Chen, Jinze Li, Shuo Yang, Wei Wang, Xiping Hu, and Edith C. H. Ngai. Fourierkangcf: Fourier kolmogorov-arnold network – an effective and efficient feature transformation for graph collaborative filtering, 2024. URL https://arxiv. org/abs/2406.01034.
Bo Pang, Tian Han, Erik Nijkamp, Song-Chun Zhu, and Ying Nian Wu. Learning latent space energybased prior model, 2020. URL https://arxiv. org/abs/2006.08205.
T. Poggio and F. Girosi. Networks for approximation and learning. Proceedings of the IEEE, 78(9): 1481–1497, 1990. doi: 10.1109/5.58326. URL https://doi.org/10.1109/5.58326.
Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: A navier-stokes informed deep learning framework for assimilating flow visualization data, 2018. URL https: //arxiv.org/abs/1808.04327.
G. O. Roberts and O. Stramer. Langevin diffusions and metropolis-hastings algorithms. Methodology and Computing in Applied Probability, 4(4): 337–357, 2002. ISSN 1573-7713. doi: 10.1023/ A:1023562417138. URL https://doi.org/10. 1023/A:1023562417138.
P. J. Rossky, J. D. Doll, and H. L. Friedman. Brownian dynamics as smart monte carlo simulation. The Journal of Chemical Physics, 69(10):4628–4633, 11 1978. ISSN 0021-9606. doi: 10.1063/1.436415. URL https://doi.org/10.1063/1.436415.
Sidharth SS, Keerthana AR, Gokul R, and Anas KP. Chebyshev polynomial-based kolmogorov-arnold networks: An efficient architecture for nonlinear function approximation, 2024. URL https:// arxiv.org/abs/2405.07200.
Robert Swendsen and Jian-Sheng Wang. Replica monte carlo simulation of spin-glasses. Physical review letters, 57:2607–2609, 12 1986. doi: 10.1103/PhysRevLett.57.2607.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision, 2015.
Carole-Jean Wu, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng,
Xingyi Yang and Xinchao Wang. Kolmogorov-arnold transformer, 2024. URL https://arxiv.org/ abs/2409.10594.
Shiyu Yuan, Jiali Cui, Hanao Li, and Tian Han. Learning multimodal latent generative models with energy-based prior, 2024. URL https://arxiv. org/abs/2409.19862.
Zetta. Zetta, the next paradigm of sustainable computing. https://zettalaboratories.com/ litepaper, August 2024. Zetta introduces a polymorphic computing chip technology, achieving up to 27.6x efficiency gains over the H100 GPUs through dynamically reconfigurable hardware that adapts to diverse AI models via software configuration.
Xinwei Zhang, Zhiqiang Tan, and Zhijian Ou. Persistently trained, diffusion-assisted energy-based models, 2023. URL https://arxiv.org/abs/2304. 10707.
Song Chun Zhu and D. Mumford. Grade: Gibbs reaction and diffusion equations. In Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pages 847–854, 1998. doi: 10.1109/ICCV.1998.710816.
# A Appendix
# A.1 Codebase
The study was implemented in Julia at https://github.com/exa-laboratories/T-KAM.
# A.2 Computational benchmarks
The following benchmarks were collected for FP32 on a GPU (NVIDIA GeForce GTX 1650) with Julia’s BenchmarkTools package, provided by JuliaCI [2024]. The index search operations in Eqs. 24, 13, & 77 were parallelized across 20 threads on a CPU (i7-9750H $@$ 2.60GHz). They reflect the computational cost of initializing T-KAM and performing a single reverse autodifferentiation of its loss functions, (Eqs. 16 and 34).
Figure 11: Computational benchmarks for varying latent dimensions, $n _ { z }$ , while adhering to KART with MLE / IS. The latent prior had an input size of $n _ { z }$ and an output size of $2 n _ { z } + 1$ . The generator received inputs of size $2 n _ { z } + 1$ , with a hidden size of $4 n _ { z } + 2$ and an output size of $2 8 \times 2 8$ . All other hyperparameters were kept consistent with Tab. 4.
Figure 12: Computational benchmarks for varying amounts of ULA steps, $N _ { t }$ , while adhering to KART with SE / ULA. All other hyperparameters were kept consistent with Tab. 4 using the mixture univariate prior model.
# A.3 Sustainability
Wu et al. [2022] showed that scaling outpaces advances in system hardware, and that seemingly promising approaches to sustainable scaling, such as training large, sparsely-activated neural networks or using data centers powered by carbon-free energy, fall short as realistic solutions. They also highlighted that the share of energy footprint to support Meta’s RM1 (language model) followed a 31:29:40 ratio in 2021, dedicated to data processing, experimentation/training, and inference, respectively. This insight guides our prioritization of fast inference over training:
# 1. Experimentation:
• T-KAM is expected to require less hyperparameter optimization, as layer configurations can be determined by KART.
• Given T-KAM’s function-based interpretability, we claim that task knowledge can often inform hyperparameters for the bases used in KANs.
# 2. Training:
• Three training approaches are proposed in this study. The first approach, MLE / IS, is efficient and can be applied when prior-posterior alignment is facilitated by existing task knowledge before training.
• The second approach, MLE / ULA, relies on short-run, unadjusted MCMC to maintain computational efficiency during training.
• Once trained, latent priors can be visualized and recovered, potentially providing insight into the latent space and faciliating transfer learning to reduce training time. Otherwise, a trained prior model can also be extracted and used to train another generator model.
# 3. Inference:
• Fast yet high-quality inference is facilitated in the baseline model by its use of inverse transform sampling (ITS) from the latent prior, made feasible due to the continuity-preserving, univariate, and finite-domain nature of the prior’s components. This is scalable when using the mixture model in Eq. 12 to facilitate inter-dimensional prior relationships.
• Both MLE / ULA and SE / ULA can be conducted with unadjusted Langevin sampling from a deepened prior to ensure that inference speed is always preserved post-training. Parallel Tempering is only used for posterior sampling during training.
• Training with regularization for both the prior and generator enables pruning or size reduction, following the scheme proposed by Liu et al. [2024] for KANs. This allows further compression beyond the baseline structure defined by KART.
# A.4 Guidance on tempering hyperparameters
Table 3: Tempering hyperparameters.
# A.5 Hyperparameters and training details
Table 4: Hyperparameters for Sec. 4.1.
Table 5: Hyperparameters for Sec. 4.2.
# A.5.1 SVHN & CIFAR-10 Architectures
Table 7: Dense Chebyshev KAN architecture for mixture prior model
sizes $\ O _ { * }$ [100,201]
Table 8: Dense Chebyshev KAN architecture for deep prior model
sizes $\ c =$ [100,200,200,1]
Table 9: CNN Generator Architecture for SVHN
Config: kernel ${ = } 4$ , strides $_ { , = 1 / 2 / 2 / 2 }$ , paddings $_ { \cdot = 0 / 1 / 1 / 1 }$ , activation=LeakyReLU, batchnorm=False
Table 10: CNN Generator Architecture for CIFAR-10
Config: kerne $\scriptstyle - 8 / 4 / 4 / 3$ , strides $_ { , = 1 / 2 / 2 / 1 }$ , paddings $_ { = 0 / 1 / 1 / 1 }$ , activation=LeakyReLU, batchnorm=False
# A.6 Derivations and other identities
# A.6.1 Useful proof
The following short proof for a learning gradient with respect to an arbitrary set of parameters, $\pmb \theta$ , proves useful for other derivations in this study:
$$
\begin{array} { r } { \mathbb { E } _ { P ( \pmb { x } | \pmb { \theta } ) } \left[ \nabla _ { \pmb { \theta } } \log P ( \pmb { x } \mid \pmb { \theta } ) \right] = \mathbb { E } _ { P ( \pmb { x } \mid \pmb { \theta } ) } \left[ \frac { \nabla _ { \pmb { \theta } } P ( \pmb { x } \mid \pmb { \theta } ) } { P ( \pmb { x } \mid \pmb { \theta } ) } \right] = \int _ { \pmb { \chi } } \frac { \nabla _ { \pmb { \theta } } P ( \pmb { x } \mid \pmb { \theta } ) } { P ( \pmb { x } \mid \pmb { \theta } ) } \ d P ( \pmb { x } \mid \pmb { \theta } ) } \\ { = \int _ { \pmb { \chi } } \nabla _ { \pmb { \theta } } P ( \pmb { x } \mid \pmb { \theta } ) d \pmb { x } = \nabla _ { \pmb { \theta } } \int _ { \pmb { \chi } } P ( \pmb { x } \mid \pmb { \theta } ) d \pmb { x } = \nabla _ { \pmb { \theta } } 1 = 0 . } \end{array}
$$
# A.6.2 Marginal likelihood
$$
\begin{array} { r l } & { \log \Big ( P ( \pmb { x } ^ { ( b ) } \mid f , \pmb { \Phi } ) \Big ) = \mathbb { E } _ { P ( z \mid \pmb { x } ^ { ( b ) } , f , \pmb { \Phi } ) } \left[ \log \Big ( P ( \pmb { x } ^ { ( b ) } , z \mid f , \pmb { \Phi } ) \Big ) \right] } \\ & { \qquad = \mathbb { E } _ { P ( z \mid \pmb { x } ^ { ( b ) } , f , \pmb { \Phi } ) } \left[ \log \Big ( P \left( z \mid f \right) \cdot P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \Big ) \right] } \\ & { \qquad = \mathbb { E } _ { P ( z \mid \pmb { x } ^ { ( b ) } , f , \pmb { \Phi } ) } \left[ \log \big ( P \left( z \mid f \right) \big ) + \log \Big ( P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \Big ) \right] . } \end{array}
$$
# A.6.3 Maximum likelihood learning gradient
This proof demonstrates that the learning gradient with respect to an arbitrary set of parameters, $\theta$ , is independent of the sampling procedures in Sec. 2.6:
$$
\begin{array} { r l } & { \mathbb { E } _ { P ( z | x ) } \left[ \nabla _ { \theta } \log P ( x , z ) \right] = \mathbb { E } _ { P ( z | x ) } \left[ \nabla _ { \theta } \log P ( z \mid x ) + \nabla _ { \theta } \log P ( x ) \right] } \\ & { \qquad = \mathbb { E } _ { P ( z | x ) } \left[ \nabla _ { \theta } \log P ( z \mid x ) \right] + \mathbb { E } _ { P ( z | x ) } \left[ \nabla _ { \theta } \log P ( x ) \right] } \\ & { \qquad = 0 + \nabla _ { \theta } \log P ( x ) , } \end{array}
$$
where Eq. 40 has been used to zero the expected posterior. This allows gradient flow to be disregarded in the sampling procedures of Sec. 2.6 as shown in Eq. 16.
# A.6.4 Contrastive divergence learning gradient for log-prior
The learning gradient attributed to the prior can be derived into a contrastive divergence framework, which is more typical of energy-based modeling literature. The prior model is solely dependent on $f$ in Eq. 4. We can rewrite Eq. 17 as:
$$
\begin{array} { r } { \triangledown _ { f } \log \left( P \left( z \mid f \right) \right) = \nabla _ { f } \underbrace { \vphantom { \sum _ { f } \sum _ { f } } 2 n _ { z } } _ { q = 1 } \sum _ { p = 1 } ^ { n _ { z } + 1 } \log \left( \pi _ { q , p } ( z _ { q , p } ) \right) = \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) + \log \left( \pi _ { 0 } ( z _ { q , p } ) \right) - \log ( n _ { z } + 1 ) , } \end{array}
$$
where $\begin{array} { r } { Z _ { q , p } = \int _ { \mathcal Z } \exp ( f _ { q , p } ( z _ { p } ) ) d \pi _ { 0 } ( z _ { p } ) } \end{array}$ is the normalization constant. We can simplify the constant using Eq. 40:
$$
\begin{array} { r } { \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \log P \left( z \mid f \right) \Bigg ] = \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) + \log \left( \pi _ { 0 } ( z _ { q , p } ) \right) - \log \left( Z _ { q , p } \right) \Bigg ] = } \\ { = \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \Bigg ] - \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } \log \left( Z _ { q , p } \right) = 0 } \\ { . . . \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } \log \left( Z _ { q , p } \right) = \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \Bigg ] } \\ { . . . \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } } \sum _ { p = 1 } ^ { \infty } \log \left( Z _ { q , p } \right) = \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \Bigg ] } \end{array}
$$
Therefore the log-prior gradient of Eq. 43 can be evaluated without quadrature normalization as:
$$
\nabla _ { f } \log P \left( z \mid f \right) = \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) - \mathbb { E } _ { P \left( z \mid f \right) } \left[ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } ( z _ { q , p } ) \right] ,
$$
and the learning gradient attributed to the prior is finalized into a contrastive divergence format:
$$
\begin{array} { r } { \mathbb { E } _ { P ( z | x , f , \Phi ) } \Bigg [ \nabla _ { f } \left[ \log \ P \left( z \mid f \right) \right] \Bigg ] = \mathbb { E } _ { P ( z | x , f , \Phi ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } \big ( z _ { q , p } \big ) \Bigg ] } \\ { - \mathbb { E } _ { P ( z | f ) } \Bigg [ \nabla _ { f } \sum _ { q = 1 } ^ { 2 n _ { z } + 1 } \sum _ { p = 1 } ^ { n _ { z } } f _ { q , p } \big ( z _ { q , p } \big ) \Bigg ] . } \end{array}
$$
# A.6.5 Deriving the Thermodynamic Integral
Starting from the power posterior:
$$
P ( z \mid x ^ { ( b ) } , f , \Phi , t ) = { \frac { P \left( x ^ { ( b ) } \mid z , \Phi \right) ^ { t } P ( z \mid f ) } { Z _ { t } } }
$$
The partition function $\scriptstyle { Z _ { t } }$ is given by:
$$
Z _ { t } = \mathbb { E } _ { P ( z \mid f ) } \left[ P \left( { \pmb x } ^ { ( b ) } \mid z , { \pmb \Phi } \right) ^ { t } \right]
$$
Now, differentiating the log-partition function with respect to $t$ :
$$
\begin{array} { l } { \displaystyle \frac { \partial } { \partial t } \log ( Z _ { t } ) = \frac { 1 } { Z _ { t } } \frac { \partial } { \partial t } Z _ { t } = \frac { 1 } { Z _ { t } } \mathbb { E } _ { P ( z \mid f ) } \left[ \frac { \partial } { \partial t } P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) ^ { t } \right] } \\ { \displaystyle = \frac { 1 } { Z _ { t } } \mathbb { E } _ { P ( z \mid f ) } \left[ \log P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \cdot P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) ^ { t } \right] } \\ { \displaystyle = \mathbb { E } _ { P ( z \mid \pmb { x } ^ { ( b ) } , \pmb { f } , \pmb { \Phi } , t ) } \left[ \log P \left( \pmb { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \right] } \end{array}
$$
Integrating both sides from 0 to 1:
$$
\int _ { 0 } ^ { 1 } \frac { \partial } { \partial t } \log ( Z _ { t } ) d t = \int _ { 0 } ^ { 1 } \mathbb { E } _ { P ( z | { \mathbf { x } } ^ { ( b ) } , { \pmb { f } } , \Phi , t ) } \left[ \log P \left( { \pmb { x } } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \right] d t
$$
Using the fundamental theorem of calculus, presented by Larson and Edwards [2008]:
$$
\int _ { 0 } ^ { 1 } \frac { \partial } { \partial t } \log ( Z _ { t } ) d t = \log ( Z _ { t = 1 } ) - \log ( Z _ { t = 0 } ) = \int _ { 0 } ^ { 1 } \mathbb { E } _ { P ( z | \mathbf { x } ^ { ( b ) } , \pm , \pm , \pm ) } \left[ \log P \left( \mathbf { x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \right] d t .
$$
The limits of the integrals can be expressed as:
$$
\begin{array} { r l } & { \log ( Z _ { t = 0 } ) = \log \left( \mathbb { E } _ { P ( z \mid f ) } \left[ P \left( { \pmb x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) ^ { 0 } \right] \right) = \log ( 1 ) = 0 } \\ & { \log ( Z _ { t = 1 } ) = \log \left( \mathbb { E } _ { P ( z \mid { \pmb x } ^ { ( b ) } , { \pmb f } , \pmb { \Phi } ) } \left[ \log P \left( { \pmb x } ^ { ( b ) } \mid z , \pmb { \Phi } \right) \right] \right) = \log P ( { \pmb x } ^ { ( b ) } \mid { \pmb f } , \pmb { \Phi } ) } \end{array}
$$
Therefore, we obtain the log-marginal likelihood expressed as the Thermodynamic Integral:
$$
\log ( Z _ { t = 1 } ) - \log ( Z _ { t = 0 } ) = \log P ( { \pmb x } ^ { ( b ) } \mid f , \Phi ) = \int _ { 0 } ^ { 1 } \mathbb { E } _ { P ( z \mid { \pmb x } ^ { ( b ) } , { \pmb f } , \Phi , t ) } \left[ \log P \left( { \pmb x } ^ { ( b ) } \mid z , \Phi \right) \right] d t .
$$
# A.6.6 Learning gradient for discretized Thermodynamic Integral and power posterior partition function
We can derive the learning gradient of the discretized Thermodynamic Integral in Eq. 32 by reducing it down into a sum of normalizing constants. Starting from the definitions attributed to the KL divergence terms:
$$
\begin{array} { r l } & { D _ { \mathrm { K L } } ( P _ { t _ { k - 1 } } \| P _ { t _ { k } } ) = \mathbb { E } _ { P ( z | x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) } \bigg [ \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) - \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) \bigg ] } \\ & { D _ { \mathrm { K L } } ( P _ { t _ { k } } \| P _ { t _ { k - 1 } } ) = \mathbb { E } _ { P ( z | x ^ { ( b ) } , f , \Phi , t _ { k } ) } \bigg [ \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) - \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) \bigg ] } \\ & { \qquad = - \mathbb { E } _ { P ( z | x ^ { ( b ) } , f , \Phi , t _ { k } ) } \bigg [ \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) - \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) \bigg ] } \end{array}
$$
We can leverage Eq. 30 to quantify the log-difference in terms of the change in likelihood contribution plus some normalization constant:
$$
\log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) - \log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) = - \Delta t _ { k } \cdot \log P ( x ^ { ( b ) } \mid z , \Phi ) + \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) .
$$
The definitions in Eq. 33 can be used to express the KL divergences in terms of the expected likelihoods:
$$
\begin{array} { r l } { \cdot \sum _ { \mathbf { R } \mathbf { L } } ( P _ { t _ { k - 1 } } \| P _ { t _ { k } } ) = - \Delta t _ { k } \cdot \mathbb { E } _ { P ( z | x ^ { ( b ) } , f , \Phi , t _ { k - 1 } ) } \Big [ \log P ( x ^ { ( b ) } \mid z , \Phi ) \Big ] + \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) } & { } \\ { = - \Delta t _ { k } \cdot E _ { k - 1 } + \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) } & { ( } \\ { D _ { \mathbf { k } \mathbf { L } } ( P _ { t _ { k } } \| P _ { t _ { k - 1 } } ) = \Delta t _ { k } \cdot \mathbb { E } _ { P ( z | x ^ { ( b ) } , f , \Phi , t _ { k } ) } \Big [ \log P ( x ^ { ( b ) } \mid z , \Phi ) \Big ] - \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) } & { } \\ { = \Delta t _ { k } \cdot E _ { k } - \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) } & { ( } \end{array}
$$
Substituting back into Eq. 32, we eliminate the expected likelihood terms:
$$
\frac { 1 } { 2 } \sum _ { k = 1 } ^ { N _ { t } } D _ { \mathrm { K L } } ( P _ { t _ { k - 1 } } | | P _ { t _ { k } } ) - D _ { \mathrm { K L } } ( P _ { t _ { k } } | | P _ { t _ { k - 1 } } ) = - \frac { 1 } { 2 } \sum _ { k = 1 } ^ { N _ { t } } \Delta t _ { k } ( E _ { k - 1 } + E _ { k } ) - 2 \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right)
$$
$$
\begin{array} { c } { { { \displaystyle \cdot \log \left( P ( { \pmb x } ^ { ( b ) } \mid f , \Phi ) \right) = \frac 1 2 \sum _ { k = 1 } ^ { N _ { t } } \Delta t _ { k } ( E _ { k - 1 } + E _ { k } ) + \frac 1 2 \sum _ { k = 1 } ^ { N _ { t } } D _ { \mathrm { K L } } ( P _ { t _ { k - 1 } } | | P _ { t _ { k } } ) - D _ { \mathrm { K L } } ( P _ { t _ { k } } | | P _ { t _ { k - 1 } } ) } } } \\ { { { \mathrm { } = \displaystyle \frac 1 2 \sum _ { k = 1 } ^ { N _ { t } } \Delta t _ { k } ( E _ { k - 1 } + E _ { k } ) - \left[ \frac 1 2 \sum _ { k = 1 } ^ { N _ { t } } \Delta t _ { k } ( E _ { k - 1 } + E _ { k } ) - 2 \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) \right] } } } \\ { { { \mathrm { } = \displaystyle \sum _ { k = 1 } ^ { N _ { t } } \log \left( \frac { Z _ { t _ { k } } } { Z _ { t _ { k - 1 } } } \right) = \sum _ { k = 1 } ^ { N _ { t } } \log ( Z _ { t _ { k } } ) - \log ( Z _ { t _ { k - 1 } } ) } } } \end{array}
$$
Following a similar method to Sec. A.6.4 we can use Eq. 40 to express the learning gradient attributed to each normalization constant in a simpler manner:
$$
\begin{array} { r l } & { \quad \mathbb { E } _ { P ( z | { \mathbf x } ^ { ( b ) } , f , \Phi , t _ { k } ) } \left[ \nabla _ { f , \Phi } \log P ( z \mid { \mathbf x } ^ { ( b ) } , f , \Phi , t _ { k } ) \right] } \\ & { = \mathbb { E } _ { P ( z | { \mathbf x } ^ { ( b ) } , f , \Phi , t _ { k } ) } \left[ t _ { k } \cdot \nabla _ { \Phi } \log P ( { \mathbf x } ^ { ( b ) } \mid z , \Phi ) + \nabla _ { f } \log P ( z \mid f ) \right] - \nabla _ { f , \Phi } \log ( Z _ { t _ { k } } ) = 0 } \end{array}
$$
$$
\therefore \nabla _ { f , \Phi } \log ( Z _ { t _ { k } } ) = \mathbb { E } _ { P ( z | \mathbf { x } ^ { ( b ) } , f , \Phi , t _ { k } ) } \Big [ t _ { k } \cdot \nabla _ { \Phi } \log P ( \mathbf { x } ^ { ( b ) } \mid z , \Phi ) + \nabla _ { f } \log P ( z \mid f ) \Big ]
$$
Telescoping the sum in Eq. 58, and noting that $k = 1$ corresponds to the power posterior equaling the prior, and $k = N _ { t }$ to the posterior, Eq. 58 yields T-KAM’s learning gradient:
$$
\begin{array} { r l } { \nabla _ { f , \Phi } \log \Big ( P ( \boldsymbol { x } ^ { ( b ) } \mid f , \Phi ) \Big ) } & { = \underset { k = 1 } { \overset { N _ { t } } { \sum } } \nabla _ { f , \Phi } \log ( Z _ { t _ { k } } ) - \nabla _ { f , \Phi } \log ( Z _ { t _ { k - 1 } } ) } \\ & { = \nabla _ { f , \Phi } \log ( Z _ { t _ { N _ { t } } } ) - \nabla _ { f , \Phi } \log ( Z _ { t _ { 0 } } ) } \\ & { = \mathbb { E } _ { P ( z \mid w ^ { ( b ) } , f , \Phi ) } \Bigg [ \nabla _ { f } \log P \left( z \mid f \right) + \nabla _ { \Phi } \log P \left( x ^ { ( b ) } \mid z , \Phi \right) \Bigg ] } \\ & { \quad - \underbrace { \mathbb { E } _ { P ( z \mid f ) } \Bigg [ \nabla _ { f } \log P \left( z \mid f \right) \Bigg ] } _ { = 0 ( \mathbb { E } _ { \Phi } + 4 0 ) } . } \end{array}
$$
# A.7 Sampling theory
This section provides more detail regarding T-KAM’s sampling procedures.
# A.7.1 Inverse transform sampling
We aim to draw samples from the component distribution $\pi _ { q , p }$ defined in Eq. 4. Inverse transform sampling (ITS), as described by Devroye [2006], is a well-established method for generating highquality samples. However, its application in high-dimensional settings is often impractical, especially in energy-based models with intractable partition functions.
T-KAM’s unique properties, however, make it feasible. Specifically, the univariate and continuous nature of the prior components allows for efficient quadrature-based integral approximations.
# Quadrature Integration
The spline functions $f _ { q , p }$ are univariate, continuous, and defined on bounded grids spanning their domain, making quadrature approximations convenient for integrating each component. ITS requires the inversion of the cumulative distribution function (CDF) of the component distribution, which is defined as:
$$
z _ { q , p } = F ^ { - 1 } \left( \pi _ { q , p } \right) \left( u _ { p } \right) , \quad u _ { p } \sim \mathcal { U } ( u _ { p } ; ~ 0 , 1 ) .
$$
The cumulative distribution function (CDF) for $\pi _ { q , p }$ is defined as:
$$
F _ { \pi _ { q , p } } ( z ^ { \prime } ) = \frac { \int _ { z _ { \operatorname* { m i n } } } ^ { z ^ { \prime } } \exp ( f _ { q , p } ( z ) ) ~ \pi _ { 0 } ( z ) d z } { Z _ { q , p } } , \quad Z _ { q , p } = \int _ { z _ { \operatorname* { m i n } } } ^ { z _ { \operatorname* { m a x } } } \exp ( f _ { q , p } ( z ) ) ~ \pi _ { 0 } ( z ) d z .
$$
Both integrals can be approximated using Gauss-Konrad quadrature by Laurie [1997]:
$$
\int _ { z _ { \operatorname* { m i n } } } ^ { z ^ { \prime } } G ( z ) d z \approx \sum _ { i = 1 } ^ { N _ { \mathrm { q u a d } } } w _ { i } G ( z _ { i } ^ { \mathrm { n o d e } } ) , \quad G ( z ) = \exp ( f _ { q , p } ( z ) ) \pi _ { 0 } ( z ) ,
$$
where $z _ { i } ^ { \mathrm { n o d e } }$ and $w _ { i }$ are the Gauss-Legendre quadrature nodes and weights for the interval defined by $f _ { q , p }$ ’s grid: $[ z _ { \mathrm { m i n } } , z ^ { \prime } ]$ . Gauss-Konrad nodes and weights can be obtained from existing software packages. For Gauss-Legendre they are defined on $[ - 1 , 1 ]$ , so the following affine transformation is applied to the nodes:
$$
z _ { i } ^ { \mathrm { n o d e } } = \frac { 1 } { 2 } \Big ( ( z ^ { \prime } - z _ { \mathrm { m i n } } ) \cdot z _ { i } ^ { [ - 1 , 1 ] } + ( z ^ { \prime } + z _ { \mathrm { m i n } } ) \Big ) .
$$
The corresponding weights are scaled by the Jacobian of the transformation:
$$
w _ { i } = w _ { i } ^ { [ - 1 , 1 ] } \cdot \frac { z ^ { \prime } - z _ { \mathrm { m i n } } } { 2 } .
$$
To invert the CDF, we must identify the quadrature index $k ^ { * }$ such that:
$$
k ^ { * } = \operatorname* { m i n } \left\{ j \ | \ \sum _ { i = 1 } ^ { j } w _ { i } G ( z _ { i } ^ { \mathrm { n o d e } } ) \geq Z _ { q , p } \cdot u _ { p } \right\} .
$$
# Linear Interpolation
The required sample lies within the trapezium bounded by $\left[ z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } , z _ { k ^ { * } } ^ { \mathrm { n o d e } } \right]$ . To draw the sample $z _ { q , p }$ we interpolate as follows:
$$
z _ { q , p } \approx z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } + \big ( z _ { k ^ { * } } ^ { \mathrm { n o d e } } - z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } \big ) \cdot \frac { u _ { p } - F _ { \pi _ { q , p } } \big ( z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } \big ) } { F _ { \pi _ { q , p } } \big ( z _ { k ^ { * } } ^ { \mathrm { n o d e } } \big ) - F _ { \pi _ { q , p } } \big ( z _ { k ^ { * } - 1 } ^ { \mathrm { n o d e } } \big ) } .
$$
# A.7.2 Importance Sampling
Importance Sampling (IS) is a Monte Carlo method rooted in the measure-theoretic formulation of probability. It enables the estimation of expectations with respect to one probability measure using samples drawn from another. For a detailed exposition, see Kloek and van Dijk [1978].
# The Radon-Nikodym derivative
IS involves rewriting the expectation of an arbitrary function, $\rho ( z )$ , under the target measure, $P ( z )$ , in terms of a more tractable proposal measure, $Q ( z )$ . Specifically, we use the Radon-Nikodym derivative $\textstyle { \frac { d P } { d Q } }$ to rewrite the expectation of an arbitrary function of $z$ , denoted as $\rho ( z )$ :
$$
\mathbb { E } _ { P ( z ) } [ \rho ( z ) ] = \int _ { \bar { z } } \rho ( z ) d P ( z ) = \int _ { \bar { z } } \rho ( z ) \cdot \frac { d P } { d Q } d Q ( z ) .
$$
bHetrwe,e $\textstyle { \frac { d P } { d Q } }$ haecttsaragseat awnedigphrtotphoastaldjmuesatsutrhes.coTnhtirsibruetqiuoinr eosf tehacththseamprpolepotsoaalcmcoeausnutrfeo hise adbifsfoelruetnecley $Q$ continuous with respect to the target measure $P$ , i.e. $Q ( A ) = 0$ implies $P ( A ) = 0$ , for every measurable subset $A \in { \mathcal { Z } }$ .
In the context of T-KAM, the target distribution is the posterior, $P ( z \mid x ^ { ( b ) } , f , \Phi )$ , which is proportional to the product of the likelihood and the prior:
$$
P ( z \mid x ^ { ( b ) } , f , \Phi ) \propto P ( x ^ { ( b ) } \mid z , \Phi ) \cdot P ( z \mid f ) .
$$
The exact normalization constant, (the marginal likelihood $P ( \pmb { x } ^ { ( b ) } \mid f , \pmb { \Phi } ) )$ , is intractable, but Importance Sampling circumvents the need to compute it explicitly. Instead, the expectation of $\rho ( z )$ under the posterior can be expressed as:
$$
\mathbb { E } _ { P ( z | { \pmb x } ^ { ( b ) } , { \pmb f } , { \pmb \Phi } ) } [ \rho ( z ) ] = \int _ { \bar { \mathcal { Z } } } \rho ( z ) w ( z ) d Q ( z ) ,
$$
where the importance weights $w ( z )$ are given by:
$$
w ( z ) = { \frac { P ( z \mid x ^ { ( b ) } , f , \Phi ) } { Q ( z ) } } = { \frac { P ( { \pmb x } ^ { ( b ) } \mid z , \Phi ) \cdot P ( z \mid f ) } { Q ( z ) } } .
$$
# Proposal distribution and importance weights
A practical choice for the proposal distribution $Q ( z )$ is the prior, $P ( z \mid f )$ , given the availability of draws from the prior using the method outlined in Sec. A.7.1. This simplifies the importance weights, as the prior has a tractable form and covers the support of the posterior. This yields:
$$
w ( z ^ { ( s ) } ) = P ( { \pmb x } ^ { ( b ) } \mid z ^ { ( s ) } , { \pmb \Phi } ) \propto \exp ( \log ( P ( { \pmb x } ^ { ( b ) } \mid z ^ { ( s ) } , { \pmb \Phi } ) ) ) .
$$
This reflects that the likelihood term now directly informs how much weight to assign to each sample $_ { z }$ . Intuitively, samples that better explain the observed data $\mathbf { \boldsymbol { x } } ^ { ( b ) }$ , (as measured by the likelihood), are given higher importance. Using the definition of softmax, the normalized weights are realizable as:
$$
{ \frac { w ( z ^ { ( s ) } ) } { \sum _ { r = 1 } ^ { N _ { s } } w ( z ^ { ( r ) } ) } } = { \mathrm { s o f t m a x } } _ { s } { \Big ( } \log ( P ( { \pmb x } ^ { ( b ) } \mid z ^ { ( s ) } , \pmb \Phi ) ) { \Big ) } .
$$
# Resampling
The suitability of these weights depends on how well $Q ( z )$ matches the posterior, ensuring a wellconditioned Radon-Nikodym derivative. In T-KAM, there is likely to be significant mismatch due to the complexity of the likelihood models described in Eq. 20. Consequently, the weights will exhibit high variance, resulting in many samples contributing minimally. This reduces the Effective Sample Size (ESS) and introduces bias into the estimates.
To address this, the weights are resampled before proceeding with the posterior expectation. This is accomplished by using one of the methods outlined by Douc et al. [2005], whenever the ESS falls below a certain threshold determined by Eq. 27. Resampling redistributes the sample weights, creating a new population with uniformly distributed weights while preserving the statistical properties of the original distribution. As training progresses and the prior becomes a better match for the posterior, the need for resampling decreases.
# Importance Sampling estimator
To estimate the expectation, we draw $N _ { s }$ independent samples $\{ z ^ { ( s ) } \} _ { s = 1 } ^ { N _ { s } } \sim Q ( z )$ using the procedure outlined in Sec. A.7.1 and compute a weighted sum:
$$
\mathbb { E } _ { P ( z | x ^ { ( b ) } , \mathbf { f } , \Phi ) } [ \rho ( z ) ] \approx \sum _ { s = 1 } ^ { N _ { s } } \rho ( z _ { \mathrm { r e s a m p l e d } } ^ { ( s ) } ) \cdot w _ { \mathrm { n o r m , r e s a m p l e d } } ( \bar { z } _ { \mathrm { r e s a m p l e d } } ^ { ( s ) } ) ,
$$
$$
\mathrm { w h e r e \quad } z ^ { ( s ) } = \left( \begin{array} { c } { \left( z _ { 1 , 1 } ^ { ( s ) } , z _ { 1 , 2 } ^ { ( s ) } , \cdots , z _ { 1 , n _ { z } } ^ { ( s ) } \right) } \\ { \left( z _ { 2 , 1 } ^ { ( s ) } , z _ { 2 , 2 } ^ { ( s ) } , \cdots , z _ { 2 , n _ { z } } ^ { ( s ) } \right) } \\ { \vdots } \\ { \left. z _ { 2 n _ { z } + 1 , 1 } ^ { ( s ) } , z _ { 2 n _ { z } + 1 , 2 } ^ { ( s ) } , \cdots , z _ { 2 n _ { z } + 1 , n _ { z } } ^ { ( s ) } \right. } \end{array} \right)
$$
The estimator is unbiased, implying that its expectation matches the true posterior expectation. As $N _ { s } \infty$ , the approximation converges almost surely to the true value, provided that $Q ( \bar { z } )$ sufficiently covers the posterior to ensure a well-conditioned Radon-Nikodym derivative. This estimator is used for the log-marginal likelihood and its gradient in Eqs. 15 & 16, since it provides a means of estimating expectations with respect to the posterior distribution.
# A.7.3 Residual resampling
In this study, we adopted residual resampling to redraw weights. This redistributes the population of latent samples according to the normalized weights of Sec. A.7.2. The procedure is only conducted when Eq. 27 regarding ESS is satisfied:
1. Integer replication: The sample, $s$ , is replicated $r _ { s }$ times, where:
$$
r _ { s } = \Big \lfloor N _ { s } \cdot w _ { \mathrm { n o r m } } \big ( z ^ { ( s ) } \big ) \Big \rfloor
$$
2. Residual weights: The total number of replicated samples after the previous stage will be $\sum _ { s = 1 } ^ { N _ { s } } r _ { s }$ . Therefore, $\begin{array} { r } { N _ { s } - \sum _ { s = 1 } ^ { N _ { s } } r _ { s } } \end{array}$ remain, which must be resampled using the residuals:
$$
w _ { \mathrm { r e s i d u a l } } \big ( z ^ { ( s ) } \big ) = w _ { \mathrm { n o r m } } \big ( z ^ { ( s ) } \big ) \cdot ( N _ { s } - r _ { s } )
$$
$$
w _ { \mathrm { n o r m , \ r e s i d u a l } } \bigl ( z ^ { ( s ) } \bigr ) = \frac { w _ { \mathrm { r e s i d u a l } } \bigl ( z ^ { ( s ) } \bigr ) } { \sum _ { s = 1 } ^ { N _ { s } } w _ { \mathrm { r e s i d u a l } } \bigl ( z ^ { ( s ) } \bigr ) }
$$
If a sample has been replicated at the previous stage, then its corresponding $w _ { \mathrm { n o r m } }$ , residual will be reduced, leading to a lower probability of it being resampled again.
3. Resample: The remaining samples are drawn with multinomial resampling, based on the cumulative distribution function of the residuals:
$$
k ^ { * } = \operatorname* { m i n } \left\{ j \big | \sum _ { s = 1 } ^ { j } w _ { \mathrm { n o r m , r e s i d u a l } } \big ( z ^ { ( s ) } \big ) \geq u _ { i } \right\} \quad \mathrm { w h e r e } \quad u _ { i } \sim \mathcal { U } ( [ 0 , 1 ] ) .
$$
where $k ^ { * }$ represents the index of the sample to keep.
# A.7.4 Metropolis-adjusted Langevin algorithm (MALA)
The Metropolis-Adjusted Langevin Algorithm (MALA), presented by Rossky et al. [1978] and Zhu and Mumford [1998], is a Markov Chain Monte Carlo (MCMC) method designed to sample from complex distributions by leveraging gradient information. It constructs a proposal mechanism that efficiently explores the target measure by combining stochastic perturbations with information from the target’s local geometry to propose local moves, thereby refining the standard Metropolis-Hastings (MH) algorithm, outlined by Hastings [1970].
MALA operates on the target (power) posterior $P ( z \mid \mathbf { \delta } x ^ { ( b ) } , f , \Phi , t _ { k } )$ . The algorithm constructs a sequence of random variables $\{ \boldsymbol { z } ^ { ( i , t _ { k } ) } \} _ { i = 1 } ^ { \infty }$ that forms a Markov chain converging in distribution to the target measure. The target posterior measure, $P ( z \mid \pmb { x } ^ { ( b ) } , \pmb { f } , \pmb { \Phi } , t _ { k } )$ , is:
$$
\log P ( z \mid x ^ { ( b ) } , f , \Phi , t _ { k } ) = \log P ( x ^ { ( b ) } \mid z , \Phi ) ^ { t _ { k } } + \log P ( z \mid f ) - \mathrm { c o n s t } ,
$$
# MALA
Starting with the initial state of the first chain, $z ^ { ( 1 , t _ { 1 } ) } \sim Q ( z ) = P ( z \mid f )$ , the local state within a temperature is updated as follows:
1. Langevin diffusion: A new state $z ^ { \prime } ^ { ( i , \ t _ { k } ) }$ is proposed for a local chain, operating with a specific $t _ { k }$ , using a transition kernel inspired by (overdamped) Langevin dynamics, detailed by Roberts and Stramer [2002] and Brooks et al. [2011]:
$$
\begin{array} { r l } & { \colon \log \gamma \left( \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } \right) \propto \log P ( \boldsymbol { x } ^ { \left( b \right) } \mid \boldsymbol { z } , \Phi ) ^ { t _ { k } } + \log P ( \boldsymbol { z } \mid f ) } \\ & { \colon \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } \mid \boldsymbol { z } ^ { ( i , t _ { k } ) } \sim q \left( \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } \mid \boldsymbol { z } ^ { ( i , t _ { k } ) } \right) } \\ & { \colon q ( \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } \mid \boldsymbol { z } ^ { ( i , t _ { k } ) } ) = \mathcal { N } \Bigg ( \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } ; \boldsymbol { z } ^ { ( i , t _ { k } ) } + \frac { \eta } { 2 } \boldsymbol { C } \cdot \nabla _ { \boldsymbol { z } } \log \gamma \left( \boldsymbol { z } ^ { \prime \left( i , t _ { k } \right) } \right) , \left( \sqrt { \eta } \cdot \boldsymbol { C } ^ { \frac { 1 } { 2 } } \right) ^ { 2 } \Bigg ) , } \end{array}
$$
where $\eta > 0$ is the step size, $\boldsymbol { \nabla } _ { z } \log \gamma$ is the log-posterior gradient with respect to $\boldsymbol { z } ^ { ( i , t _ { k } ) }$ , and $C$ is a pre-conditioning, positive-definite matrix, (for example, the identity matrix, $\boldsymbol { \mathit { I } }$ ).
2. MH criterion: Once $N _ { \mathrm { u n a d j u s t e d } }$ iterations have elapsed, Metropolis-Hastings adjustments, (MH by Metropolis and Ulam [1949]), are introduced. The MH criterion for the local proposal is:
$$
r _ { \mathrm { l o c a l } } = \frac { \gamma \left( z ^ { \prime } \left( i , t _ { k } \right) \right) q \left( z ^ { ( i , t _ { k } ) } \mid z ^ { \prime } \left( i , t _ { k } \right) \right) } { \gamma \left( z ^ { ( i , t _ { k } ) } \right) q \left( z ^ { \prime } \left( i , t _ { k } \right) \mid z ^ { ( i , t _ { k } ) } \right) } ,
$$
3. Acceptance: The proposal is accepted with probability $\operatorname* { m i n } ( 1 , r _ { \mathrm { l o c a l } } )$ . If accepted, $z ^ { ( i + 1 , t _ { k } ) } =$ $\boldsymbol { z } ^ { \prime } ^ { \mathrm { ~ } ( i , \mathrm { ~ } t _ { k } ) }$ ; otherwise, the current state is retained: $\begin{array} { r } { z ^ { ( i + 1 , t _ { k } ) } = z ^ { ( i , t _ { k } ) } } \end{array}$ .
4. Global Swaps: Global swaps are proposed and accepted subject to the criterion outlined in Eq. 37
# Convergence
Under mild regularity conditions, the local Markov chain $\{ \boldsymbol { z } ^ { ( i , t _ { k } ) } \}$ generated by MALA converges in distribution to $P ( z \mid \pmb { x } ^ { ( b ) } , \pmb { f } , \pmb { \Phi } , t _ { k } )$ as $N _ { \mathrm { l o c a l } } \infty$ . The proposal mechanism in Eq. 79 ensures that the chain mixes efficiently, particularly in high-dimensional spaces, provided that the step size $\eta$ is tuned appropriately. The algorithm provides theoretical guarantees of the target, due to its satisfaction of detailed balance under Eq. 80:
$$
\gamma \left( z ^ { ( i , t _ { k } ) } \right) q \left( z ^ { \prime \ ( i , t _ { k } ) } \mid z ^ { ( i , t _ { k } ) } \right) = \gamma \left( z ^ { \prime \ ( i , t _ { k } ) } \right) q \left( z ^ { ( i , t _ { k } ) } \mid z ^ { \prime \ ( i , t _ { k } ) } \right) ,
$$
ensuring that the stationary distribution of the local chain matches the local target.
# A.7.5 autoMALA
More details on the Metropolis-adjusted Langevin Algorithm (MALA) are provided in Sec. A.7.4.
Selecting an appropriate step size for unaltered MALA presents a challenge, particularly for SE, as each power posterior exhibits distinct local curvature. To address this, we adaptively tune the step size with autoMALA introduced by Biron-Lattes et al. [2024], which reinterprets MALA as Hamiltonian Monte Carlo (HMC) with a single leapfrog step, (introduced by Duane et al. [1987]).
The algorithm operates locally on a target power posterior $P ( z \mid \pmb { x } ^ { ( b ) } , \pmb { f } , \pmb { \Phi } , t _ { k } )$ . It constructs a sequence of random variables per temperature, $\{ \boldsymbol { z } ^ { ( i , t _ { k } ) } \} _ { i = 1 } ^ { \infty }$ , that forms a Markov chain converging in distribution to the local target:
$$
\begin{array} { r } { \log \gamma \left( { z } ^ { \prime } ^ { ( i , t _ { k } ) } \right) = \log P ( { z } ^ { ( i , t _ { k } ) } \mid { x } ^ { ( b ) } , f , \Phi , t _ { k } ) \propto \log P ( { x } ^ { ( b ) } \mid { z } ^ { ( i , t _ { k } ) } , \Phi ) ^ { t _ { k } } + \log P ( { z } ^ { ( i , t _ { k } ) } \mid f ) . } \end{array}
$$
Starting with the initial state of the first chain, $z ^ { ( 1 , t _ { 1 } ) } \sim Q ( z ) = P ( z \mid f )$ , the local state within a temperature is updated as follows:
1. Acceptance thresholds: Two bounding acceptance thresholds are sampled uniformly:
$$
a , b \sim \mathcal { U } \Big ( \left( a , b \right) ; \mathbf { 0 } , \mathbf { 1 } \Big ) , \quad ( a , b ) \quad \qquad \in [ 0 , 1 ] ^ { 2 } , \quad b > a
$$
2. Mass matrix: Random pre-conditioning matrices are initialized per latent dimension:
$$
\varepsilon ^ { ( i , t _ { k } ) } \sim \mathrm { B e t a } _ { 0 1 } \left( 1 , 1 ; \left[ \frac { 1 } { 2 } , \frac { 2 } { 3 } \right] \right) ,
$$
$$
M _ { p , p } ^ { 1 / 2 } \Lt ^ { ( i , t _ { k } , q ) } = \varepsilon ^ { ( i , t _ { k } ) } \cdot \Sigma _ { p , p } ^ { - 1 / 2 } \Lt ^ { ( i , t _ { k } , q ) } + { ( 1 - \varepsilon ^ { ( i , t _ { k } ) } ) } ,
$$
where $\begin{array} { r } { \Sigma _ { p , p } ^ { ( i , t _ { k } , q ) } = \mathrm { V A R } _ { s } \left[ \bar { z } _ { q , p } ^ { ( i , t _ { k } , s ) } \right] } \end{array}$ . Here, Beta $_ { 0 1 } \left[ a , b ; c , d \right]$ denotes a zero-one-inflated Beta distribution. The parameters $a , b > 0$ represent the shape parameters of the Beta distribution, while $c , d \in [ 0 , 1 ] ^ { \frac { \mathbf { \hat { 2 } } } { 2 } }$ specify the probabilities governing the mixture. The positive definite mass matrix, $M$ , can be related to Eq. 79 as $M = { \bf \bar { C } } ^ { - 1 }$ , and is efficient to invert as a diagonal matrix.
3. Leapfrog proposal: The following transition is proposed for a local chain, $t _ { k }$ , with an adaptive step size, η(i,tk):
$$
\begin{array} { r l } & { p ^ { ( i , t _ { k } ) } \sim \mathcal { N } \left( p ; ~ 0 , M ^ { ( i , t _ { k } ) } \right) } \\ & { ~ p _ { 1 / 2 } ^ { \prime } = p ^ { ( i , t _ { k } ) } + \frac { \eta ^ { ( i , t _ { k } ) } } { 2 } \nabla _ { z } \log \gamma \left( z ^ { \prime ( i , t _ { k } ) } \right) } \\ & { z ^ { \prime ( i , t _ { k } ) } = z ^ { ( i , t _ { k } ) } + \eta ^ { ( i , t _ { k } ) } \left( M ^ { - 1 } p _ { 1 / 2 } ^ { \prime } \right) ^ { ( i , t _ { k } ) } } \\ & { ~ \hat { p } ^ { ( i , t _ { k } ) } = p _ { 1 / 2 } ^ { \prime ( i , t _ { k } ) } + \frac { \eta ^ { ( i , t _ { k } ) } } { 2 } \nabla _ { z } \log \gamma \left( z ^ { \prime ( i , t _ { k } ) } \right) } \\ & { p ^ { \prime ( i , t _ { k } ) } = - \hat { p } ^ { ( i , t _ { k } ) } , } \end{array}
$$
4. MH criterion: Once $N _ { \mathrm { u n a d j u s t e d } }$ iterations have elapsed, Metropolis-Hastings (MH) adjustments are introduced. The MH acceptance criterion for the local proposal is:
$$
r _ { \mathrm { l o c a l } } = \frac { \gamma \left( z ^ { \prime } \left( i , t _ { k } \right) \right) \mathcal { N } \left( p ^ { \prime } \left( i , t _ { k } \right) ; \mathbf { 0 } , M ^ { ( i , t _ { k } ) } \right) } { \gamma \left( z ^ { ( i , t _ { k } ) } \right) \mathcal { N } \left( p ^ { ( i , t _ { k } ) } ; \mathbf { 0 } , M ^ { ( i , t _ { k } ) } \right) } ,
$$
5. Step size adaptation: Starting with an initial estimate, ηi(nii,ttk), set as the average accepted step size from the previous training iteration, (used as a simple alternative to the round-based tuning algorithm proposed by Biron-Lattes et al. [2024]), the step size is adjusted as follows:
• If $a < r _ { \mathrm { l o c a l } } < b$ , then $\eta ^ { ( i , t _ { k } ) } = \eta _ { \mathrm { i n i t } } ^ { ( i , t _ { k } ) }$
• If $r _ { \mathrm { l o c a l } } \le a$ , then $\eta ^ { \prime } ^ { ( i , t _ { k } ) } = \eta _ { \mathrm { i n i t } } ^ { ( i , t _ { k } ) } / \Delta \eta ^ { k }$ η n(it,tk)/∆ηk, where k ← k+1 is chosen recursively until rlocal > a,
and the step size is accepted, $\eta ^ { ( i , t _ { k } ) } = \eta ^ { \prime ( i , t _ { k } ) }$
• If $r _ { \mathrm { l o c a l } } \geq b$ , then $\eta ^ { \prime } ^ { ( i , t _ { k } ) } = \eta _ { \mathrm { i n i t } } ^ { ( i , t _ { k } ) } \cdot \Delta \eta ^ { k }$ ηi(nii,ttk) · ∆ηk, where k ← k + 1 is chosen recursively until rlocal < b,
and the step size is accepted, $\overline { { { \eta } } } ^ { ( i , t _ { k } ) } = \eta ^ { \prime } { \left( i , t _ { k } \right) } / { \Delta \eta }$
Here, $\Delta \eta$ is a tunable hyperparameter that controls the rate of step size adaptation. To improve efficiency in our implementation, tuning is prematurely terminated when $\eta ^ { ( i , \hat { t _ { k } } ) }$ falls out of a userdefined range, $[ \eta _ { \mathrm { m i n } } , \eta _ { \mathrm { m a x } } ]$ . Excessively large or small step sizes are not beneficial, so preventing the algorithm from searching extreme values saves time without compromising performance.
6. Reversibility check: If the step size was modified, the reversibility of the update in Eq. 84 is verified before proceeding with MH acceptance. If ηi(nii,ttk) cannot be recovered from the reversed step-size adjustment process with the proposed state, $\left\{ z ^ { \prime \ ( i , \ t _ { k } ) } , \ p ^ { \prime \ ( i , \ t _ { k } ) } , \eta ^ { ( i , t _ { k } ) } \right\}$ , the proposal is rejected regardless of the outcome of the MH adjustment.
7. Acceptance: The proposal is accepted with probability $\operatorname* { m i n } ( 1 , r _ { \mathrm { l o c a l } } )$ . If accepted, $z ^ { ( i + 1 , t _ { k } ) } =$ $z ^ { \prime } ^ { ( i , \stackrel { - } { t } _ { k } ) }$ ; otherwise, the current state is retained: $\begin{array} { r } { z ^ { ( i + 1 , t _ { k } ) } = z ^ { ( i , t _ { k } ) } } \end{array}$ .
8. Global Swaps: Global swaps are proposed and accepted subject to the criterion outlined in Eq. 37
A.8 $2 0 \times 2 0$ image grids
A.8.1 MNIST
Figure 13: Generated MNIST, (Deng [2012]), after 2,000 parameter updates using MLE / IS adhering to KART’s structure. Uniform, lognormal, and Gaussian priors are contrasted using Radial Basis Functions, (Li [2024]). Lognormal is inverted for clarity.
Figure 14: Generated MNIST, (Deng [2012]), after 2,000 parameter updates using MLE / IS adhering to KART’s structure. Uniform, lognormal, and Gaussian priors are contrasted using Fourier Bases, (Xu et al. [2024]). Lognormal is inverted for clarity.
# A.8.2 FMNIST
Figure 15: Generated FMNIST, (Xiao et al. [2017]), after 2,000 parameter updates using MLE / IS adhering to KART’s structure. Uniform, lognormal, and Gaussian priors are contrasted using Radial Basis Functions, (Li [2024]). Lognormal is inverted for clarity.
Figure 16: Generated FMNIST, (Xiao et al. [2017]), after 2,000 parameter updates using MLE / IS adhering to KART’s structure. Uniform, lognormal, and Gaussian priors are contrasted using Fourier Bases, (Xu et al. [2024]). Lognormal is inverted for clarity.
# A.8.3 Darcy flow
Figure 17: Generated 2D Darcy flow pressures, (Li et al. [2021]), after 12,000 parameter updates using MLE / IS while adhering to KART. Radial Basis Functions, (RBFs by Li [2024]), are contrasted against Fourier bases, (FFTs by Xu et al. [2024]), using Gaussian latent priors. RBF is colored differently for clarity.
Figure 18: Generated 2D Darcy flow pressures, (Li et al. [2021]), after 12,000 parameter updates using MLE / IS while adhering to KART. Radial Basis Functions, (RBFs by Li [2024]), are contrasted against Fourier bases, (FFTs by Xu et al. [2024]), using lognormal latent priors. RBF is colored differently for clarity.
Figure 19: Generated 2D Darcy flow pressures, (Li et al. [2021]), after 12,000 parameter updates using MLE / IS while adhering to KART. Radial Basis Functions, (RBFs by Li [2024]), are contrasted against Fourier bases, (FFTs by Xu et al. [2024]), using uniform latent priors. RBF is colored differently for clarity.
Figure 20: Generated samples from T-KAM trained on SVHN using a mixture Chebyshev KAN prior, (SS et al. [2024]), after 8,000 parameter updates with ITS prior and ULA posterior sampling.
Figure 21: Generated samples from T-KAM trained on SVHN using a deep Chebyshev KAN prior, (SS et al. [2024]), after 8,000 parameter updates with ULA prior and ULA posterior sampling.
# A.8.5 CIFAR-10
Figure 22: Generated samples from T-KAM trained on CIFAR-10 using a mixture Chebyshev KAN prior, (SS et al. [2024]), after 8,000 parameter updates with ITS prior and ULA posterior sampling.
Figure 23: Generated samples from T-KAM trained on CIFAR-10 using a deep Chebyshev KAN prior, (SS et al. [2024]), after 8,000 parameter updates with ULA prior and ULA posterior sampling.
# A.9 Top 3 extracted priors
The following priors were extracted from trained T-KAM models, each corresponding to the first three components of prior $q = 1$ .
# A.9.1 MNIST RBF
Figure 24: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with RBF bases and Gaussian constrained priors.
Figure 25: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with RBF bases and lognormal constrained priors.
Figure 26: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with RBF bases and uniform constrained priors.
# A.9.2 MNIST FFT
Figure 27: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with FFT bases and Gaussian constrained priors.
Figure 28: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with FFT bases and lognormal constrained priors.
Figure 29: Three components from T-KAM’s prior after training on MNIST for 2,000 parameter updates with FFT bases and uniform constrained priors.
# A.9.3 FMNIST RBF
Figure 30: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with RBF bases and Gaussian constrained priors.
Figure 31: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with RBF bases and lognormal constrained priors.
Figure 32: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with RBF bases and uniform constrained priors.
# A.9.4 FMNIST FFT
Figure 33: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with FFT bases and Gaussian constrained priors.
Figure 34: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with FFT bases and lognormal constrained priors.
Figure 35: Three components from T-KAM’s prior after training on FMNIST for 2,000 parameter updates with FFT bases and uniform constrained priors.
# A.9.5 Darcy RBF
Figure 36: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with RBF bases and Gaussian constrained priors.
Figure 37: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with RBF bases and lognormal constrained priors.
Figure 38: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with RBF bases and uniform constrained priors.
# A.9.6 Darcy FFT
Figure 39: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with FFT bases and Gaussian constrained priors.
Figure 40: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with FFT bases and lognormal constrained priors.
Figure 41: Three components from T-KAM’s prior after training on Darcy flow pressures for 12,000 parameter updates with FFT bases and uniform constrained priors. | We adapt the Kolmogorov-Arnold Representation Theorem to generative modeling
by reinterpreting its inner functions as a Markov Kernel between probability
spaces via inverse transform sampling. We present a generative model that is
interpretable, easy to design, and efficient. Our approach couples a
Kolmogorov-Arnold Network generator with independent energy-based priors,
trained via Maximum Likelihood. Inverse sampling enables fast inference, while
prior knowledge can be incorporated before training to better align priors with
posteriors, thereby improving learning efficiency and sample quality. The
learned prior is also recoverable and visualizable post-training, offering an
empirical Bayes perspective. To address inflexibility and mitigate
prior-posterior mismatch, we introduce scalable extensions based on mixture
distributions and Langevin Monte Carlo methods, admitting a trade-off between
flexibility and training efficiency. Our contributions connect classical
representation theorems with modern probabilistic modeling, while balancing
training stability, inference speed, and the quality and diversity of
generations. | [
"cs.LG"
] |
# 1 Introduction
Every second, streams of data have been generated enormously from both human (e.g. online social networks, content creators) and machines (e.g. data logs of systems, GPS coordinates of devices). With these mountains of data stream, there are countless patterns and insight that might be inferred and utilized from the data using methods from knowledge discovery from data (KDD). Time Series Analysis is one of the KDD fields that focuses on streaming of data, which deals with modeling and discovering patterns from streams of data or time series.
VARIABLE-LENGTH SIMILAR SUBSEQUENCES INFERENCE PROBLEM: Considering one pattern occurs in one time series, we might be wondering whether the same pattern occurs in other time series with some distortion. Given two time series, the goal is to find the most similar subsequences from each time series that might have different lengths.
One of the problems that is important in time series analysis is the problem of searching similar patterns or subsequences within large sets of time series Wu et al. (2005); Rakthanmanon et al. (2012). To find similar patterns, the first step is to measure a similarity between time series. Dynamic Time Warping (DTW) Sakoe and Chiba (1978) is one of widely-used distance measures Giorgino (2009) since it can find generic and distorted patterns between time series Rakthanmanon et al. (2012). However, to the best of our knowledge, there is no existing method developed to efficiently find the most similar subsequences within two time series s.t. the subsequence in one time series might have a different length compared to the subsequence of another time series.
In this work, we proposed a generalization measure of DTW s.t. the proposed measure is able to find most similar subsequences within DTW scheme. In the case that the lengths of subsequences are the same as the original time series, the proposed method works the same as typical DTW. Additionally, in the case that the lengths of subsequences are shorter than the original time series, our proposed method provides exact solution of most similar subsequences by using more efficient computational resources than using DTW to search for the solution. The proposed method can be used for any kind of multidimensional time series.
# 2 Related work
Searching for similarity patterns in time series have been studies in literature for many years Wu et al. (2005). There are several methods that deal with different types of patterns in time series such as motifs Alaee et al. (2020); Imamura and Nakamura (2024), discords Zhu et al. (2016), clustering Holder et al. (2024) etc. The typical way of finding similarity patterns is to use some distance/correlation functions of time series such as cross-correlation Kjærgaard et al. (2013), Levenshtein distance Navarro (2001), Longest Common Subsequence (LCSS) Soleimani and Abessi (2020), Fréchet distance Driemel et al. (2016) etc. Among these measures, DTW Sakoe and Chiba (1978) is one of the widely used approach to measure a distance between two time series since it can handle distortion of similar patterns between time series Rakthanmanon et al. (2012). Several versions of DTW have been developed. The classic one is to use window constraints to limit search space (e.g. Sakoe-Chiba band) Sakoe and Chiba (1978); Geler et al. (2022). The works in Keogh and Ratanamahatana (2005); Ratanamahatana and Keogh (2005); Vlachos et al. (2003) used a LB_Keogh lower bounding to enhance DTW. The work in Wang et al. (2016) enhanced DTW to infer multiple aliments of time series using the network flow. The work in Alaee et al. (2020) can infer one-dimensional most similar subsequences using Matrix profile inference algorithm and lower/upper bounds but the window of subsequences must be similar.
Nevertheless, there is no direct version of DTW that can find the most similar multidimensional subsequences between time series where there is a difference in length both between time series and between subsequences. The obvious solution is using DTW in the brute-force way, which can provide a solution with an expensive cost.
Hence, in this work, we propose a generalized version of DTW that can handle the problem of finding the most similar subsequences between time series that have different length that can perform efficiently. The proposed algorithm provides the following new properties.
• Inferring arbitrary-length similar subsequences: our approach can infer a pair of the most similar multidimensional subsequences that can have different lengths; • Ranking top- $\mathbf { \nabla } \cdot k$ similar subsequences: our approach ranks top- $k$ of most similar subsequences.
Our proposed approach can be used for any kind of multidimensional time series.
# 3 Problem Statement
First, we define the necessary definitions.
Definition 1 (Time series and its subsequence) Given a feature space $\mathcal { F } \left( e . g . \ \mathbb { R } ^ { n } \right)$ , time series $U$ is a sequence $( u _ { 1 } , \ldots , u _ { l } )$ of length $l$ whose element $u _ { i } \in \mathcal { F }$ is a value at time step $i \in [ 1 . . l ]$ where $[ n . . m ] = \{ x \in \mathbb { Z } \mid n \leq x \leq m \}$ . Subsequence $\boldsymbol { U } [ a , b ]$ is a sequence $( u _ { a } , \ldots , u _ { b } )$ whose element $u _ { j \in [ a . . b ] }$ is an element in $U$ . Subsequence $U _ { k , \omega _ { U } } = U [ k , k + \omega _ { U } - 1 ]$ where $\omega _ { U } \in \mathbb { Z } ^ { + }$ is a time window and $k$ is a starting index. Note that subsequence $U [ a , b ]$ is also a time series and $U [ 1 , l ] = U$ .
Definition 2 (Warping path) Warping path $P$ between subsequence $U [ a , b ] = ( u _ { a } , \ldots , u _ { b } )$ and subsequence $W [ c , d ] = ( w _ { c } , \dots , w _ { d } )$ is a sequence $( p _ { 1 } , p _ { 2 } , \ldots , p _ { n } )$ whose element $p _ { k \in [ 1 . . n ] } =$ $( i _ { k } , j _ { k } ) \in [ a . . b ] \times [ c . . d ]$ . $P$ satisfies these conditions:
$( I )$ Boundary condition: $p _ { 1 } = ( a , c )$ and $p _ { n } = ( b , d )$ (2) Continuity and monotonicity condition: $p _ { k ^ { \prime } + 1 } - p _ { k ^ { \prime } } \in \{ ( 0 , 1 ) , ( 1 , 0 ) , ( 1 , 1 ) \}$ where $k ^ { \prime } \in [ 1 . . n - 1 ]$
Definition 3 (Distance between subsequences of time series) Distance between subsequence $U [ a , b ] = ( u _ { a } , . . . , u _ { b } )$ and subsequence $W [ c , d ] = ( w _ { c } , \dots , w _ { d } )$ is given by a warping path $P$ between them. If $P = ( p _ { 1 } , p _ { 2 } , \ldots , p _ { n } ) = ( ( i _ { 1 } , j _ { 1 } ) , ( i _ { 2 } , j _ { 2 } ) , \ldots , ( i _ { n } , j _ { n } ) ) ,$ , then the distance between $U [ a , b ]$ and $W [ c , d ]$ given $P$ , denoted by $D i s t S u b ( U [ a , b ] , W [ c , d ] , P )$ , is $\textstyle \sum _ { k = 1 } ^ { n } d i s t ( u _ { i _ { k } } , w _ { j _ { k } } )$ where $d i s t : \mathcal { F } \times \mathcal { F } \to \mathbb { R } _ { \geq 0 }$ is a distance function.
Definition 4 (Dynamic time warping distance between subsequences of time series) Dynamic time warping distance between subsequence $\begin{array} { r l r } { U [ a , b ] } & { { } = } & { \left( u _ { a } , \ldots , { } u _ { b } \right) } \end{array}$ and subsequence $W [ c , d ] \ = \ ( \bar { w } _ { c } , \ldots , w _ { d } )$ , denoted by $\tilde { D T W } ( U [ a , b ] , \mathbf { \bar { W } } [ c , d ] )$ , is the minimum distance between $U [ a , b ]$ and $W [ c , d ]$ . In other words, ${ \cal D } T W ( U [ a , \dot { b } ] , \dot { W } [ c , \dot { d } ] ) = \operatorname* { m i n } _ { P } { \cal D } i s t { \cal S } u b ( U [ a , b ] , W [ c , d ] , P )$
Definition 5 (Distance matrix of time series) Given time series $U = ( u _ { 1 } , \ldots , u _ { n } )$ and time series $W = ( w _ { 1 } , \dots , w _ { m } )$ , distance matrix $M _ { U , W }$ , is a $n \times m$ matrix where $M [ i , j ] = d i s t ( u _ { i } , w _ { j } )$
Definition 6 (Lower bound of DTW) Given subsequence $U _ { i , \omega _ { U } }$ , $W _ { j , \omega _ { W } }$ where $\omega _ { U } \quad \ge \quad \omega _ { W }$ , and a distance matrix $M _ { U , W }$ , lower bound of DTW between $U _ { i , \omega _ { U } }$ and $W _ { j , \omega _ { W } }$ , denoted by $\begin{array} { r } { D T W _ { L } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) , = \sum _ { k = i } ^ { i + \omega _ { U } - 1 } \operatorname* { m i n } \{ M [ k ; j , \dots , j + \omega _ { W } - 1 ] \} } \end{array}$ where $X [ a , \dotsc , b ; c , \dotsc , d ]$ is a submatrix formed by taking row a to $b$ and column c to d from matrix $X$ .
Definition 7 (Upper bound of DTW) Given subsequence $U _ { i , \omega _ { U } }$ , $W _ { j , \omega _ { W } }$ where $\begin{array} { l l l } { \omega _ { U } } & { \geq } & { \omega _ { W } } \end{array}$ , and a distance matrix $M _ { U , W }$ , upper bound of DTW between $U _ { i , \omega _ { U } }$ and $W _ { j , \omega _ { W } }$ , denoted by $\begin{array} { r } { D T W _ { u p } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) , = \sum _ { k = 0 } ^ { \omega _ { W } - 2 } M [ i + k , j + k ] + \sum _ { k = \omega _ { W } - 1 } ^ { \omega _ { U } - 1 } M [ i + k , j + \omega _ { W } - 1 ] } \end{array}$
Definition 8 (Domain of interest) Domain of interest $D _ { n , m } = ( U , W , \omega _ { U } , \omega _ { W } )$ is a tuple of time series $U = ( u _ { 1 } , \dots , u _ { n } ) , W = ( w _ { 1 } , \dots , w _ { m } )$ and time window $\omega _ { U } , \omega _ { W }$ .
Definition 9 (Dynamic time warping matrix) Given domain of interest $\begin{array} { r l } { D _ { n , m } } & { { } = } \end{array}$ $( U , W , \omega _ { U } , \omega _ { W } ) ~ = ~ D _ { 3 }$ , dynamic time warping matrix $D T W M _ { D }$ , lower bound of DTW mat $r i x \ D T W M L _ { D }$ , and upper bound of DTW matrix ${ D T W M U } _ { D }$ are $\left( n - \omega _ { U } + 1 \right) \times \left( m - \omega _ { W } + 1 \right)$ matrices where
$$
\begin{array} { r l } & { D T W M _ { D } [ i , j ] = D T W ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) } \\ & { D T W M L _ { D } [ i , j ] = D T W _ { L } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) } \\ & { D T W M U _ { D } [ i , j ] = D T W _ { u p } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) } \end{array}
$$
Now, we are ready to formalize the problem.
# Problem 1: VARIABLE-LENGTH SIMILAR SUBSEQUENCES INFERENCE PROBLEM
Next, we provide useful propositions and lemma that we use later. The details of the proofs are provided in the Appendix Section 7.
Figure 1: A high-level overview of the proposed framework. Given a pair of time series and time window $\omega _ { 1 } , \omega _ { 2 }$ , the framework infers the most similar subsequences of length $\omega _ { 1 } , \omega _ { 2 }$ respectively.
Proposition 3.1 Given subsequence $U _ { i , \omega \tau }$ , $W _ { j , \omega _ { W } }$ where $\omega _ { U } ~ \ge ~ \omega _ { W }$ , and a distance matrix $M _ { U , W } = M$ , the following inequality holds:
$$
D T W _ { L } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) \leq D T W ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } )
$$
Lemma 3.2 Given subsequence $U _ { i , \omega _ { U } }$ , $W _ { j , \omega _ { W } }$ where $\omega _ { U } \geq \omega _ { W }$ , if sequence $P = ( p _ { n } ) _ { n \in [ 0 . . ( \omega _ { U } - 1 ) ] }$ where $p _ { n } = ( i + n , j + n )$ if $n < \omega _ { W } - 1$ and $( i + n , j + \omega _ { W } - 1 )$ if ${ \mathrm { { \dot { \ell } } } n } \ge { \omega _ { W } } - 1$ , then $P$ is $a$ warping path between $U _ { i , \omega _ { U } }$ and $W _ { j , \omega _ { W } }$
Proposition 3.3 Given subsequence $U _ { i , \omega _ { U } }$ , $W _ { j , \omega _ { W } }$ where $\omega _ { U } ~ \ge ~ \omega _ { W }$ , and a distance matrix $M _ { U , W } = M$ , the following inequality holds:
$$
D T W ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } ) \leq D T W _ { u p } ( U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } )
$$
Proposition 3.4 Given domain of interest $D _ { n , m } = ( U , W , \omega _ { U } , \omega _ { W } ) = D$ and $( i , j ) \in [ 1 . . ( n -$ $\omega _ { U } \stackrel { - } { + } 1 ) ] \times [ 1 . . ( m - \omega _ { W } + 1 ) ] ,$ , $\begin{array} { r } { i f \operatorname* { m i n } _ { i ^ { \prime } , j ^ { \prime } } D T W M U _ { D } [ i ^ { \prime } , j ^ { \prime } ] < D T W M L _ { D } [ i , j } \end{array}$ ], then $( i , j ) \neq$ $a r g m i n _ { i , j } D T W M _ { D } [ i , j ]$
# Algorithm 2: VLSubsequenceInferFunction
input :Time series $U = ( u _ { 1 } , \ldots , u _ { n } )$ , $W = ( w _ { 1 } , \dots , w _ { m } )$ , time windows $\omega _ { U } , \omega _ { W }$ , and a distance function $d i s t ( \cdot , \cdot )$ . output :SolutionIdx, a set of $( a , b )$ where subsequence $U _ { a , \omega _ { U } }$ of length $\omega _ { U }$ is most similar to subsequence $W _ { b , \omega _ { W } }$ of length $\omega _ { W }$ and shortestDist, the distance between these subsequences. if $\omega _ { W } > \omega _ { U }$ then Swap $U$ and $W$ , $\omega { } _ { U }$ and $\omega _ { W }$ ; 2 Create $n \times m$ array $\mathbf { M }$ where $M [ i , j ] = d i s t ( u _ { i } , w _ { j } )$ ; 3 Create $n \times ( m - \omega _ { W } + 1 )$ array M inP ool where $M i n P o o l [ i , j ] = m i n \{ M [ i ; j , \dots , j + \omega _ { W } - 1 ] \} ;$ 4 Create $\left( n - \omega _ { U } + 1 \right) \times \left( m - \omega _ { W } + 1 \right)$ array MinP ath where Min $P a t h [ i , j ] = s u m \{ M i n P o o l [ i , \dots , i + \omega _ { U } - 1 ; j ] \}$ ; 5 Create $\left( n - \omega _ { U } + 1 \right) \times \left( m - \omega _ { W } + 1 \right)$ array MaxP ath where $\begin{array} { r } { M a x P a t h [ i , j ] = \sum _ { k = 0 } ^ { \omega _ { W } - 2 } M [ i + k , j + k ] + \sum _ { k = \omega _ { W } - 1 } ^ { \omega _ { U } - 1 } M [ i + k , j + \omega _ { W } - 1 ] ; } \end{array}$ $^ { \prime * }$ Find the minimum element in MaxP a th matrix. \*/ $\textbf { \textit { s } } M i n O f M a x P a t h = \operatorname* { m i n } \{ M a x P a t h \}$ ; /\* Strategically prune subsequence pairs using upper/lower bounds. $^ { * / }$ 7 UnsortedCandidateSolu $\mathsf { \Gamma } _ { i o n s } ^ { } = \bar { \{ } ( i , j ) \ | \ \mathscr { M } i n P a t h [ i , j ] \leq \mathscr { M } i n O f M a x { P a t h } \}$ ; 8 CandidateSolution $s = [ ( i _ { k } , j _ { k } ) \in U \ i$ nsortedCandidateSolutions] array where $M i n P a t h [ i _ { k } , j _ { k } ] \leq M i n P a t h [ i _ { k + 1 } , j _ { k + 1 } ]$ ; /\* Strategically search for optimal subsequence pairs by ranking and pruning pairs. 9 (SolutionIdx, shortestDist) $\mathbf { \Sigma } = \mathbf { \Sigma }$ F indOptimalSolutions $( M , \omega _ { U } , \omega _ { W }$ , CandidateSolutions, MinP ath); 10 Return (SolutionIdx, shortestDist);
# 4 Methods
To solve VARIABLE-LENGTH SIMILAR SUBSEQUENCES INFERENCE PROBLEM, given two multidimensional time series and time window of subsequences $\omega _ { 1 } , \omega _ { 2 }$ as inputs, Algorithm 2 can be used to find the most similar pairs of subsequences with length $\omega _ { 1 }$ and $\omega _ { 2 }$ . Figure 1 shows the overview of Algorithm 2. After getting inputs, the algorithm computes the euclidean distances between each time step of both time series; then, it computes lower- and upper- bound distances of each subsequence pair. Afterward, 1) it prunes any subsequence pairs whose lower bounds are greater than an upper bound of any other pairs since their distances cannot be the minimum. Next, 2) the algorithm strategically ranks and searches subsequence pairs by computing an exact DTW distance of the subsequence pairs with the most potential to have the smallest distance. Then, the algorithm prunes any other subsequence pairs whose lower bounds are greater than this exact DTW distance. The process continues until there is no more subsequence pairs to search for. The algorithm finally reports the most similar pairs of subsequences with length $\omega _ { 1 }$ and $\omega _ { 2 }$ . Below is the detail of Algorithm 2. We also show that Algorithm 2 always provides the solution for the Problem 1 in Theorem 4.1.
# Algorithm 3: FindOptimalSolutions
input :Distance matrix $M$ , time window $\omega _ { U }$ and $\omega _ { W }$ , array of starting indices of subsequence pairs $C$ andidateSolutions, matrix of minimum paths MinP ath. output :SolutionIdx, a set of $( a , b )$ where subsequence $U _ { a , \omega \upsilon }$ of length $\omega _ { U }$ is most similar to $W _ { b , \omega _ { W } }$ of length $\omega _ { W }$ and shortestDist, the distance between these subsequences. 1 Create Solution $I d x = \emptyset$ ; 2 shortes $D i s t = I n f$ ; $^ { \prime * }$ Find the DTW distance of a subsequence pair that is not yet pruned. \*/ 3 for $i = 1$ to CandidateSolutions do 4 $( a , b ) = C a n d i d a t e S o l u t i o n s [ i ]$ ; $/ *$ Skip the pair if its lower-bound distance is still greater than the current solution if shortes $t D i s t < M i n P a t h [ a , b ]$ then 5 Break; end 6 shortestDis $t _ { n e w } = .$ DynamicT imeW arping $( M , \omega _ { U } , \omega _ { W } , ( a , b ) )$ ; $^ { \prime * }$ Update the shortest distance solution if we find a better one. \*/ if shortestDistnew $, < s h e$ ortestDist then 7 s $\ ` o r t e s t D i s t = s h o r t e s t D i s t _ { n e w }$ ; 8 SolutionIdx.clear(); 9 SolutionIdx.add $( { \ddot { a } } , b ) )$ ; else if shortestDistnew = shortestDist then 10 SolutionIdx.add((a, b)); else 11 Continue; end end 12 Return (SolutionIdx, shortestDist);
Theorem 4.1 Algorithm 2 with time series $U = ( u _ { 1 } , \dots , u _ { n } ) , W = ( w _ { 1 } , \dots , w _ { m } )$ and time window $\omega _ { U } , \omega _ { W }$ as inputs provides an optimal solution for VARIABLE-LENGTH SIMILAR SUBSEQUENCES INFERENCE PROBLEM.
Proof Algorithm 2 exhausts all possible cases of pairs of subsequences with length $\omega _ { U }$ and $\omega _ { W }$ . However, we will show that when the algorithm prunes certain pairs of sequences; it only prunes those that cannot provide an optimal solution.
(Forward direction)
Our goal here is to find a set of pairs of subsequences with the length $\omega _ { U }$ and $\omega _ { W }$ that has the minimum DTW distance. In other words, the solution can be formalized as Soluti $\mathit { o n s } = \{ \{ U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } \} \ |$ $( i , j ) = a r g m i n _ { i , j } D T W M _ { D } [ i , j ] \}$ where $D =$ domain of interest $D _ { n , m } = ( U , W , \omega _ { U } , \omega _ { W } )$ Note that we can trivially construct Solutions from SolutionIdx, a set of starting indices of subsequence pairs. In other words, $S o l u t i o n s = \{ \{ U _ { i , \omega _ { U } } , W _ { j , \omega _ { W } } \} \mid ( i , j ) \in S o l u t i o n I d x \}$ where $S o l u t i o n I d i r = \{ ( i , j ) \mid ( i , j ) = a r g m i n _ { i , j } D T { \dot { W } } { \dot { M } } _ { D } [ { \ddot { i } } , { \dot { j } } ] \}$ .
We will prove that Algorithm 2 outputs SolutionIdx.
On line 4, we construct $M i n P a t h \ = \ D T W M L _ { D }$ . On line 5, we construct $M a x P a t h \ =$ ${ D T W M U } _ { D }$ . On line 6, we compute $\begin{array} { r } { M i n O f M a x P a t h = \operatorname* { m i n } _ { i , j } D T W M U _ { D } [ i , j ] } \end{array}$ .
It follows from Prop. 3.4 that $\{ ( i , j ) \mid M i n O f M a x P a t h < M i n P a t h [ i , j ] \} \cap S o l u t i o n I d x =$ D Assume that we start with the set of all pairs of starting indices $A l l I d x = \{ ( i , j ) \mid \exists D T W M _ { D } [ i , j ] \}$ .
On line 7, we then obtain UnsortedCandidateSolutions from AllIdx by eliminating index $( i , j ) \in A l l I d x$ if it doesn’t belong in SolutionIdx. Formally, SolutionIdx $\subseteq$ UnsortedCandidateSolutions $\mathbf { \Sigma } =$ AllIdx $\begin{array} { r l r } { - } & { { } \big \{ ( i , j ) \mathrm { ~ ~ { ~ \pi ~ } ~ } | \ } & { \operatorname* { m i n } _ { i ^ { \prime } , j ^ { \prime } } M a x P a t h [ i ^ { \prime } , j ^ { \prime } ] } \end{array}$ $\begin{array} { r l r l r } { M i n P a t h [ i , j ] \} } & { = } & { \{ ( i , j ) } & { | } & { \operatorname* { m i n } _ { i ^ { \prime } , j ^ { \prime } } M a x P a t h [ \tilde { l } ^ { \prime } , j ^ { \prime } ] } & { \geq } & { M i n \tilde { P } _ { a t h } ^ { \check { \it a t h } } [ i , j ] \} } & { = \mathrm { ~ \ ` ~ } \{ ( \dot { l } ^ { \prime } , j ) } & \quad \end{array}$ $M i n O f M a x P a t h \ge M i n P a t h [ i , j ] \}$
On line 8, we obtain CandidateSolutions by sorting UnsortedCandidateSolutions. This doesn’t dismiss any valid solutions.
On line 9, F indOptimalSolutions() iterates through (i, j) CandidateSolutions and puts $( i ^ { * } , j ^ { * } ) \in$ CandidateSolutions in SolutionIdx if $\begin{array} { r c l } { { D T W M _ { D } [ i ^ { * } , j ^ { * } ] } } & { { = } } & { { } } \end{array}$ shortestDist $\leq$ $D T W M _ { D } [ i ^ { \prime } , j ^ { \prime } ]$ for all $i ^ { \prime } , j ^ { \prime }$ and ignore the rest of $( i , j )$ if shortestDis $\dot { \mathbf { \zeta } } < M i n P a t h [ i , j ]$ .
We then return SolutionIdx and shortestDist on line 10.
(Backward direction) We will prove that Algorithm 2 provides a complete set of solutions by contradiction.
Assume that there exists $( i ^ { * } , j ^ { * } ) \notin S o l u t i o n I d x$ such that $( i ^ { * } , j ^ { * } ) = a r g m i n _ { i , j } D T W M _ { D } [ i , j ]$ Let $( i ^ { \prime } , j ^ { \prime } ) = a r g m i n _ { i ^ { \prime } , j ^ { \prime } } D T W M U _ { D } [ i ^ { \prime } , j ^ { \prime } ]$ .
It follows from the assumption that M inP ath[i∗, j∗] = DT W M LD[i∗, j∗] ≤ DT W MD[i∗, j∗] ≤ $D T W M _ { D } [ i ^ { \prime } , j ^ { \prime } ] \leq D T W M U _ { D } [ i ^ { \prime } , j ^ { \prime } ] = M i n O f \bar { M } a x P a t h$ 2
Since Min $P a t h [ i ^ { * } , j ^ { * } ] \leq M i n O f M a x P a t h$ , (i∗, j∗) ∈ UnsortedCandidateSolutions. After sorting, $( i ^ { * } , j ^ { * } ) \in C a n d i d a t e S o l u t i o n s$
On line 9, it is the case that $( i ^ { * } , j ^ { * } ) \in S o l u t i o n I d x$ since $D T W M _ { D } [ i ^ { * } , j ^ { * } ] \leq D T W M _ { D } [ i ^ { \prime } , j ^ { \prime } ]$ for all $i ^ { \prime } , j ^ { \prime }$ (from assumption). This contradicts the assumption, and, therefore, Algorithm 2 must also output $( i ^ { * } , j ^ { * } )$ .
Therefore, Algorithm 2 provides an optimal solution for the Problem 1.
# 4.1 Time Complexity
In the best case, Algorithm 2 performs within $O ( n \times m )$ time steps where ${ \mathbf { } } n , { \mathbf { } } m$ are the length of time series since it just computes ordinary DTW. For the average case, given $1 < k < n \times m$ , the algorithm performs within $O ( k \times n \times m )$ time steps since the algorithm prunes certain subsequence pairs and leaves only $k$ pairs to compute DTW (DTW uses $\omega _ { U } \times \omega _ { W }$ time steps). For real-world datasets, the similar subsequences generally have lower distance than random matching of subsequences. This makes our algorithm be able to prune most of subsequence pairs out. Hence, $k$ is typically much lower than $n \times m$ . For the worst case scenario, the algorithm performs within $O ( n ^ { 2 } m ^ { \bar { 2 } } )$ , the same as using $D T W$ to compute all subsequence pairs.
# 5 Experiments
# 5.1 Experimental setup
To determine performance of methods, there are two aspects we considered in this work: running time and correctness of inference. For the running time, we vary the length of time series and $\omega _ { 1 } , \omega _ { 2 }$ and compare running time for each method. For the correctness of inference, given a ground truth of intervals (GT) within each time series that contains the most similar subsequence (MSS), we measure the performance of methods as follows. The number of true positive cases (TP) represents intersection between the predicted and GT of MSS. The false positive (FP) occurs when a method infers that there is a position $i$ in MSS but the GT disagrees. The false negative (FN) occurs when MSS contains position $i$ w.r.t. GT but the method fails to infer $i$ as a position in MSS. The precision, recall, F1 score can be computed from these variables. We vary the level of noise within the simulation datasets to evaluate the robustness of methods. Suppose $W$ is a time series generated from the model, the
Time Series X
1.0 W 0.5 0.0
-0.5
-1.0 0 250 500 750 1000 1250 1500 1750 2000 time Time Series Y
1.5
1.0 w
0.5
0.0
-0.5
−1.0 0 250 500 750 1000 1250 1500 1750 2000 time
uniform noise time series $\mathcal { U }$ is added to $U$ with the following equation.
$$
\hat { W } = ( 1 - \gamma ) \times W + \gamma \times \mathcal { U }
$$
We vary $\gamma \in { 0 . 1 , 0 . 2 , 0 . 3 , 0 . 4 , 0 . 5 }$ for our analysis. All experiments were perform on a PC with OS: Ubuntu 22.04.4 LTS x86_64, Kernel: 6.8.0-59-generic, CPU: AMD Ryzen 7 5800X (16) $@$ 3.800GHz, Memory: 32033MiB.
# 5.2 Time series simulation
We use the moving average model with time delay 20 steps and normal $\mathcal { N } ( 0 , 4 )$ where ${ \mathcal { N } } ( \mu , \sigma ^ { 2 } )$ is a normal distribution with mean $\mu$ and variance $\sigma ^ { 2 }$ . The generated motifs are sine signals with length $l$ whose frequencies $= 1 / l \mathrm { H z }$ and amplitude $= 1$ . If applicable, we will use time window $= 6 0$ and 80. An example of simulated time series is in Fig. 2. The proposed algorithm correctly identified intervals that contain the most similar subsequences with different lengths in both time series.
# 5.3 Real world dataset
# 5.3.1 Baboon leader/follower time series
The time series of trajectories of this baboons’ coordinated movement were from GPS collars of an olive baboon (Papio anubis) troop in the wild in Mpala Research Centre, Kenya StrandburgPeshkin et al. (2015). After filtering, it consisted of 16 baboons, whose leader in this particular event had $\mathrm { I D } = 3$ . The coordination event began on Aug 2nd, 2012, at 6:00 AM and ended 10 minutes later Amornbunchornvej et al. (2018b). The ID3 initiated the coordinated movement and everyone followed it for 100 seconds; then, ID1 led the group Amornbunchornvej et al. (2018a). The Variable-Lag-Granger causal relation between time series of ID3 directions as a cause and the aggregate time series of rest of the group can be detected in this event Amornbunchornvej et al. (2021). In this work, we analyze the normalized two-dimensional time series of positions of 16 baboons with the length 600 time steps (one time step per second).
# 5.3.2 Stock prices of companies in similar sector vs those in unrelated sector
The time series of Nvidia Corp. (Nvidia), Vishay Intertechnology, Inc. (Vishay), and Tyson Foods, Inc. (Tyson) 2020 stock prices were downloaded from Yahoo Finance. In this work, we find the most similar subsequences (time window $= 9 0$ days) of the normalized time series of Nvidia, Vishay, and Tyson stock prices, each of which has 253 time steps (one time step per trading day).
Figure 3: A comparison of running time of methods vs. a) the length of time series to find the most similar subsequences and b) the window size with the fix time series length as 2000.
Figure 4: A comparison of noise level vs. a) the running time and b) F1 score of inferring correct subsequences. The time series length is 2000 time steps.
# 6 Results
# 6.1 Simulation results
There are four methods that we performed analysis. The brute force method that performs sliding window to select all possible subsequence pair w.r.t. given $\omega _ { U } , \omega _ { W }$ , then computes DTW for each pair. The Sakoe Chiba is the brute force method with Sakoe-Chiba constraint. The Strategic pruning (SP) is our approach and the last one is the SP with Sakoe-Chiba constraint. Fig. 3 a) shows the running time of methods with different lengths of time series. Our proposed method consistently used less time than others; it used around one-five of time compared against the brute-force (SP with Sakoe-Chiba constraint). The time growing rates for all methods are slightly non-linear. For Fig. 3 b), it shows that when setting the windows $\omega _ { U } = \omega _ { W }$ and vary the windows, our SP with Sakoe-Chiba constraint performed fastest. The SP performed well only when the window size is below $40 \%$ of time series length.
For sensitivity analysis, Fig. 4 a) shows the running time of methods vs. noise. When the noise level is lower, our sliding window approach used one fifth of brute-force time. However, when the noise level increased, all methods used more time to compute.
Fig. 4 b) shows that the performance of inferring the most similar subsequences of all methods dropped after the noise level reach beyond 0.25. Briefly, all methods can tolerate some level of noises but too much noise make it is impossible to infer correct subsequences.
# 6.2 Case study: leadership of coordinated movement of baboons
Fig. 5 shows heatmaps of lead difference. Let $i d x _ { - } l _ { i } =$ starting index of subsequence from Leader ID $i$ and $i d x _ { - } f _ { j } =$ starting index of subsequence from Follower $\operatorname { I D } j$ . We define leading subsequence pair as a pair where $i d x \_ l _ { i } < i d x \_ f _ { j }$ , and following subsequence pair as a pair where $i d x \_ l _ { i } > i d x \_ f _ { j }$ . Given top-1000 most similar subsequences of Leader ID $i$ and Follower ID $j$ , each cell in the heatmap has a value of $L - F$ where $L =$ number of leading subsequence pairs, and $F =$ number of following subsequence pairs. A cell is green when $L > F$ and red when $L < F$ .
Figure 5: Heatmaps of lead difference during a) the first 100 seconds and b) the last 500 seconds. The y-axis is leader and $\mathbf { \boldsymbol { x } }$ -axis is follower. Given top-1000 most similar subsequences, how much leader leads follower $= \#$ of leading subsequence pairs - # of following subsequence pairs (defined in Subsection 6.2).
Figure 6: The most similar subsequences (orange) of time series of prices of NVIDIA stock prices vs. (left) common sector company (vishay) and (right) different sector company chain (Tyson).
The window size is $\omega _ { U } = \omega _ { W } = 6 0$ . In Fig. 5 a), ID3 has the most of green cells during the first 100 seconds, which is consistent with the ground truth that ID3 initiated the group movement.
In Fig. 5 b), ID1 has the most of green cells during the last 500 seconds. This is also consistent with the ground truth that ID1 led the group movement most of the time. Moreover, in the baboon dataset, our approach ran up to 20 times faster than the brute-force approach.
# 6.3 Case study: Stock prices of companies in similar sector vs those in unrelated sector
We analyzed this dataset using $\omega _ { U } = \omega _ { W } = 9 0$ or three months. Fig. 6 (left) shows the most similar subsequences (orange) of prices of NVIDIA stock vs. Vishay stock (company in related sector). The match subsequences established the same trend and pattern with minimum distance as 1.601, which is consistent with the fact that both companies operate in a similar sector and might be influenced by similar factors. In contrast, Fig. 6 (right) shows the most similar subsequences (orange) of prices of NVIDIA stock vs. Tyson stock (company in a different sector) with higher distance as 1.856. The matching in this case has different pattern, which shows that both companies are unrelated. This result demonstrates the utilization of using our method to find pattern of potential dependency in financial data. | Finding the most similar subsequences between two multidimensional time
series has many applications: e.g. capturing dependency in stock market or
discovering coordinated movement of baboons. Considering one pattern occurring
in one time series, we might be wondering whether the same pattern occurs in
another time series with some distortion that might have a different length.
Nevertheless, to the best of our knowledge, there is no efficient framework
that deals with this problem yet. In this work, we propose an algorithm that
provides the exact solution of finding the most similar multidimensional
subsequences between time series where there is a difference in length both
between time series and between subsequences. The algorithm is built based on
theoretical guarantee of correctness and efficiency. The result in simulation
datasets illustrated that our approach not just only provided correct solution,
but it also utilized running time only quarter of time compared against the
baseline approaches. In real-world datasets, it extracted the most similar
subsequences even faster (up to 20 times faster against baseline methods) and
provided insights regarding the situation in stock market and following
relations of multidimensional time series of baboon movement. Our approach can
be used for any time series. The code and datasets of this work are provided
for the public use. | [
"cs.LG",
"cs.AI",
"cs.DB",
"stat.ME"
] |
# I. INTRODUCTION
Accurate separation of moving and static objects is crucial for efficient path planning and safe navigation in dynamic traffic environments [1]. Moving Object Segmentation (MOS), particularly for pedestrians, cyclists, and vehicles, reduces system errors caused by dynamic objects and improves environmental perception accuracy [2].This technology is essential for reducing uncertainties in scene flow estimation [3], [4] and path planning [5], enabling autonomous systems to make precise and reliable decisions. MOS plays a key role in real-time obstacle detection and adaptive environmental perception, making it integral to autonomous driving technology.
For the MOS task, existing solutions can be categorized into projection-based [2], [6]–[8] and non-projection-based methods [9]–[11]. Projection-based methods lose geometric information when mapping results back to the 3D point cloud space, limiting their performance. To address this, [7] proposed a two-stage approach using 3D sparse convolution to fuse information from the projection map and point cloud, reducing back-projection loss and improving accuracy.However, this approach is computationally expensive, making it challenging to balance accuracy and inference speed. Likewise, non-projection-based methods [9]–[11] face the same issue. Among them, MambaMOS [11] achieves state-of-the-art (SoTA) performance in the MOS task, but its long processing time limits real-time applicability (see Fig. 1).
Knowledge distillation (KD) [12] is an effective model compression technique that addresses real-time performance issues. Previous distillation algorithms [13], [14] have been successfully applied to LiDAR semantic segmentation, achieving remarkable results. However, most of these methods focus on model compression [13] or improving feature extraction [15], with few dedicated to generalizable knowledge distillation methods for MOS tasks.
Based on the most intuitive observation that balancing accuracy and real-time performance is fundamental to solving the MOS problem, we propose KDMOS. The core idea of KDMOS is to compress the large model and balance inference speed and accuracy. We propose Weighted Decoupled Class Distillation (WDCD), a logit-based distillation method. Following the DKD approach [16], we first decouple traditional knowledge distillation into KL losses for target and non-target classes.To mitigate the severe class imbalance in the moving category for MOS tasks and enhance distillation performance, we further decouple moving and non-moving classes, apply different distillation strategies to compute their respective losses, and assign appropriate weights via labels.This allows the student model to effectively learn critical information among potential moving object categories, significantly reducing false positives and missed detections (see Fig. 6). Moreover, it can be broadly applied to other MOS tasks (see Fig. 5). Additionally, we introduce dynamic upsampling in the network, achieving an inference speed of $4 0 ~ \mathrm { H z }$ while maintaining a balance between accuracy and speed.
Extensive experiments demonstrate the superiority of our design. In summary, the main contributions of this paper are as follows:
• We propose a general distillation framework for the MOS task, effectively balancing real-time performance and accuracy (see Fig. 1). To the best of our knowledge, this is the first application of knowledge distillation to MOS.
The KDMOS network architecture is improved by introducing the Dysample offset, reducing model complexity and mitigating overfitting.
Our method achieves competitive results on the SemanticKITTI [6] and Apollo [17] datasets, demonstrating its superior performance and robustness.
# II. RELATED WORK
# A. Knowledge Distillation
Knowledge distillation (KD) was first introduced by Hinton et al. [12]. Its core idea is to bridge the performance gap between a complex teacher model and a lightweight student model by transferring the rich knowledge embedded in the teacher. Existing methods can be categorized into two types: distillation from logits [12], [16], [18], [19] and from intermediate features [13], [14]. Hou et al. [13] proposed Point-to-Voxel Knowledge Distillation (PVKD), the first application of knowledge distillation to LiDAR semantic segmentation.They introduced a supervoxel partitioning method and designed a difficulty-aware sampling strategy. Feng et al. [14] proposed a voxel-to-BEV projection knowledge distillation method, effectively mitigating information loss during projection.Borui et al. [16] decouple the classical KD formula, splitting the traditional KD loss into two components that can be more effectively and flexibly utilized through appropriate combinations. Although featurebased methods often achieve superior performance, they incur higher computational and storage costs and require more complex structures to align feature scales and network representations. Compared to the simple and effective logitbased approach, it has weaker generalization ability and is less applicable to various downstream tasks.
that separately processes range images and residual images, utilizing a motion-guided attention module for feature fusion. Cheng et al. [2] suggested that residual images offer greater potential for motion information and proposed a motionfocused model with a dual-branch structure to achieve the decoupling of spatiotemporal information. Unlike RV projection, BEV projection represents point cloud features from a top-down view, preserving object scale consistency and improving interpretability and processing efficiency. MotionBEV [8] projects to the polar BEV coordinate system and extracts motion features through height differences over temporal windows. CV-MOS [20] proposes a cross-view model that integrates RV and BEV perspectives to capture richer motion information. Non-projection-based methods directly operate on point clouds in 3D space. 4DMOS [9] uses sparse four-dimensional convolution to jointly extract spatiotemporal features from input point cloud sequences and integrates predictions through a binary Bayesian filter, achieving good segmentation results. Mambamos [11] considers temporal information as the dominant factor in determining motion, achieving a deeper level of coupling beyond simply connecting temporal and spatial information, thereby enhancing MOS performance. Therefore, previous methods have struggled to balance real-time performance and accuracy. To address this issue, this paper proposes KDMOS based on knowledge distillation, which distills a non-projection-based large model into a projection-based lightweight model through logits distillation.
# III. METHODOLOGY
In this section, we present a detailed description of KDMOS, with its overall framework illustrated in Fig. 2. We start with data preprocessing, followed by an explanation of the KDMOS network architecture and WDKD. Finally, we provide an in-depth analysis of the loss function components.
# A. Input Representation
Student Input Representation. We employ an extreme bird’s-eye view (BEV) for point cloud coordinate segmentation, a lightweight data representation obtained by projecting the 3D point cloud into 2D space. Following the setup from previous work [8], we project the LiDAR point cloud onto a BEV image. After obtaining the BEV images of the past N-1 consecutive frames, we align the past frames with the current frame’s viewpoint using pose transformation $T \in$ $\mathbb { R } ^ { 4 \times 4 }$ , resulting in the projected BEV images. Meanwhile, we maintain two adjacent time windows, $Q _ { 1 }$ and $Q _ { 2 }$ , with equal lengths, and obtain the residual image by computing the height difference between the corresponding grids in the two time windows.
# B. MOS Based on Deep Learning
Recent methods tend to apply popular deep learning models and directly capture spatiotemporal features from data. Chen et al [6] proposed the LMNet, which uses range residual images as input and extracts temporal and spatiotemporal information through existing segmentation networks. Sun et al. [7] introduced a dual-branch structure
$$
\begin{array} { r l } & { Z _ { ( u , v ) , i } = \big \{ z _ { j } \in p _ { j } \ \big | \ p _ { j } \in Q _ { ( u , v ) , i } , z _ { \operatorname* { m i n } } < z _ { j } < z _ { \operatorname* { m a x } } \big \} , } \\ & { \ I _ { ( u , v ) , i } = \operatorname* { m a x } \{ Z _ { ( u , v ) , i } \} - \operatorname* { m i n } \{ Z _ { ( u , v ) , i } \} . } \end{array}
$$
where $p _ { j }$ is a point within the temporal window $Q _ { ( u , v ) , i }$ , represented as $[ x _ { j } , y _ { j } , z _ { j } , 1 ] ^ { T }$ , and each pixel value $I _ { ( u , v ) , i }$ represents the height occupied by the $( u , v ) _ { t h }$ grid in $Q _ { i }$ . Following MotionBEV [8], we restrict the $z$ -axis range to $( z _ { \mathrm { m i n } } , z _ { \mathrm { m a x } } ) ~ = ~ ( - 4 , 2 )$ . Subsequently, we compute the residuals between the projected BEV images $I _ { 1 }$ and $I _ { 2 }$ :
Fig. 2. The KDMOS framework comprises three main components: the teacher model, the student model, and knowledge distillation. During training, the teacher model uses pre-trained weights and remains frozen, while the student model is trained from scratch, with its parameters continuously update through WDCD and the model’s own loss. MSSM, proposed by MambaMOS [11], achieves deep coupling of temporal and spatial features.
$$
\begin{array} { r } { D _ { ( x , y ) , i } ^ { 0 } , D _ { ( x , y ) , i - 1 } ^ { 1 } , . . . , D _ { ( x , y ) , i - N _ { 2 } + 1 } ^ { N _ { 2 } - 1 } = I _ { ( x , y ) , 1 } - I _ { ( x , y ) , 2 } , } \\ { D _ { ( x , y ) , i - N _ { 2 } } ^ { N _ { 2 } } , D _ { ( x , y ) , i - N _ { 2 } - 1 } ^ { N _ { 2 } + 1 } , . . . , D _ { ( x , y ) , i - N + 1 } ^ { N - 1 } = I _ { ( x , y ) , 2 } - I _ { ( x , y ) , 1 } } \end{array}
$$
where (ku,v),i represents the motion feature in the (u, v)th grid of the $k ^ { \mathrm { t h } }$ channel for the $i ^ { \mathrm { { t h } } }$ frame.
Teacher Input Representation.To align with the student’s input, following the setup of previous work [11], we need to align the past frames with the current frame’s viewpoint using the pose transformation matrix $T \in \mathbb { R } ^ { 4 \times 4 }$ and convert the homogeneous coordinates into Cartesian coordinates, resulting in a sequence of N continuous 4D point cloud sets. To distinguish each scan within the 4D point cloud, we add the corresponding time step of each scan as an additional dimension of the point, obtaining a spatio-temporal point representation:
$$
\underset { S ^ { \prime } = \{ S _ { 0 } , S _ { 1 0 } , . . . , S _ { t 0 } \} } { \boldsymbol { p } _ { i } ^ { \prime } } ,
$$
Next, We follow the setup of MambaMOS [11] to obtain sequences from unordered 4D point cloud sets.:
$$
\begin{array} { c } { { S _ { i } ^ { \prime } = \Psi ( S ^ { \prime } ) , } } \\ { { S ^ { \prime } = \Psi ^ { - 1 } ( S _ { i } ^ { \prime } ) } } \end{array}
$$
# B. Network Structure
1)We propose a novel knowledge distillation network architecture, as shown in Fig. 2. The KDMOS network comprises three main components. First, the teacher model, where we select MambaMOS [11] as the teacher, which deeply integrates temporal and spatial information, delivering strong performance but with a high computational cost. We select a BEV-based method [8] as the student model.
Fig. 3. Structure of the Dysample module. The input feature and original grid are denoted by $X$ and $g$ , respectively.
BEV projection provides a global top-down view, intuitively representing object distribution and relative positions in the scene while maintaining low computational complexity. The final component is the knowledge distillation model, the core of our framework. With WDCD (see Fig. 4), the student model learns category similarities and differences during training, enhancing performance without additional computational cost.
2)Feature Upsampling Module:To balance accuracy and inference speed, we optimize the upsampling module with the dynamic mechanism DySample [21]. As shown in Fig. 3, the feature map is transformed through a linear layer, scaled by an offset factor to compute pixel displacement coordinates, and refined via pixel shuffle for upsampling. Displacement coordinates are added to the base grid for precise sampling. This approach replaces fixed convolution kernels with point-based sampling, reducing parameter count and model complexity, thereby lowering the risk of overfitting.
# C. Weighted Decoupled Class Distillation
WDCD is a key distillation module in our framework, as shown in Fig. 4. Most distillation methods [13], [14] rely on intermediate layer features from both the teacher and student networks.However, if the teacher and student architectures differ significantly, aligning feature scales and network capabilities requires more complex structures, making distillation more challenging and less effective. In contrast, logits-based distillation [12], [16] relies solely on the teacher’s final output, bypassing its internal structure. This approach is simpler to implement, more computationally efficient, and applicable to other MOS methods.
The logits contain relational information between categories, and to further leverage this information, unlike previous MOS methods [2], [6]–[11], we divide the final predictions of points into four categories: unlabeled, static, movable, and moving.Specifically, for a training sample from the $t$ -th class, we first compute the probabilities for the target and non-target classes for each point:
$$
p _ { t } = \frac { \exp ( z _ { t } ) } { \sum _ { j = 1 } ^ { 4 } \exp ( z _ { j } ) } , \quad p _ { \backslash t } = \frac { \sum _ { k = 1 , k \neq t } ^ { 4 } \exp ( z _ { k } ) } { \sum _ { j = 1 } ^ { 4 } \exp ( z _ { j } ) }
$$
where $z _ { j }$ represents the logit of the $j$ -th class, and $p _ { t }$ and $p _ { \backslash t }$ denote the probabilities of target and non-target classes, respectively.We know that the distribution probabilities for each class can be represented as $p = [ p _ { 1 } , p _ { 2 } , p _ { 3 } , p _ { 4 } ]$ . Meanwhile, we define $\hat { p } = [ \hat { p } _ { 1 } , \hat { p } _ { 2 } , \hat { p } _ { 3 } ]$ to independently represent the probabilities for the non-target classes, excluding the target class (i.e., without incorporating its influence).Each element is computed as follows:
$$
p _ { i } = \frac { \exp ( z _ { i } ) } { \sum _ { j = 1 } ^ { 4 } \exp ( z _ { j } ) } , \quad \hat { p } _ { i } = \frac { \exp ( z _ { i } ) } { \sum _ { j = 1 , j \neq m } ^ { 4 } \exp ( z _ { j } ) } .
$$
Specifically, we use the binary probability $b$ and the nontarget probability $\hat { p }$ to represent knowledge distillation (KD), where $T$ and $S$ denote the teacher and student models, respectively.
$$
\mathrm { K D } = \mathrm { K L } ( p ^ { T } \| p ^ { S } ) = p _ { t } ^ { T } \log \left( \frac { p _ { t } ^ { T } } { p _ { t } ^ { S } } \right) + \sum _ { i = 1 , i \neq t } ^ { C } p _ { i } ^ { T } \log \left( \frac { p _ { i } ^ { T } } { p _ { i } ^ { S } } \right) .
$$
From Equations 5 and 6, we obtain $\hat { p } _ { i } ~ = ~ p _ { i } / p _ { \backslash t }$ . Thus, Equation 7 can be rewritten as:
$$
\begin{array} { l } { { \displaystyle \mathrm { K D } = p _ { t } ^ { T } \log \left( \frac { p _ { t } ^ { T } } { p _ { t } ^ { S } } \right) + p _ { t } ^ { T } \log \left( \frac { p _ { \backslash t } ^ { T } } { p _ { \backslash t } ^ { S } } \right) } } \\ { { \displaystyle \qquad + p _ { \backslash t } ^ { T } \sum _ { i = 1 , i \neq t } ^ { C } \hat { p } _ { i } ^ { T } \log \left( \frac { \hat { p } _ { i } ^ { T } } { \hat { p } _ { i } ^ { S } } \right) } } \\ { { \displaystyle \qquad = \mathrm { K L } ( b ^ { T } \| b ^ { S } ) + ( 1 - p _ { t } ^ { T } ) \mathrm { K L } ( \hat { p } ^ { T } \| \hat { p } ^ { S } ) } } \end{array}
$$
where $\mathrm { K L } ( { \boldsymbol { b } } ^ { T } | | { \boldsymbol { b } } ^ { S } )$ represents the similarity between the binary probabilities of the target class for the teacher and student models, while $\mathrm { K L } ( \hat { p } ^ { T } | | \hat { p } ^ { S } )$ denotes the similarity between the probabilities of non-target classes for the teacher and student models.In the MOS task, the severe imbalance between moving and non-moving classes results in the number of non-moving points being approximately 400 times that of moving points. During training, the high accuracy of non-moving classes significantly reduces the effectiveness of ${ \mathrm { K L } } ( b ^ { T } | b ^ { S } )$ , which may even become detrimental (as shown in Table V). Meanwhile, $\mathrm { K L } ( \hat { p } ^ { T } | | \hat { p } ^ { S } )$ is also influenced by $p _ { t } ^ { T }$ , severely impairing the effectiveness of knowledge distillation.
Fig. 4. The structure of WDCD, where $p _ { t } ^ { T }$ and $p _ { t } ^ { S }$ represent the teacher’s and student’s probabilities for the target class, respectively.
Therefore, in the MOS task, we further decouple moving and non-moving classes. For moving classes, both losses are computed as usual, while for easily trainable non-moving classes, only $\mathrm { K L } ( \hat { p } ^ { T } | \hat { p } ^ { S } )$ is applied. We define this approach as Decoupled Class Distillation (DCD).
$$
\begin{array} { r } { \mathrm { D C D } = \left\{ \begin{array} { l l } { \mathrm { K L } ( \pmb { b } ^ { T } \| \pmb { b } ^ { S } ) + \beta \mathrm { K L } ( \hat { \pmb { p } } ^ { T } \| \hat { \pmb { p } } ^ { S } ) , } & { \mathrm { c l a s s = m o v i n g } } \\ { \beta \mathrm { K L } ( \hat { \pmb { p } } ^ { T } \| \hat { \pmb { p } } ^ { S } ) , } & { \mathrm { c l a s s \neq m o v i n g } } \end{array} \right. } \end{array}
$$
where $\beta$ is the balancing coefficient. Based on label-assigned weighting, our final WDCD is formulated as:
$$
\begin{array} { r } { W ^ { i } = \mathrm { C o n t e n t } [ \mathrm { l a b e l } ] , } \\ { \mathrm { W D C D } = \mathrm { D C D } / W ^ { i } } \end{array}
$$
Where Content represents the ratio of points from different categories in the $i ^ { t h }$ frame, and label represents the ground truth labels of points in the $i ^ { t h }$ frame.
# D. Loss Function
During the training process, the total loss function of this algorithm includes both the segmentation loss and the knowledge distillation loss:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { T o t a l } } = \mathcal { L } _ { \mathrm { S t u d e n t } } + \gamma \mathcal { L } _ { \mathrm { L D C D } } , } \end{array}
$$
where $L _ { \mathrm { S t u d e n t } }$ represents the segmentation loss generated by the student network, $L _ { \mathrm { L D C D } }$ is the losses of Knowledge Distillation and $\gamma$ is the balancing coefficient.The student network loss consists of the cross-entropy loss $L _ { \mathrm { w c e } }$ and the Lova´sz-Softmax loss $L _ { \mathrm { l s } }$ . The loss function is defined as follows:
$$
{ \mathcal { L } } _ { \mathrm { S t u d e n t } } = { \mathcal { L } } _ { \mathrm { w c e } } + { \mathcal { L } } _ { \mathrm { l s } } ,
$$
TABLE I COMPARISONS RESULT ON SEMANTICKITTI-MOS DATASET.
TABLE II CROSS VAL IOU PERFORMANCE OF DIFFERENT METHODS ON APOLLO DATASET.
TABLE III ABLATION EXPERIMENTS WITH PROPOSED MODULES.
Fig. 5. The performance of the proposed WDCD module on other MOS methods $( \mathrm { I o U \% } )$ ).
# IV. EXPERIMENTS
In this section, we conduct extensive experiments to comprehensively evaluate KDMOS. Section IV-A introduces the datasets and evaluation metrics. In Section IV-B, we present quantitative and qualitative comparisons between our method and other state-of-the-art approaches on SemanticKITTIMOS [6] and Apollo [17]. In Section IV-C, we conduct ablation experiments to evaluate the effectiveness of our knowledge distillation module. In Section IV-D, we perform a qualitative analysis to intuitively compare our algorithm with other SoTA approaches. Finally, in Section IV-E, we present the runtime performance of our method.
# A. Experiment Setups
The SemanticKITTI-MOS dataset [25] is the largest and most authoritative dataset for the MOS task, derived from the original SemanticKITTI dataset. We follow the setup from previous work [8] for training, validation, and testing. Additionally, to evaluate the generalization ability of our method across different environments, we tested it on another dataset, Apollo, following the setup by Chen et al. [6].
Our code is implemented in PyTorch. Experiments were conducted on a single NVIDIA RTX 4090 GPU with a batch size of 8. We trained KD-MOS for 150 epochs using Stochastic Gradient Descent (SGD) to minimize $L _ { \mathrm { w c e } }$ , $L _ { \mathrm { l s } }$ and $L _ { \mathrm { W D C D } }$ , with a momentum of 0.9 and a weight decay of 0.0001. The initial learning rate was set to 0.005 and decayed by a factor of 0.99 after each epoch. The offset factor for DySample $a$ was set to 0.25, and the weight of $L _ { \mathrm { W D C D } }$ was set to 0.25. The teacher network used MambaMOS pretrained weights with frozen parameters. The student network was trained from scratch without pre-trained weights, and training continued until the validation loss converged. We used the Jaccard Index, also known as the Intersection-overUnion (IoU) metric [26], to evaluate MOS performance for moving objects.
# B. Comparison with State-of-the-Art Methods
First, we report the validation and test results on the SemanticKITTI-MOS dataset [6] in Tab. I. Competitive results were achieved on the validation set. For the test set, we submitted the moving object segmentation results to the benchmark server, which also demonstrated excellent performance, further confirming the model’s robustness.As shown in Tab. I, our mIoU score is lower than that of MambaMOS [11] in both the validation and test benchmarks. However, as shown in Tab. VI, while non-projection-based methods achieve higher accuracy, they often struggle to balance accuracy and inference speed. In contrast, our method effectively combines the advantages of both projectionbased and non-projection-based approaches, demonstrating superior overall performance.Specifically, compared to the baseline MotionBEV, the performance improved by $3 . 9 \%$ on the validation set and $2 . 9 \%$ on the test set.
To evaluate the generalization ability of our method in different environments, we also report the validation results on the Apollo dataset [17] in the Tab. II. Following the standard settings of previous methods [10], [27],the data in the Tab. II does not use any domain adaptation techniques or retraining. Compared to other projection-based methods, our KDMOS performed the best. Although the results on the Apollo dataset are not as good as those of 4DMOS [9], as shown in the table, Our method demonstrates significantly faster inference speed and outperforms 4DMOS on largescale datasets such as SemanticKITTI-MOS.
Fig. 6. Qualitative results of LiDAR-MOS on the SemanticKITTI-MOS validation set using different methods. Green circles indicate false negatives while blue circles indicate false positives.
TABLE IV COMPARISONS OF DIFFERENT KD METHODS.
TABLE VABLATION OF DKD MODULES ON NON-MOVING CLASSES USING THESEMANTICKITTI-MOS VALIDATION SET
# C. Ablation Experiment
In this section, we conduct ablation experiments on the proposed KDMOS and its various components. All experiments are performed on the SemanticKITTI validation set (sequence 08). As shown in Tab. III. It is noteworthy that our proposed WDCD shows significant improvement $( + 1 . 3 \%$ IoU) compared to MotionBEV without increasing the number of parameters. To further demonstrate the indispensability of each component, we conducted ablation experiments with different component combinations in Setting ii. Each proposed component consistently improves baseline performance to varying degrees.The last row shows that our complete KDMOS achieves the best performance, with a significant accuracy improvement $( + 2 . 9 \%$ IoU) and a $7 . 6 9 \%$ reduction in parameter count compared to the baseline.
To evaluate the generalization ability of the proposed
TABLE VI COMPUTATION RESOURCE COMPARISON.
WDCD, we applied it to other MOS methods [2], [6]–[8], using them as baseline models and training from scratch. The experimental results are shown in Fig. 5. The proposed module also enhances the performance of other MOS models. Notably, due to the nature of logits-based knowledge distillation, it introduces no additional parameters while improving the model’s performance without any loss. Furthermore, to explore the advantages brought by our method,we compared WDCD with other KD algorithms [12], [16], [28], [29], As shown in Tab. IV. Our WDCD demonstrates superior performance over other methods in MOS task.
To further validate our argument in Section III-C, we conduct an ablation study on each distillation module for nonmoving classes using the SemanticKITTI-MOS validation set, while applying WDCD to the moving class as usual. The results are shown in Table V, where TCKD and NCKD represent the distillation of the teacher and student models on the target and non-target classes, respectively.Since the number of non-moving classes is significantly higher than that of moving classes, they achieve higher accuracy during training. As a result, TCKD has a limited effect and may even be detrimental ( $- 0 . 3 \%$ IoU). In contrast, applying NCKD alone proves more effective than combining both.
# D. Qualitative Analysis
To provide a more intuitive comparison between our algorithm and other SoTA algorithms, we conducted a visual qualitative analysis on the SemanticKITTI-MOS dataset. As shown in Fig. 6, both MF-MOS and MotionBEV exhibit misclassification of movable objects and missed detection of moving objects. Compared to SoTA algorithms, the knowledge distillation-based model effectively mitigates the impact of moving targets and accurately captures them.
# E. Evaluation of Resource Consumption
We evaluate the inference time (FPS, ms), memory usage (size), and the number of learnable parameters (params) of our method and SoTA methods on Sequence 08, using 112 Intel(R) Xeon(R) Gold 6330 CPUs $\textcircled { a } 2 . 0 0 \mathrm { G H z }$ and a single NVIDIA RTX 4090 GPU. As shown in Tab. VI. We achieved real-time processing speed and outperformed MotionBEV in terms of performance, achieving a balance between accuracy and inference speed. | Motion Object Segmentation (MOS) is crucial for autonomous driving, as it
enhances localization, path planning, map construction, scene flow estimation,
and future state prediction. While existing methods achieve strong performance,
balancing accuracy and real-time inference remains a challenge. To address
this, we propose a logits-based knowledge distillation framework for MOS,
aiming to improve accuracy while maintaining real-time efficiency.
Specifically, we adopt a Bird's Eye View (BEV) projection-based model as the
student and a non-projection model as the teacher. To handle the severe
imbalance between moving and non-moving classes, we decouple them and apply
tailored distillation strategies, allowing the teacher model to better learn
key motion-related features. This approach significantly reduces false
positives and false negatives. Additionally, we introduce dynamic upsampling,
optimize the network architecture, and achieve a 7.69% reduction in parameter
count, mitigating overfitting. Our method achieves a notable IoU of 78.8% on
the hidden test set of the SemanticKITTI-MOS dataset and delivers competitive
results on the Apollo dataset. The KDMOS implementation is available at
https://github.com/SCNU-RISLAB/KDMOS. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
# 1 Introduction
In this work, we develop a task-oriented SDS that can regulate emotional TTS based on contextual cues (emotional SDS) to enable more empathetic news conversations. Task-oriented SDSs must balance task- and social-goals to create engaging interactions (Clavel et al., 2022), with emotional speech regulation being crucial among social-goals. For instance, synthesizing "sad" speech for tragic earthquake news can foster user empathy and engagement. Appropriately managing emotional tone can thus enhance both user perception and overall experience (Kurata et al., 2024; Concannon and Tomalin, 2024). To support such needs, the field of affective computing has developed emotional TTS techniques, which generate emotionally expressive speech by adjusting acoustic features like cadence, intensity, and pitch. Recent emotional TTS systems have achieved high-quality oral emotional expressions (Cho and Lee, 2021; Wang et al., 2023; Bott et al., 2024).
Figure 1: System architecture. Proposed system uses emoTTS and sentiment analyzer.
However, despite these advances, task-oriented emotional SDSs remain underexplored. This is primarily because socio-conversational research has been compartmentalized (Clavel et al., 2022), with SDS and emotional TTS developing separately and lacking an integrated framework. Moreover, evaluating social-goals like emotional speech regulation is difficult (Kurata et al., 2024), as these goals are multidimensional and lack clear definitions, leading to few established evaluation metrics. Thus, the gap between emotional TTS capabilities and their effective integration into SDS highlights an important area for further research.
We develop a task-oriented emotional SDS and propose its evaluation method. Specifically, we focus on news summarization and Q&A as a target task due to its extensive prior studies. For emotional speech regulation, we adopt a cascade SDS architecture. We employ a PromptTTS (Guo et al., 2022) fine-tuned on the ESD dataset (Zhou et al., 2022) as our emotional TTS model. For evaluation, we use an empathy scale originally (Concannon and Tomalin, 2024) and assess the SDS’s ability to regulate emotional speech. Additionally, we manually evaluate both the system’s emotional speech regulation and task achievement (Walker et al., 1997), comparing it with SDSs that employ non-emotional TTS.
Through this study, we contribute to the studies on socio-conversational system by: (i) providing a method for developing emotional SDS; and (ii) proposesing an evaluation method of emotional SDS.
# 2 System Design and Method
As depicted in Figure 1, we develop an emotional SDS for news conversations using a cascade architecture, building on a strong baseline (Arora et al., 2025) by adding emotional awareness via sentiment-guided synthesis. Below, we describe the core components of both systems.
The baseline system includes three modules: ASR, LLM, and TTS. The ASR transcribes user speech to text, which is encoded and compared against a News Database for relevant article retrieval. The LLM generates a response based on both the transcript and retrieved news snippets, and the TTS outputs spoken responses in a default tone. We utilize Retrieval Augmented Generation (RAG) in our system. The core ASR and RAGLLM module are shared: the ASR transcript is passed to a RAG language model that selectively retrieves news to ground its replies, using dynamic in-context prompting for adaptability.
Our proposed system enhances the baseline with a Sentiment Analyzer that infers the emotional tone (neutral, happy, sad, angry, or surprised) from the LLM’s text response. The emotion tag is fed to PromptTTS, an emotional TTS module that conditions speech synthesis on both text and emotion, producing expressive and empathetic responses. Compared to the emotionally neutral baseline, our system delivers more human-like, engaging interactions through sentiment understanding and emotional prosody.
# 3 Experiments
To evaluate the extent to which the proposed method can control proper speech emotion, we
compared it with a baseline system using human subjective judgments.
# 3.1 Datasets
For emotional TTS fine-tuning, we used the English portion of the ESD dataset (Zhou et al., 2022), splitting 17,500 utterances into training, validation, and evaluation subsets across five emotions. For sentiment analyzer fine-tuning, we used GoodNewsEveryone (Bostan et al., 2020) and GoEmotions (Demszky et al., 2020), mapping their emotion tags to five target categories following Koufakou et al. (2024). As the news database for retrieval, we used Free News2, filtering for English articles and embedding news titles with Chroma3 and Sentence Transformers (Reimers and Gurevych, 2019).
# 3.2 System Setups
Proposed System We used Whisper Large for ASR, LLaMA 3.2 1B for the language model, and a sentence transformer for retrieving the top 1 relevant news. For emotional TTS, we fine-tuned PromptTTS (pre-trained on LJSpeech). Our preliminary analysis showed that its quality was comparable to FastSpeech (Ren et al., 2019) and VITS (Kim et al., 2021) in terms of UTMOS, DNSMOS, PLCMOS, and WER, and qualitative analysis confirmed clear emotional variation. For the sentiment analyzer, we fine-tuned a distilled RoBERTa model (batch size 8, learning rate 0.00001, 4 epochs) after finding that prompt-based LLM approaches tended to over-predict sadness and surprise, achieving better performance than Koufakou et al. (2024).
Baseline System The baseline system shared the same modules as the proposed system, except the sentiment analyzer and a VITS model pre-trained on LJSpeech instead of emotional TTS.
# 3.3 Metrics
We create a seven-item questionnaire in Table 1, using a 5-point Likert scale ( $1 =$ strongly disagree, $5 =$ strongly agree). The first item assesses RAG performance on relevance and coherence, while the second and third address task achievement (Walker et al., 1997): system helpfulness in understanding retrieved news and consistency of responses. The fourth item measures speech emotion appropriateness, adapted from empathy scales for dialogue systems (Concannon and Tomalin, 2024). The last three items assess user engagement, based on Kurata et al.’s questionnaire (Kurata et al., 2024). We also recorded the number of SDS turns as an additional engagement indicator (Aoyama et al., 2024).
Table 1: Emotional SDS Evaluation Questionnaire.
Figure 2: Comparison of Evaluation Metrics by System Type.
Table 2: Statistical Comparison Between Baseline and Proposed Systems
# 3.4 Evaluation Procedure
We collect 20 conversation samples by conducting 10 dialogues with each system. To avoid bias, emotion tags predicted by the sentiment analyzer were hidden from the SDS interface. We test differences in mean scores using Mann-Whitney U tests $( \alpha = . 0 5 )$ due to the small sample size, and calculate Cohen’s d for effect sizes (Cohen, 2013). We assess the internal consistency of the three engagement items using Cronbach’s alpha, which was .860, indicating substantial reliability; thus, we averaged them into a single engagement score.
# 3.5 Results and Discussion
Figure 2 shows the boxplots of human-judgment scores. The proposed system significantly outperformed the baseline in speech emotion appropriateness with a large effect size $\dot { \ b { d } } = 3 . 0 7 0$ ; 4.100 vs. 1.700), confirming its ability to control emotions according to context. Although engagement scores and the number of turns showed no significant differences, both had large effect sizes ( $\mathrm { \ddot { d } = }$ 0.824, 0.831), suggesting that emotional control may promote more engaging conversations (Concannon and Tomalin, 2024). However, the mean engagement score remained moderate (around 3), possibly due to abrupt, discrete emotional shifts without considering prior conversational context. Finally, no significant differences were observed in RAG performance or task achievement, and both systems scored around 3, indicating room for improvement in task-goal fulfillment. | We develop a task-oriented spoken dialogue system (SDS) that regulates
emotional speech based on contextual cues to enable more empathetic news
conversations. Despite advancements in emotional text-to-speech (TTS)
techniques, task-oriented emotional SDSs remain underexplored due to the
compartmentalized nature of SDS and emotional TTS research, as well as the lack
of standardized evaluation metrics for social goals. We address these
challenges by developing an emotional SDS for news conversations that utilizes
a large language model (LLM)-based sentiment analyzer to identify appropriate
emotions and PromptTTS to synthesize context-appropriate emotional speech. We
also propose subjective evaluation scale for emotional SDSs and judge the
emotion regulation performance of the proposed and baseline systems.
Experiments showed that our emotional SDS outperformed a baseline system in
terms of the emotion regulation and engagement. These results suggest the
critical role of speech emotion for more engaging conversations. All our source
code is open-sourced at
https://github.com/dhatchi711/espnet-emotional-news/tree/emo-sds/egs2/emo_news_sds/sds1 | [
"cs.CL"
] |
# 1 Introduction
Diffusion-based generative models [44, 17, 47] have shown unprecedented capability in modeling high-dimensional distribution and have become the dominant choice in various domains. The attractive potential has incentivized advances in multiple dimensions, such as prediction targets [26, 42, 33], diffusion process design [24, 34], and training schedule design [38, 27, 10].
The success is largely benefited from the more stable training process. Nevertheless, the diffusion loss only reflects the relative data-fitting quality for monitoring training process or comparing models under the same setting, while remains obscure for measuring the absolute fit to the training data. It is due to that the optimal loss of diffusion model, i.e., the lowest possible loss value that can be attained by any model, is actually not zero but unknown beforehand. This introduces a series of inconveniences. After the training converges, one still does not know whether the model is already close to oracle, or the remaining loss can be further reduced by tuning the model. Practitioners have to rely on generating samples to evaluate diffusion models, which requires significant computational cost, and sampler configurations introduce distracting factors. The unknown optimal loss also makes it obscured to analyze and compare learning quality at different diffusion steps, impeding a principled design of training schedule. Moreover, as the actual loss value is not fully determined by model capacity but also the unknown optimal loss as the base value, it poses a question on using the actual loss value alone for monitoring the scaling law of diffusion models.
In this work, we highlight the importance of estimating the optimal loss value, and develop effective estimation methods applicable to large datasets. Using this tool, we unlock new observations of data-fitting quality of diffusion models under various formulation variants, and demonstrate how the optimal loss estimate leads to more principled analysis and performant designs. Specifically,
• We derive the analytical expression of the optimal loss at each diffusion step, from which we reveal its inherent positivity and analyze its asymptotic behaviors.
• We then develop estimators for the optimal loss based on the expression. For large datasets, we design a scalable estimator based on dataset sub-sampling. We make a delicate design to properly balance the variance and bias.
• Using the estimator, we reveal the patterns of the optimal loss across diffusion steps on diverse datasets, and by comparison with the losses of mainstream diffusion models under a unified formulation, we find the characteristics of different diffusion formulation variants, and identify the diffusion-step region where the model still underfits compared to the optimal loss.
• From the analysis, we designed a principled training schedule for diffusion models, based on the gap between the actual loss and the optimal loss. We find that our training schedule improves generation performance in FID by $2 \% - 1 4 \%$ (for EDM [24] / FM [33]) on CIFAR-10, $7 \% - 2 5 \%$ (for EDM / FM) on ImageNet-64, and $9 \%$ (for LightningDiT [52]) on ImageNet-256.
• We challenge the conventional formulation to study neural scaling law for diffusion models. We propose using the loss gap as the measure for data-fitting quality. Using state-of-the-art diffusion models [25] in various sizes from 120M to 1.5B on both ImageNet-64 and ImageNet-512, we find that our modification leads to better satisfaction of the power law.
We hope this work could provide a profound understanding on diffusion model training, and ignite more principled analyses and improvements for diffusion models.
# 1.1 Related Work
Optimal loss and solution of diffusion model. A related work by Bao et al. [4, 3] derived the optimal ELBO loss under discrete Gaussian reverse process, and used it to determine the optimal reverse Gaussian (co)variances and optimize the discrete diffusion steps. Gu et al. [14] further studied the memorization behavior of diffusion models. In contrast, we consider general losses and develop effective training-free estimators for the optimal loss value, and emphasize its principled role with important real examples in monitoring and diagnosing model training, designing training schedule, and studying scaling law. There are also some other works that made efforts to estimate the optimal solution [50] using importance sampling. Although more scalable methods are proposed using fast KNN search [39], directly applying these estimators for the optimal loss is not practically viable, since the optimal loss involves estimating two nested expectations. Instead, we develop a more scalable estimator for the optimal loss with proper balance between bias and variance.
Training design of diffusion model. Due to the stochastic nature, intensive research efforts are paid to investigate diffusion model training in multiple directions such as noise schedules and loss weight. Karras et al. [24] presented a design space that clearly separates design choices, enabling targeted explorations on training configurations. Kingma and Gao [27] analyzed different diffusion objectives in a unified way and connect them via ELBO. Esser et al. [10] conduct large-scale experiments to compare different training configurations and motivate scalable design choices for billion-scale models. Most works require large-scale compute for trial and error, due to the lack of a principled guideline for training schedule design based on the absolute data-fitting quality.
Scaling law study for diffusion model. Model scaling behaviors are of great interest in deep learning literature. In particular, the remarkable success of Large Language Models has been largely credited to the establishment of scaling laws [22, 15, 18], which help to predict the performance of models as they scale in parameters and data. There also exist works that empirically investigate the scaling behavior of diffusion models [40, 31, 37, 10], and make attempts to explicitly formulate scaling laws for diffusion transformers [32]. However, training loss values are typically used as the metric in these works, which are not corrected by the optimal loss to reflect the true optimization gap, leading to biased analysis for scaling behaviors of diffusion models.
# 2 Formulation of Diffusion Model
Diffusion models perform generative modeling by leveraging a step-by-step transformation from an arbitrary data distribution $p _ { \mathrm { d a t a } }$ to a Gaussian distribution. Sampling and density evaluation for the data distribution can be done by reversing this transformation process step by step from the Gaussian. In general, the transformation of distribution is constructed by:
$$
\mathbf { x } _ { t } = \alpha _ { t } \mathbf { x } _ { 0 } + \sigma _ { t } \mathbf { \epsilon } , \quad t \in [ \dot { 0 , } T ] ,
$$
where $\mathbf { x } _ { \mathrm { 0 } } \sim p _ { \mathrm { d a t a } }$ is taken as a data sample, $\epsilon \sim p ( \epsilon ) : = \mathcal { N } ( \mathbf { 0 } , \mathbf { I } )$ is a Gaussian noise sample, and $\mathbf { x } _ { t }$ is the constructed random variable that defines the intermediate distribution $p _ { t }$ . The coefficients $\alpha _ { t }$ and $\sigma _ { t }$ satisfy $\alpha _ { 0 } = 1 , \sigma _ { 0 } = 0 .$ , and $\alpha _ { T } \ll \sigma _ { T }$ , so that $p _ { \mathrm { 0 } } = p _ { \mathrm { d a t a } }$ and $p _ { T } = \bar { \mathcal { N } } ( \mathbf { 0 } , \sigma _ { T } ^ { 2 } \mathbf { I } )$ yield the desired distributions. Eq. (1) gives
$$
p ( \mathbf { x } _ { t } \mid \mathbf { x } _ { 0 } ) = { \mathcal { N } } ( \mathbf { x } _ { t } \mid \alpha _ { t } \mathbf { x } _ { 0 } , \sigma _ { t } ^ { 2 } \mathbf { I } ) ,
$$
which corresponds to a diffusion process expressed in the stochastic differential equation $\mathrm { d } { \mathbf { x } } _ { t } =$ $a _ { t } \mathbf { x } _ { t } \mathrm { d } t + g _ { t } \mathrm { d } \mathbf { w } _ { t }$ starting from $\mathbf { x } _ { \mathrm { 0 } } \sim p _ { \mathrm { 0 } }$ , where $a _ { t } : = ( \log \alpha _ { t } ) ^ { \prime }$ , $g _ { t } : = \sigma _ { t } \sqrt { ( \log \sigma _ { t } ^ { 2 } / \alpha _ { t } ^ { 2 } ) ^ { \prime } }$ , and $\mathbf { w } _ { t }$ denotes the Wiener process. The blessing of the diffusion-process formulation is that the reverse process can be given explicitly [2]:
$$
\mathrm { d } \mathbf { x } _ { s } = - a _ { T - s } \mathbf { x } _ { s } \mathrm { d } s + g _ { T - s } ^ { 2 } \nabla \mathrm { l o g } p _ { T - s } ( \mathbf { x } _ { s } ) \mathrm { d } s + g _ { T - s } \mathrm { d } \mathbf { w } _ { s }
$$
from $\mathbf { x } _ { s = 0 } \sim p _ { T }$ , where $s : = T - t$ denotes the reverse time. Alternatively, the deterministic process $\begin{array} { r } { \mathrm { d } \mathbf { x } _ { s } = - a _ { T - s } \mathbf { x } _ { s } \mathrm { d } s + \frac { 1 } { 2 } g _ { T - s } ^ { 2 } \nabla \log p _ { T - s } ( \mathbf { x } _ { s } ) \mathrm { d } s , } \end{array}$ also recovers $p _ { \mathrm { d a t a } }$ at $s = T$ [47]. The only obstacle to simulating the reverse process for generation is the unknown term $\nabla \log p _ { t } ( \mathbf { x } _ { t } )$ called the score function. Noting that $p _ { t }$ is produced by perturbing data samples with Gaussian noise, diffusion models employ a neural network model ${ \bf s } _ { \boldsymbol { \Theta } } ( { \bf x } _ { t } , t )$ to learn the score function using the denoising score matching loss [48, 47]:
$$
J _ { t } ^ { ( \mathbf { s } ) } ( \theta ) : = \mathbb { E } _ { p _ { 0 } ( \mathbf { x } _ { 0 } ) } \mathbb { E } _ { p ( \mathbf { x } _ { t } | \mathbf { x } _ { 0 } ) } { \big \| } \mathbf { s } _ { \oplus } ( \mathbf { x } _ { t } , t ) - \nabla _ { \mathbf { x } _ { t } } \log p ( \mathbf { x } _ { t } \mid \mathbf { x } _ { 0 } ) { \big \| } ^ { 2 }
$$
$$
\begin{array} { r } { \stackrel { \mathrm { E q . } ( 1 ) } { = } \mathbb { E } _ { p _ { 0 } ( \mathbf { x } _ { 0 } ) } \mathbb { E } _ { p ( \mathbf { \epsilon } ) } \| \mathbf { s } _ { \Theta } \big ( \alpha _ { t } \mathbf { x } _ { 0 } + \sigma _ { t } \mathbf { \epsilon } , t \big ) + \mathbf { \epsilon } \big \langle \sigma _ { t } \big \| ^ { 2 } . } \end{array}
$$
To cover the whole diffusion process, loss weight $w _ { t } ^ { ( \mathbf { s } ) }$ and noise schedule $p ( t )$ are introduced to optimize over all diffusion steps using $J ( \pmb \theta ) : = \mathbb { E } _ { p ( t ) } w _ { t } ^ { ( \mathbf { s } ) } J _ { t } ^ { ( \mathbf { s } ) } ( \pmb \theta )$ .
Alternative prediction targets. Besides the above score prediction target, diffusion models also adopt other prediction targets. The formulation of Eq. (2) motivates the noise prediction target [17] $\boldsymbol { \epsilon } _ { \boldsymbol { \Theta } } ( \mathbf { x } _ { t } , t ) : = - \sigma _ { t } \mathbf { s } _ { \boldsymbol { \Theta } } ( \mathbf { x } _ { t } , t )$ , which turns the total loss into $J ( \pmb \theta ) = \mathbb { E } _ { p ( t ) } w _ { t } ^ { ( \epsilon ) } J _ { t } ^ { ( \overline { { \epsilon } } ) } ( \pmb \theta )$ , where:
This formulation poses a friendly, bounded-scale learning target, and avoids the artifact at $t = 0$ of the denoising score matching loss. If formally solving $\mathbf { x } _ { \mathrm { 0 } }$ from Eq. (1) and let ${ \bf x } _ { 0 } \mathbf { \rho } _ { 0 } ( \mathbf { x } _ { t } , t ) : = { \bf \rho }$ $\begin{array} { r } { \frac { { \bf x } _ { t } - \sigma _ { t } \epsilon _ { \theta } ( { \bf x } _ { t } , t ) } { \alpha _ { t } } = \frac { { \bf x } _ { t } + \sigma _ { t } ^ { 2 } { \bf s } _ { \theta } ( { \bf x } _ { t } , t ) } { \alpha _ { t } } } \end{array}$ = xt+σt2 sθ(xt,t) , then we get the loss:
$$
J _ { t } ^ { ( \mathbf { x } _ { 0 } ) } ( \mathbf { \theta } ) : = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } \mathbb { E } _ { p ( \mathbf { \epsilon } ) } { \left\| \mathbf { x } _ { 0 \Theta } \left( \alpha _ { t } \mathbf { x } _ { 0 } + \sigma _ { t } \mathbf { \epsilon } , t \right) - \mathbf { x } _ { 0 } \right\| } ^ { 2 } = \frac { \sigma _ { t } ^ { 4 } } { \alpha _ { t } ^ { 2 } } J _ { t } ^ { ( \mathbf { s } ) } ( \mathbf { \theta } ) , \quad w _ { t } ^ { ( \mathbf { x } _ { 0 } ) } = \frac { \alpha _ { t } ^ { 2 } } { \sigma _ { t } ^ { 4 } } w _ { t } ^ { ( \mathbf { s } ) } .
$$
It holds the semantics of clean-data prediction [26, 24], and can be viewed as denoising autoencoders [49, 1] with multiple noise scales. From the equivalent deterministic process, one can also derive the vector-field prediction target ${ \mathbf { v } } _ { \Theta } ( { \mathbf { x } } _ { t } , t ) : = a _ { t } { \dot { \mathbf { x } } } _ { t } - \textstyle \frac { 1 } { 2 } g _ { t } ^ { 2 } { \mathbf { s } } _ { \Theta } ( { \mathbf { x } } _ { t } , t )$ with loss function
$$
J _ { t } ^ { ( \mathbf { v } ) } ( \mathbf { \ominus } ) : = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } \mathbb { E } _ { p ( \mathbf { c } ) } \| \mathbf { v } _ { \oplus } \big ( \alpha _ { t } \mathbf { x } _ { 0 } + \sigma _ { t } \mathbf { \epsilon } , t \big ) - \big ( \alpha _ { t } ^ { \prime } \mathbf { x } _ { 0 } + \sigma _ { t } ^ { \prime } \mathbf { \epsilon } \big ) \| ^ { 2 } = \frac { g _ { t } ^ { 4 } } { 4 } J _ { t } ^ { ( \mathbf { s } ) } ( \mathbf { \ominus } ) , \quad w _ { t } ^ { ( \mathbf { v } ) } = \frac { 4 } { g _ { t } ^ { 4 } } w _ { t } ^ { ( \mathbf { s } ) } .
$$
It coincides with velocity prediction [42] and the flow matching formulation [33, 34]: $\alpha _ { t } ^ { \prime } \mathbf { x } _ { 0 } + \sigma _ { t } ^ { \prime } \mathbf { \epsilon } \mathbf { \epsilon }$ is the conditional vector field given $\mathbf { x } _ { \mathrm { 0 } }$ and $\epsilon$ (same distribution as $\mathbf { x } _ { T }$ ).
# 3 Estimating the Optimal Loss Value for Diffusion Models
The diffusion loss in various forms (Eqs. 2-5) allows effective and stable learning of intractable targets that would otherwise require diffusion simulation or posterior estimation. Nevertheless, as we will show from the expression of the optimal solution and loss (Sec. 3.1), the optimal loss value is typically non-zero but unknown, obscuring the diagnosis and design of diffusion training. We then develop practical estimators of the optimal loss value, starting from a standard one (Sec. 3.2) to stochastic but scalable estimators applicable to large datasets (Sec. 3.3). Using these tools, we investigate mainstream diffusion models against the optimal loss with a few new observations (Sec. 3.4).
# 3.1 Optimal Solution and Loss Value of Diffusion Models
Despite the intuition, the names of the prediction targets of diffusion model introduced in Sec. 2 might be misleading. Taking the clean-data prediction formulation as an example, it is informationally impossible to predict the exact clean data from its noised version [5]. From the appearance of the loss functions (Eq. (2-5)), the actual learning targets of the models are conditional expectations [6, 4, 3]:
$$
\begin{array} { r l r l } & { \mathbf { s } _ { \mathbf { \theta } } ^ { * } ( \mathbf { x } _ { t } , t ) = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } \mid \mathbf { x } _ { t } ) } [ \nabla _ { \mathbf { x } _ { t } } \log p ( \mathbf { x } _ { t } \mid \mathbf { x } _ { 0 } ) ] , \quad } & & { \mathbf { \epsilon } _ { \mathbf { \theta } } ^ { * } ( \mathbf { x } _ { t } , t ) = \mathbb { E } _ { p ( \epsilon \mid \mathbf { x } _ { t } ) } [ \epsilon ] , } \\ & { \mathbf { x } _ { \mathbf { 0 } \mathbf { \theta } } ^ { * } ( \mathbf { x } _ { t } , t ) = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } \mid \mathbf { x } _ { t } ) } [ \mathbf { x } _ { 0 } ] , \quad } & & { \mathbf { v } _ { \mathbf { \theta } } ^ { * } ( \mathbf { x } _ { t } , t ) = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } , \epsilon \mid \mathbf { x } _ { t } ) } [ \alpha _ { t } ^ { \prime } \mathbf { x } _ { 0 } + \sigma _ { t } ^ { \prime } \epsilon ] , } \end{array}
$$
where the conditional distributions are induced from the joint distribution $p ( \mathbf { x } _ { 0 } , \mathbf { x } _ { t } , \epsilon ) : =$ $p _ { 0 } ( \mathbf { x } _ { 0 } ) p ( \mathbf { \epsilon } ) \delta _ { \alpha _ { t } \mathbf { x } _ { 0 } + \sigma _ { t } \mathbf { \epsilon } } ( \mathbf { x } _ { t } )$ . For completeness, we detail the derivation in Appx. A.
Looking back into the loss functions, the model learns the conditional expectations over random samples from the joint distribution. Hence even at optimality, the loss still holds a conditional variance value. Noting that the joint distribution hence the conditional variance depends on the data distribution, it would be more direct to write down the optimal loss value in the clean-data prediction formulation, which we formally present below:
Theorem 1. The optimal loss value for clean-data prediction defined in Eq. (4) is:
$$
\begin{array} { r } { J _ { t } ^ { ( \mathbf { x } _ { 0 } ) ^ { * } } = \underbrace { \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } { \left\| \mathbf { x } _ { 0 } \right\| } ^ { 2 } } _ { = : A } - \underbrace { \mathbb { E } _ { p ( \mathbf { x } _ { t } ) } { \left\| \mathbb { E } _ { p ( \mathbf { x } _ { 0 } | \mathbf { x } _ { t } ) } [ \mathbf { x } _ { 0 } ] \right\| } ^ { 2 } } _ { = : B _ { t } } , \quad J ^ { * } = \mathbb { E } _ { p ( t ) } w _ { t } ^ { ( \mathbf { x } _ { 0 } ) } J _ { t } ^ { ( \mathbf { x } _ { 0 } ) ^ { * } } . } \end{array}
$$
See Appx. C.1 for proof. For other prediction targets, the optimal loss value can be calculated based on their relations in Eqs. (3, 4, 5) . The expression is derived from $J _ { t } ^ { ( \mathbf { x } _ { 0 } ) ^ { * } } = \mathbb { E } _ { p ( \mathbf { x } _ { t } ) } [$ $\mathbb { E } _ { p ( \mathbf { x } _ { 0 } | \mathbf { x } _ { t } ) } \Big | \Big | \mathbf { x } _ { 0 } - \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ^ { \prime } | \mathbf { x } _ { t } ) } \big [ \mathbf { x } _ { 0 } ^ { \prime } \big ] \Big | \Big | ^ { 2 } \Big ]$ , which is indeed an averaged conditional variance of $p ( \mathbf { x } _ { 0 } \mid \mathbf { x } _ { t } )$ , and takes
a positive value unl
ess at $t = 0$ or when $p ( \mathbf { x } _ { 0 } )$ concentrates only on a single point. For sufficiently large $t$ , $\mathbf { x } _ { t }$ becomes dominated by the noise (see Eq. (1)) hence has diminishing correlation with x0. This means p(x0 | xt) ≈ pdata(x0), hence Jt(x0)∗ $J _ { t } ^ { ( \mathbf { x } _ { 0 } ) ^ { * } } \approx \mathbb { E } _ { p _ { \mathrm { d a t a } } ( \mathbf { x } _ { 0 } ) } \big | \big | \mathbf { x } _ { 0 } - \mathbb { E } _ { p _ { \mathrm { d a t a } } ( \mathbf { x } _ { 0 } ^ { \prime } ) } [ \mathbf { x } _ { 0 } ^ { \prime } ] \big | \big | ^ { 2 }$ approaches the data variance. Note that this optimal loss only depends on
dataset and diffusio
n settings, but not on model architectures and parameterization.
# 3.2 Empirical Estimator for the Optimal Loss Value
To estimate the optimal loss value using Eq. (6) on a dataset $\{ \mathbf { x } _ { 0 } ^ { ( n ) } \} _ { n \in [ N ] }$ , where $[ N ] : = \{ 1 , \cdots , N \}$ , the first term $A : = \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } { \left\| { \mathbf { x } _ { 0 } } \right\| } ^ { 2 }$ can be directly estimated through one pass:
$$
\hat { A } = \frac { 1 } { N } \sum _ { n \in [ N ] } \lVert \mathbf { x } _ { 0 } ^ { ( n ) } \rVert ^ { 2 } .
$$
However, the second term $B _ { t } : = \mathbb { E } _ { p ( \mathbf { x } _ { t } ) } \big \| \mathbb { E } _ { p ( \mathbf { x } _ { 0 } \mid \mathbf { x } _ { t } ) } \big [ \mathbf { x } _ { 0 } \big ] \big \| ^ { 2 }$ requires estimating two nested expectations that cannot be reduced. The inner expe
ctation is take
n under the posterior distribution $p ( \mathbf { x } _ { 0 } \mid \mathbf { x } _ { t } )$ which cannot be sampled directly. By expanding the distribution using tractable ones (Bayes rule), the term can be reformulated as Ep(x0|xt)[x0] = R xp0(px(0x,0x,tx)t )ddx0x0 $\begin{array} { r } { \tilde { \mathbb { E } } _ { p ( \mathbf { x } _ { 0 } | \mathbf { x } _ { t } ) } \big [ \mathbf { x } _ { 0 } \big ] = \frac { \int \mathbf { x } _ { 0 } p ( \mathbf { x } _ { 0 } , \mathbf { x } _ { t } ) \mathrm { d } \mathbf { x } _ { 0 } } { \int p ( \mathbf { x } _ { 0 } , \mathbf { x } _ { t } ) \mathrm { d } \mathbf { x } _ { 0 } } = \frac { \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } [ \mathbf { x } _ { 0 } p ( \mathbf { x } _ { t } | \mathbf { x } _ { 0 } ) ] } { \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } [ p ( \mathbf { x } _ { t } | \mathbf { x } _ { 0 } ) ] } } \end{array}$ Ep(x0)[x0p(xt|x0 . Using Eq. (1) further reduces it as:
$$
\mathbb { E } _ { p ( \mathbf { x } _ { 0 } | \mathbf { x } _ { t } ) } [ \mathbf { x } _ { 0 } ] = \frac { \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } [ \mathbf { x } _ { 0 } K _ { t } ( \mathbf { x } _ { t } , \mathbf { x } _ { 0 } ) ] } { \mathbb { E } _ { p ( \mathbf { x } _ { 0 } ) } [ K _ { t } ( \mathbf { x } _ { t } , \mathbf { x } _ { 0 } ) ] } , \quad \mathrm { w h e r e ~ } K _ { t } ( \mathbf { x } _ { t } , \mathbf { x } _ { 0 } ) : = \exp \left\{ - \frac { \| \mathbf { x } _ { t } - \boldsymbol { \alpha } _ { t } \mathbf { x } _ { 0 } \| ^ { 2 } } { 2 \sigma _ { t } ^ { 2 } } \right\} ,
$$
whose numerator and denominator can then be estimated on the dataset. The outer expectation can be estimated by averaging over a set of independent and identically distributed (IID) samples $\{ \mathbf { x } _ { t } ^ { ( m ) } \} _ { m \in [ M ] }$ following Eq. (1), where each sample is produced by an independently (i.e., with replacement) randomly selected data sample $\mathbf { x } _ { 0 }$ and a randomly drawn noise sample $\mathbf { \epsilon } \sim \mathcal { N } ( \mathbf { 0 } , \mathbf { I } )$ . The estimator for the second term is then:
$$
\hat { B } _ { t } = \frac { 1 } { M } \sum _ { m \in [ M ] } \left\| \frac { \sum _ { n \in [ N ] } \mathbf { x } _ { 0 } ^ { ( n ) } K _ { t } ( \mathbf { x } _ { t } ^ { ( m ) } , \mathbf { x } _ { 0 } ^ { ( n ) } ) } { \sum _ { n ^ { \prime } \in [ N ] } K _ { t } ( \mathbf { x } _ { t } ^ { ( m ) } , \mathbf { x } _ { 0 } ^ { ( n ^ { \prime } ) } ) } \right\| ^ { 2 } .
$$
The outer expectation can be conducted sequentially until the estimation converges. This typically takes $M$ up to two to three times of $N$ . See Appx. E.1 for details.
# 3.3 Scalable Estimators for Large Datasets
Although asymptotically unbiased (Appx. D), the $\hat { B }$ estimator in Eq. (9) incurs a quadratic complexity in dataset size $N$ , which is unaffordably costly for large datasets which are ubiquitous in modern machine learning tasks. For a scalable estimator, dataset sub-sampling is an effective strategy. Instead of using independent random subsets to estimate the numerator and denominator separately, we adopt the self-normalized importance sampling (SNIS) estimator [41, 30] (see Appx. D for background) $\begin{array} { r } { \hat { B } _ { t } ^ { \mathrm { S N I S } } : = \frac { 1 } { M } \sum _ { m \in [ M ] } \| \frac { \sum _ { l \in [ L ] } \mathbf { x } _ { 0 } ^ { ( l ) } K _ { t } ( \mathbf { x } _ { t } ^ { ( m ) } , \mathbf { x } _ { 0 } ^ { ( l ) } ) } { \sum _ { l ^ { \prime } \in [ L ] } K _ { t } ( \mathbf { x } _ { t } ^ { ( m ) } , \mathbf { x } _ { 0 } ^ { ( l ^ { \prime } ) } ) } \| ^ { \sum } } \end{array}$ It uses the same randomly selected (with replacement) subset $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ , where $L \ll N$ , for both the numerator and denominator, which leads to more stable estimates. One can repeat drawing the random data subset $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ and calculate the estimate until convergence.
A specialty for estimating the diffusion optimal loss is that, for a given xt(m) sample, when σt is small, the weight term $K _ { t } ( \mathbf { x } _ { t } ^ { ( m ) } , \mathbf { x } _ { 0 } ^ { ( l ) } )$ is dominated by the $\mathbf { x } _ { \mathrm { 0 } }$ sample closest to $\mathbf { x } _ { t } ^ { ( m ) } / \alpha _ { t }$ (see Eq. (8)), which could be missed in the randomly selected subset $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ , thus incurring a large variance. Fortunately, we know that by construction (Eq. (1)), each $\mathbf { x } _ { t } ^ { ( m ) }$ sample is produced from a data sample x(0n $\mathbf { x } _ { 0 } ^ { ( n _ { m } ) }$ and a noise sample $\mathbf { \epsilon } \mathbf { \epsilon } \mathbf { \epsilon } \mathbf { \epsilon } \mathbf { \epsilon } \mathbf { \epsilon } \mathbf { \prime }$ using $\mathbf { x } _ { t } ^ { ( m ) } = \alpha _ { t } \mathbf { x } _ { 0 } ^ { ( n _ { m } ) } + \sigma _ { t } \mathbf { \epsilon } ^ { ( m ) }$ x(0nm) + σtϵ(m), and when σt is small, αt is also close to 1 (Sec. 2), indicating that x(0n $\mathbf { x } _ { 0 } ^ { ( n _ { m } ) }$ is likely the most dominant $\mathbf { x } _ { \mathrm { 0 } }$ sample and should be included in the subset $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ . This can be simply implemented by constructing the $\{ \mathbf { x } _ { t } ^ { ( \tilde { m } ) } \} _ { \tilde { m } \in [ M ] }$ samples by independently $( i . e .$ , with replacement) drawing a sample $\mathbf { x } _ { 0 } ^ { ( l _ { \tilde { m } } ) }$ from the subset $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ and setting $\mathbf { x } _ { t } ^ { ( \tilde { m } ) } = \alpha _ { t } \mathbf { x } _ { 0 } ^ { ( l _ { \tilde { m } } ) } + \sigma _ { t } \mathbf { \epsilon } ^ { ( \tilde { m } ) }$ tx(0l ˜m) + σtϵ( ˜m) with ϵ( ˜m) ∼ N (0, I). We call it the Diffusion Optimal Loss (DOL) estimator:
$$
\hat { B } _ { t } ^ { \mathrm { D O L } } : = \frac { 1 } { M } \sum _ { \tilde { m } \in [ M ] } \left\| \frac { \sum _ { l \in [ L ] } \mathbf { x } _ { 0 } ^ { ( l ) } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l ) } ) } { \sum _ { l ^ { \prime } \in [ L ] } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l ^ { \prime } ) } ) } \right\| ^ { 2 } .
$$
Nevertheless, this introduces an artificial correlation between $\mathbf { x } _ { t }$ and $\mathbf { x } _ { \mathrm { 0 } }$ samples: it becomes more probable to calculate $K _ { t }$ for $\left( \mathbf { x } _ { t } , \mathbf { x } _ { 0 } \right)$ pairs where ${ \bf x } _ { t }$ is constructed from $\mathbf { x } _ { \mathrm { 0 } }$ . Such pairs have larger $K _ { t }$ values, hence over-estimating $B _ { t }$ and under-estimating the optimal loss $J _ { t } ^ { ( \mathbf { x } _ { 0 } ) ^ { * } }$ . We introduce a simple correction by down-weighting such pairs with a coefficient $C$ , and call it the corrected DOL (cDOL) estimator:
$$
\hat { B } _ { t } ^ { \mathrm { c D O L } } : = \frac { 1 } { M } \sum _ { \tilde { m } \in [ M ] } \left\| \frac { \sum _ { l \in [ L ] , l \neq l _ { \widehat { m } } } \mathbf { x } _ { 0 } ^ { ( l ) } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l ) } ) + \frac { 1 } { C } \mathbf { x } _ { 0 } ^ { ( l _ { \tilde { m } } ) } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l _ { \tilde { m } } ) } ) } { \sum _ { l ^ { \prime } \in [ L ] , l ^ { \prime } \neq l _ { \widehat { m } } } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l ^ { \prime } ) } ) + \frac { 1 } { C } K _ { t } ( \mathbf { x } _ { t } ^ { ( \tilde { m } ) } , \mathbf { x } _ { 0 } ^ { ( l _ { \tilde { m } } ) } ) } \right\| ^ { 2 } ,
$$
where $l _ { \tilde { m } }$ indexes the sample in $\{ \mathbf { x } _ { 0 } ^ { ( l ) } \} _ { l \in [ L ] }$ that is used to construct $\mathbf { x } _ { t } ^ { ( \tilde { m } ) }$ . To formalize the effectiveness, we provide the following theoretical result on the cDOL estimator:
Theorem 2. The $\hat { B } _ { t } ^ { \mathrm { c D O L } }$ estimator with subset size $L$ has the same expectation as the $\hat { B } _ { t } ^ { \mathrm { S N I S } }$ estimator with subset size $L - 1$ when $M \to \infty , C \to \infty$ , hence is a consistent estimator.
See Appx. C.2 for proof. Note that the first terms in the numerator and denominator are unbiased, but the second terms introduce biases due to the artificial correlation between $\mathbf { x } _ { t }$ and $\mathbf { x } _ { \mathrm { 0 } }$ samples. The DOL estimator in Eq. (10) amounts to using $C = 1$ , which suffers from the biases. The bias can be reduced using $C > 1$ in the cDOL estimator. On the other hand, the second terms become the dominant components at small $t$ for estimating the numerator and denominator, respectively. Always including them using a finite $C$ hence reduces estimation variance. The complete process of the cDOL estimator is concluded in Alg. 1 in Appx. E.1.
# 3.4 Estimation Results of Optimal Loss Values
We now provide empirical results of diffusion optimal loss estimates on popular datasets. We first compare the scalable estimators on CIFAR-10 [28] and FFHQ-64 [23] (Fig. 1(a,b)), whose relatively small sizes allow the full-dataset estimate by Eqs. (7, 9) , providing a reference for the scalable estimators. With the best scalable estimator identified (from Fig. 1(c)), we apply it to the much larger ImageNet-64 [29] dataset, and analyze the optimal loss pattern (Fig. 1(d)).
Figure 1: Estimation results of optimal loss value. (a,b) Stepwise optimal loss estimates by the DOL (Eqs. 7, 10) and the corrected DOL (cDOL) (Eqs. 7, 11) estimators, with the full-dataset estimate (Eqs. 7, 9) as reference, on the (a) CIFAR-10 and (b) FFHQ-64 datasets. (c) Error and variance of cDOL estimates using various $C$ values (including DOL and SNIS as extreme cases) for the optimal loss at $\log \sigma = 1 . 2 5$ on CIFAR-10. (d) Stepwise optimal loss on various datasets in different scales. Figures are plotted for the $\mathbf { x } _ { 0 }$ prediction loss of the VE process.
As different prediction targets (Sec. 2) and diffusion processes (Sec. 4.1 below) can be converted to each other, we choose the clean-data prediction target and variance exploding (VE) process $\langle \alpha _ { t } \equiv 1 \rangle$ ) [46, 47] to present the diffusion optimal loss. We plot the optimal loss for each diffusion step, which is marked by $\log { \sigma }$ to decouple the arbitrariness in the time schedule $\sigma _ { t }$ (as advocated by [24]; the same $\sigma$ indicates the same distribution at that step of diffusion). All the scalable estimators repeat data subset sampling until the estimate converges. See Appx. E.1 for details.
Comparison among the scalable estimators. From Fig. 1(a,b), we can see that the DOL estimator indeed under-estimates the optimal loss as we pointed out, especially at intermediate diffusion steps. The cDOL estimator can effectively mitigate the bias, and stays very close to the reference under diverse choices of $C$ . The insensitivity of the cDOL estimator w.r.t $C$ can be understood as that, for small $t$ (equivalently, $\sigma$ ), both the numerator and denominator are dominated by the $C$ -corrected terms, in which $C$ cancels out, and for large $t$ , the $K _ { t } ( \mathbf x _ { t } ^ { ( \tilde { m } ) } , \mathbf x _ { 0 } ^ { ( l _ { \tilde { m } } ) } )$ term is in the same scale as other terms hence is overwhelmed when compared with the summation.
To better analyze the behavior of the estimators, we zoom in on their estimation error and standard deviation. Fig. 1(c) presents the results at an intermediate $\log { \sigma }$ where the estimation is more challenging. The result confirms that the variance increases with $C$ . Particularly, at $C = \infty$ which corresponds to the SNIS estimator (Thm. 2), it is hard to sample the dominating cases for the estimate, leading to a large variance, and a significantly large estimation error. At the $C = 1$ end which corresponds to the DOL estimator, although the variance is smaller, its bias still leads it to a large estimation error. The cDOL estimator with $C$ in between achieves consistently low estimation error. Empirically, a preferred $C$ is around $4 N / L$ . The subset size $L$ can be taken to fully utilize memory.
The pattern of optimal loss. From Fig. 1(d), we observe that the optimal loss Jσ(x0)∗ increases monotonically with the noise scale $\sigma$ on all the three datasets. The optimal loss is close to zero only when the noise scale $\sigma$ is less than a critical point $\sigma ^ { \star }$ , in which situation the noisy samples stay so close to their corresponding clean sources that they are unlikely to intersect with each other, hence preserve the information of the clean samples, allowing the model to perform a nearly perfect denoising. We can see that the critical point $\sigma ^ { \star }$ depends on the dataset. CIFAR-10 achieves the minimal $\boldsymbol { \sigma } ^ { \star }$ , since it has the lowest image resolution $( 3 2 \times 3 2 )$ , i.e., the lowest data-space dimension, where the data samples appear less sparse hence easier to overlap after isotropic noise perturbation. Both FFHQ-64 and ImageNet-64 have $6 4 \times 6 4$ resolution, but ImageNet-64 is larger, hence data samples are easier to overlap, leading to a smaller $\sigma ^ { \star }$ .
Beyond the critical point, the optimal loss takes off quickly. The positive value indicates the intrinsic difficulty of the denoising task, where even an oracle denoiser would be confused. The increase trend converges for sufficiently large noise scale σ, which meets our analysis under Thm. 1 that Jσ(x0)∗ converges to the data variance. As ImageNet-64 contains more diverse samples (images from more classes), it has a larger data variance, hence converges to a higher value than the other two.
Table 1: Viewing various diffusion models under the same formulation as the $\mathbf { x } _ { \mathrm { 0 } }$ prediction under the VE process, following Eq. (12). We consider mainstream diffusion models (top 5 rows) and FM variants (bottom 2 rows). Each diffusion model is labeled by “(diffusion process)-(prediction target)”.
$$
\begin{array} { l c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c c } & { \frac { } { } } & & { \frac { c c _ { c } ^ { 2 } } { c _ { c } ^ { 2 } } } & { \frac { c _ { c } ^ { 3 } } { c _ { c } ^ { 2 } } } & { \frac { c _ { c } ^ { 1 0 } } { c _ { c } ^ { 2 } } } & { \frac { c _ { c } ^ { 1 0 } } { c _ { c } ^ { 2 } } } & { \frac { w _ { \sigma } } { c _ { \sigma } ^ { 2 } } } & { \frac { } { w _ { \sigma } } { c } } & { \frac { p ( \sigma ) } { t } } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } \\ { \mathrm { V P - \epsilon \left( { \mathrm { D D D P M } } \right) \left[ 1 7 \right] } } & { 1 } & { - \sigma } & { \frac { 1 } { c } } & { \frac { 1 } { \sqrt { 1 + \sigma ^ { 2 } } } } & { 9 9 9 t ( \sigma ) } & { \frac { 1 } { \sigma ^ { 2 } } } & { \frac { 1 } { \sigma ^ { 2 } } } & { \sigma = \sqrt { \epsilon ^ { 2 } \frac { 1 } { 2 } \left( 3 \omega _ { 1 } t - \frac { 1 } { 2 } \right) \left( 3 \omega _ { 1 } \sigma - \beta _ { 1 0 } \right) t ^ { 2 } - 1 } } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } & { } \\ { \mathrm { V E - \mathbf { F } \left( { \mathrm { E D M } } \right) \left[ 2 4 \right] } } & { \frac { \sigma _ { 2 } ^ { 2 } \omega _ { \mathrm { L A m } } } { \sigma ^ { 2 } + \sigma _ { 4 \alpha \epsilon } ^ { 2 } } } & \frac { \sigma _ { 2 } ^ { 2 } \omega _ { \mathrm { d i c } } \sigma _ { \mathrm { m } } ^ { 2 } } { \sqrt { \sigma ^ { 2 } + \sigma _ { 4 \alpha \epsilon } ^ { 2 } } } & { \frac { 1 } { 4 \sigma } } & { \frac { 1 } { 4 \sigma } } & \frac { 1 } { \sigma ^ { 2 } + \sigma _ { 4 \alpha \epsilon } ^ { 2 } } & { \frac { 1 } { 4 \sigma } } & { \frac { 1 } { \sigma ^ { 2 } + \sigma _ { 4 \alpha \epsilon } ^ { 2 } } } & { \frac { 1 } { \sigma ^ { 2 } + \sigma _ { 4 \alpha \epsilon } ^ { 2 } } } & { \frac { 1 } { \sigma } } & \frac { 1 } \end{array}
$$
# 4 Analyzing and Improving Diffusion Training Schedule with Optimal Loss
From the training losses in Sec. 2, the degree of freedom for the training strategy of diffusion models is the noise schedule $p ( t )$ and the loss weight $\boldsymbol { w } _ { t }$ , collectively called the training schedule. In the literature, extensive works [17, 47, 24, 27, 10] have designed training schedules for various prediction targets and diffusion processes individually, based on the analysis on the loss scale over diffusion steps. Here, we argue that analyzing the gap between the loss and the optimal loss would be a more principled approach, since it is the gap but not the loss itself that reflects the data-fitting insufficiency and the potential for improvement. Under this view, we first analyze and compare the loss gap of mainstream diffusion works on the same ground (Sec. 4.1), identifying new patterns that are related to generation performance. We then develop a new training schedule based on the observation (Sec. 4.2).
# 4.1 Analyzing Training Schedules through Optimal Loss
Existing training schedules are developed under different diffusion processes and prediction targets. For a unified comparison on the same ground, we start with the equivalence among the formulations and convert them to the same formulation. As explained in Sec. 3.4, we use the noise scale $\sigma$ in place of $t$ to mark the diffusion step to decouple the choice of time schedule $\sigma _ { t }$ .
Equivalent conversion among diffusion formulations. Sec. 2 has shown the equivalence and conversion among prediction targets. We note that different diffusion processes in the form of Eq. (1) can also be equivalently converted to each other. Particularly, the variance preserving (VP) process $( \alpha _ { \sigma } = \sqrt { 1 - \sigma ^ { 2 } } )$ [44, 17] and the flow matching (FM) process $\stackrel { \prime } { \alpha } _ { \sigma } = 1 - \sigma )$ [33, 34] can be converted to the variance exploding (VE) process $\{ \alpha _ { \sigma } \equiv 1 \}$ ) [46, 47] by $\begin{array} { r } { \mathbf { x } _ { \sigma } ^ { \mathrm { V E } } : = \frac { \mathbf { x } _ { \sigma } } { \alpha _ { \sigma } } } \end{array}$ , since $\begin{array} { r } { \mathbf { x } _ { \sigma } ^ { \mathrm { V } \bar { \mathrm { E } } } = \mathbf { x } _ { 0 } + \frac { \sigma } { \alpha _ { \sigma } } \mathbf { \epsilon } \epsilon } \end{array}$ by Eq. (1), and ${ \bf x } _ { 0 } = { \bf x } _ { 0 } ^ { \mathrm { V E } }$ . The correspondence of diffusion step is given by $\begin{array} { r } { \sigma ^ { \mathrm { V E } } = \frac { \sigma } { \alpha _ { \sigma } } } \end{array}$ . With this fact, various diffusion models can be viewed as different parameterizations of the $\mathbf { x } _ { \mathrm { 0 } }$ prediction under the VE process [24], where the parameterization is formulated by:
$$
\begin{array} { r } { \mathbf { x } _ { 0 \Theta } ( \mathbf { x } , \sigma ) = c _ { \sigma } ^ { \mathrm { s k i p } } \mathbf { x } + c _ { \sigma } ^ { \mathrm { o u t } } \mathbf { F } _ { \Theta } ( c _ { \sigma } ^ { \mathrm { i n } } \mathbf { x } , c _ { \sigma } ^ { \mathrm { n o i s e } } ) , } \end{array}
$$
where $\mathbf { x } _ { 0 \theta } , \mathbf { x } .$ and $\sigma$ are the $\mathbf { x } _ { 0 }$ prediction model, the diffusion variable, and the noise scale under the VE process, and $\mathbf { F } _ { \boldsymbol { \Theta } } ( \cdot , \cdot )$ represents the bare neural network used for the original prediction target and diffusion process. The precondition coefficients $c _ { \sigma } ^ { * }$ are responsible for the conversion. Their instances for reproducing mainstream diffusion models are listed in Table 1, where the converted $w _ { \sigma }$ and $p ( \sigma )$ from the original works are also listed. See Appx. B for details. For EDM [24], the precondition coefficients are not derived from a conversion but directly set to satisfy the input-output unit variance principle. This leads to a new prediction target which we refer to as the $\mathbf { F }$ prediction.
Actual loss vs. the optimal loss. Under the above convention, we convert the actual training loss of various diffusion models to the $\mathbf { x } _ { \mathrm { 0 } }$ prediction loss under the VE process, which serves as a unified metric on the same ground. We conduct the comparison on the CIFAR-10 dataset, and compare their actual loss against the optimal loss, which has been estimated and presented in Fig. 1(a; the full-dataset curve) under the unified formulation. The results are shown in Fig. 2(a). We can see that the optimal loss reveals that the data-fitting quality across different diffusion steps is not as expected before, and the actual loss cannot reflect it. For example, although the actual loss is large at large noise scale, most diffusion models (except VE-ϵ) actually have a near-optimal fit. Instead, the optimal loss identifies that the region where existing diffusion models still have room to improve lies in the intermediate noise scales, which is around $\operatorname { l o g } \sigma \in [ - 2 . 5 , 2 . 5 ]$ .
Figure 2: Actual stepwise training loss across noise scales by various diffusion models on CIFAR-10, compared with the optimal loss. (a) Results of mainstream diffusion models. (b) Comparison of prediction targets under the flow matching (FM) process; (c) Comparison of existing training schedules and our schedule (Sec. 4.2) under the FM process with the v prediction target. Curves of different diffusion models are plotted together by viewing them as parameterizations of the $\mathbf { x } _ { \mathrm { 0 } }$ prediction under the variance exploding (VE) process (Eq. (12); Table 1).
Loss gap vs. generation performance. We now use the gap to the optimal loss as the fundamental data-fitting measure to analyze which region is more critical for the generation performance, measured in Fréchet Inception Distance (FID) [16] which is also marked in Fig. 2. All diffusion models use the same deterministic ODE sampler with ${ \mathrm { N F E } } = 3 5$ following [24]. We can see that an erroneous fit at large noise scales leads to a deficiency in generation quality. Among methods with good fit in this range, the intermediate noise scale region $[ - 2 . 5 , 2 . 5 ]$ becomes more relevant to the generation performance. From the two inset figures, we find a trade-off within this region: counter-intuitively, around the critical point $\sigma ^ { \star }$ , the loss gap negatively correlates to the FID, and the correlation becomes positive only until $\sigma$ decreases to the left part of the region. This indicates that although the loss gap is maximal around $\boldsymbol { \sigma } ^ { \star }$ , it is worth it for generation performance to sacrifice the region around $\sigma ^ { \star }$ for better fit at smaller noise scales $\sigma < \sigma ^ { \star }$ .
Comparison of prediction targets. The optimal loss under the unified formulation also enables comparison of prediction targets. We fix the FM process for a fair comparison, and in addition to the existing v prediction case, we also run the $\epsilon$ and $\mathbf { x } _ { \mathrm { 0 } }$ prediction versions (bottom two rows in Table 1). As shown in Fig. 2(b), we observe that the ϵ prediction model excels at small noise scales, but exhibit an undesired data-fitting at large noise scales. This also meets the result of VE-ϵ in Fig. 2(a). In contrast, the $\mathbf { x } _ { \mathrm { 0 } }$ prediction model fits data well at large noise scales, but insufficiently for small noise scales. Notably, the $\mathbf { v }$ prediction model achieves a close fit over all noise scales. The preferred performance can be understood from its formulation as the combination of $\epsilon$ and $\mathbf { x } _ { \mathrm { 0 } }$ prediction [11]. The $\mathbf { F }$ prediction also has a combination formulation, and its result in Fig. 2(a) also shows a consistently good fit.
# 4.2 Principled Design of Training Schedule
The training schedule plays a crucial role in optimizing diffusion models. Inspired by observations in Sec. 4.1, we design a principled training schedule based on diffusion optimal loss estimates.
The loss weight. $w _ { \sigma }$ calibrates the error resolution across different noise scales. For this, the optimal loss $J _ { \sigma } ^ { * }$ provides a perfect reference scale for the loss at each diffusion step $\sigma$ , so $w _ { \sigma } = a / J _ { \sigma } ^ { * }$ with a scale factor $a$ is a natural choice to align the loss at various $\sigma$ to the same scale. Although it downweighs the loss for large noise scales, the above observation suggests that the model can still achieve a good fit if using $\mathbf { v } , \mathbf { F }$ , or $\mathbf { x } _ { 0 }$ prediction. For smaller noise scales, a cutoff $w ^ { \star }$ is needed to avoid divergence, which stops the increase of $w _ { \sigma }$ before $\sigma$ runs smaller than the critical point $\sigma ^ { \star }$ . As observed in Sec. 4.1, to the left of $\sigma ^ { \star }$ , there is an interval with a strong positive correlation to the generation performance (i.e., the left inset figure of Fig. 2(a)). We hence introduce an additional weight function $f ( \sigma ) = \mathcal { N } ( \log \sigma ; \mu , \varsigma ^ { 2 } )$ , whose parameters $\mu$ and $\varsigma$ are chosen to put the major weight over the interval. The resulting loss weight is finally given by: $\begin{array} { r } { w _ { \sigma } = a \ \operatorname* { m i n } \{ \frac { 1 } { J _ { \sigma } ^ { * } } , w ^ { \star } \} + \dot { f ( \sigma ) } \mathbb { I } _ { \sigma < \sigma ^ { \star } } } \end{array}$
The noise schedule. $p ( \sigma )$ allocates the optimization intensity to each noise level. A desired $p ( \sigma )$ should favor noise steps on which the optimization task has not yet been done well, which is to minimize $w _ { \sigma } J _ { \sigma } ( \theta )$ down to $w _ { \sigma } J _ { \sigma } ^ { \star }$ . Therefore, the weight-calibrated loss gap provides a principled measure for optimization insufficiency, which leads to an adaptive noise schedule: $p ( \sigma ) \propto$ $\overline { { w _ { \sigma } ( J _ { \sigma } ( \pmb { \theta } ) - J _ { \sigma } ^ { * } ) } }$ .
CIFAR-10 & ImageNet-64 Results. We evaluate the designed training schedule in training two advanced diffusion models EDM [24] and Flow Matching (FM) [33] on the CIFAR10 and ImageNet-64 datasets (details in Appx. E.2). As shown in Table 2, our training schedule significantly improves generation performance upon the original in both EDM and FM settings and on both datasets, demonstrating the value of the insights from our analysis using the optimal loss. For a closer look at how our schedule works, in Fig. 2(c) we plot the step-wise training loss in the FM setting using our schedule, and compare it with the schedule of the original [33] and of StableDiffusion 3
Table 2: Generation FID ( ) by existing training schedules and ours on CIFAR-10 and ImageNet-64.
(SD3) [10]. We find that our schedule indeed further decreases the loss in the interval with positive correlation to performance, aligning with the insight from Sec. 4.1.
ImageNet-256 Results. We further evaluate our training schedule on the ImageNet-256 dataset, and compare the results with existing approaches. We use VA-VAE [52] as the tokenizer and employ a modified LightningDiT [52] architecture enhanced with QK-Normalization [7] to improve training stability (details in Appx. E.2). As shown in Table 3, our training schedule improves the generation performance over the original LightningDiT training schedule, and is comparable or exceeding other diffusion models in latent space and pixel space.
Table 3: Comparison between existing training schedules and ours on the ImageNet-256 dataset.
Figure 3: Scaling law study using optimal loss on ImageNet-64. Training curves at $\log \sigma = 4 . 3 8$ using various model sizes and their envelope are plotted showing (a) the actual loss and ${ \bf ( b ) }$ the gap between the actual and the optimal loss. Curves showing the gap for the total loss (covering all diffusion steps) are plotted in (c). Correlation coefficients $\rho$ for the envelopes are marked.
# 5 Principled Scaling Law Study for Diffusion Models
Neural scaling law [22] has been the driving motivation for pursuing large models, which shows the consistent improvement of model performance with computational cost. The conventional version takes the form of a power law [22, 15, 18]: $J ( F ) = \beta F ^ { \alpha }$ , where $F$ denotes floating point operations (FLOPs) measuring training budget, $J ( F )$ denotes the minimal training loss attained by models in various sizes (the envelope in Fig. 3), and $\alpha < 0$ and $\beta > 0$ are power law parameters. The specialty of a scaling law study for diffusion model is that, as the optimal loss sets a non-zero lower bound of the training loss, not all the loss value in $J ( F )$ can be reduced with the increase of $F$ , questioning the form which converges to zero as $F \to \infty$ . Instead, the following modified power law is assumed:
$$
J ( F ) - J ^ { * } = \beta F ^ { \alpha } ,
$$
where $J ^ { * }$ denotes the optimal loss value. It can be rephrased as that $\log ( J ( F ) - J ^ { * } )$ is linear in $\log F$ , so we can verify it through the linear correlation coefficient $\rho$ . We conduct experiments using current state-of-the-art diffusion models EDM2 [25] with parameter size ranging from 120M to 1.5B.
We first compare the model training curves at a large noise scale, for which Fig. 3(a) and (b) assume the original and the modified (Eq. (13)) scaling laws, respectively. We can observe that in the modified version, the envelope is indeed closer to a line, and the improved correlation coefficient $\rho = 0 . 9 4$ (vs. 0.82) validates this quantitatively. For the total loss, we use the optimized adaptive loss weight by EDM2 [25]. The result is shown in Fig. 3(c), which achieves $\rho = 0 . 9 9 1 7$ , and the fitted scaling law is given by: $J ( F ) = 0 . 3 6 7 5 F ^ { - 0 . 0 1 4 } + \mathrm { \bar { 0 } } . 0 1 5$ . Appx. E.3 provides more results on ImageNet-512. We hope this approach could lead to more profound future studies in the scaling law for diffusion models. | Diffusion models have achieved remarkable success in generative modeling.
Despite more stable training, the loss of diffusion models is not indicative of
absolute data-fitting quality, since its optimal value is typically not zero
but unknown, leading to confusion between large optimal loss and insufficient
model capacity. In this work, we advocate the need to estimate the optimal loss
value for diagnosing and improving diffusion models. We first derive the
optimal loss in closed form under a unified formulation of diffusion models,
and develop effective estimators for it, including a stochastic variant
scalable to large datasets with proper control of variance and bias. With this
tool, we unlock the inherent metric for diagnosing the training quality of
mainstream diffusion model variants, and develop a more performant training
schedule based on the optimal loss. Moreover, using models with 120M to 1.5B
parameters, we find that the power law is better demonstrated after subtracting
the optimal loss from the actual training loss, suggesting a more principled
setting for investigating the scaling law for diffusion models. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
# 1 Introduction
Automating coding using Large Language Models (LLMs) and LLM-based agents has become a very active area of research. Popular benchmarks like LiveCodeBench [16] and SWE-bench [17] respectively test coding abilities on standalone competitive coding problems and GitHub issues over library or application code. Despite the demonstrated progress of LLM-based coding agents on these benchmarks, they are yet to scale to complex tasks over an important class of code, systems code.
Systems code powers critical and foundational software like file and operating systems, networking stacks, distributed cloud infrastructure and system utilities. Systems codebases have multiple dimensions of complexity. Firstly, they are very large, containing thousands of files and millions of lines of code. Secondly, systems code often interfaces directly with the hardware and is performance critical. This results in complex low-level code (involving pointer and bit manipulations, compile-time macros, etc.) in languages like $\mathrm { C / C } { + + }$ , and global interactions between different parts of the codebase for concurrency, memory management, maintenance of data-structure invariants, etc. Finally, foundational systems codebases have rich development histories spanning years or even decades, containing contributions by hundreds or thousands of developers, which are important references on legacy design decisions and code changes.
Due to the size and complexities of systems code, making changes to a systems codebase is a daunting task, even for humans. Automating such changes requires a different type of agents, agents that can research about many pieces of context, derived automatically from the large codebase and its massive commit history, before making changes. Recently, deep research agents have been developed to solve complex, knowledge-intensive problems that require careful context gathering and multi-step reasoning, before synthesizing the answer. The agents and techniques have mostly focused on longform document generation or complex question-answering over web contents [32, 30, 7, 31, 20, 38] and enterprise data [1, 25]. Inspired by these advances, we propose the first deep research agent for code, called Code Researcher, and apply it to the problem of generating patches for mitigating crashes reported in systems code.
As shown in Figure 1, Code Researcher works in three phases: (1) Analysis: Starting with the crash report and the codebase, this phase performs multi-step reasoning over semantics, patterns, and commit history of code. The “Reasoning Strategies” block shows the reasoning strategies used. Each reasoning step is followed by invocations of tools (labeled “Actions” in Figure 1) to gather context over the codebase and its commit history. The information gathered is stored in a structured context memory. When the agent is able to conclude that it has gathered sufficient context, it moves to the next phase. (2) Synthesis: The Synthesis phase uses the crash report, the context memory, and the reasoning trace of the Analysis phase to filter out irrelevant memory contents. Then, it identifies one or more buggy code snippets from memory, possibly spread across multiple files, and generates patches. (3) Validation: Finally, the Validation phase checks if the generated patches prevent the crash from occurring using external tools. A successful patch is presented to the user.
The Linux kernel [21] is a canonical example of a systems codebase with complex low-level code and massive size (75K files and 28M lines of code), and has rich development history. Mathai et al. [23] recently proposed a benchmark, called kBenchSyz, of 279 Linux kernel crashes detected by the Syzkaller fuzzer [12]. Through extensive experimentation on this challenging benchmark, we evaluate the effectiveness of Code Researcher and compare it to strong baselines. Most of the existing coding agents are geared towards resolving bugs in moderately-sized codebases given issue descriptions, as exemplified by the popular SWE-bench [17] benchmark. The issue descriptions are written by humans, wherein they explain the nature of the bug and which files are likely relevant. Coding agents [40, 36] are designed to take advantage of this and quickly navigate the repository to reach the buggy files. They do not expend much efforts in gathering codebase-wide context. In our setting, the bugs are described by stack traces which are devoid of natural language hints and typically contain a much larger number of files and functions than an issue description. Therefore, multi-step reasoning and context gathering becomes more important. Our experimental results bear a strong witness to this.
As a strong baseline, we customized SWE-agent [40], a SOTA open-source agent on SWE-bench, for kernel crash resolution. Code Researcher resolved $48 \%$ crashes compared to $3 1 . 5 \%$ of SWE-agent with a budget of 5 trajectories each and GPT-4o as the LLM. Code Researcher explored about 10 files per trajectory compared to a much smaller number 1.33 of files explored by SWE-agent. Further, in a direct comparison on 90 bugs where both Code Researcher and SWE-agent edit all the ground-truth buggy files, the resolution rate of Code Researcher is $6 1 . 1 \%$ compared to $3 7 . 8 \%$ of SWE-agent. This clearly shows that Code Researcher is able to research and gather more useful context. Using o1 only for patch generation, Code Researcher’s performance improves to $5 8 \%$ , showing that well-researched context enables a reasoning model to improve the performance significantly.
Concurrent to our work, Mathai et al. [24] have proposed a specialized agent for resolving Linux kernel crashes. However, they perform evaluation in the assisted setting wherein (a) the agent is provided the ground-truth buggy files to edit, and (b) they build Linux-kernel specific tooling to scale. In contrast, we evaluate in the realistic, unassisted setting in which Code Researcher has to identify the buggy files by itself, using general search tools. Another factor that distinguishes our work is the use of commit history, which to the best of our knowledge, none of the existing coding agents do. Commit history is known to contain important information [18]. With an ablation study, we show that searching over commits plays an important role in the success of Code Researcher.
In addition to the thorough experimentation on kBenchSyz, we also experiment on an open-source multimedia software, FFmpeg [3]. Code Researcher was able to generate crash-preventing patches for $7 / 1 0$ crash reports tested, establishing its generalizability.
In summary, we make the following main contributions:
(1) We design the first deep research agent for code, Code Researcher, capable of handling large systems code and resolving crashes. Recognizing the importance of commit history in systems code, we equip the agent with a tool to efficiently search over commit histories.
(2) We evaluate Code Researcher on the challenging kBenchSyz benchmark [23] and achieve a crash resolution rate of $58 \%$ , outperforming strong baselines. We also demonstrate generalizability of Code Researcher on a multimedia software, FFmpeg.
(3) Through a comprehensive evaluation, we provide insights such as (i) how our deep research agent outperforms agents that do not focus on gathering relevant context, (ii) that this advantage persists even if the existing SOTA agent is given higher inference-time compute, and (iii) reasoning models improve performance significantly if given well-researched context.
# 2 Related work
LLM-powered software development subfield has produced several autonomous coding agents [40, 39, 36, 42, 35], predominantly evaluated on SWE-bench [17]. SWE-bench focuses on GitHub issues from small to medium-sized Python repositories. However, systems code, the focus of our work, presents unique challenges. We highlight and contrast key related work in this context.
Coding agents Agents like SWE-agent [40] or OpenHands [36] use a single ReAct-style [41] loop endowed with shell commands or specialized tools for file navigation and editing. However, agents like these do not use program structure to traverse the codebase (e.g., following data and control flow chains) and are not designed to reason about complex interactions and gather context. As a result, they tend to explore a small number of files per bug and make an edit, without gathering and reasoning over the full context of the bug (see Section 5.3). AutoCodeRover [42] uses tools based on program structure to traverse the codebase (albeit limited to Python code). It performs explicit localization of the functions/classes to edit using these tools, and those are later repaired. Code Researcher does not explicitly localize the functions to edit; instead it gathers relevant context for patch generation and decides what to edit in the Synthesis phase. Code Researcher is also the first agent to incorporate causal analysis over historical commits; this is critical to handling subtle bugs introduced by code evolution in long-lived systems codebases.
Deep research agents Deep research is a fast emerging subfield in agentic AI [25, 30, 7, 31], to tackle complex, knowledge-intensive tasks, that can take hours or days even for experts. Academic work so far has focussed on long-form document generation [5, 32], scientific literature review [38, 14], and complex question-answering [20, 37] based on the web corpus. The key challenges in deep research for such complex tasks include (a) intent disambiguation, (b) exploring multiple solution paths (breadth of exploration), (c) deep exploration (iterative tool interactions and reasoning), and (d) grounding (ensuring that the claims in the response are properly attributed). Most of the aforementioned challenges also apply to our setting. To the best of our knowledge, our work is the first to design and evaluate a deep research strategy for complex bug resolution in large codebases.
Most recently, OpenAI’s Deep Research model has been integrated with GitHub repos for report generation and QA over codebases [29]. However, (a) it does not support agentic tasks like bug fixing, and (b) there is no report on its effectiveness in real-world developer tasks.
Long context reasoning Support for increasing context length sizes in LLMs has been an active area of research [33, 15], opening up the possibility of feeding the entire repository into an LLM’s context and generating a patch. But there are a few complications. First, note that the Linux kernel has over 75K files and 28 Million lines of code. In contrast, state of the art models today (e.g., Gemini $2 . 5 \mathrm { P r o }$ ) support at most 2M tokens in the context window [8, 9], roughly corresponding to around 100K lines of code [8]. Second, long-context models do not robustly make use of the information in context. They often get “lost in the middle” [22], performing highest when relevant information occurs at the beginning or end of the input context, and significantly worse when they must access relevant information in the middle of long contexts. Li et al. [19] found that long-context LLMs struggle with processing long, context-rich sequences and reasoning over multiple pieces of information (which is important for any automated software development task).
Automated kernel bug detection and repair Prior work for detecting Linux kernel bugs includes various types of sanitizers, e.g., Kernel Address Sanitizer (KASAN) [10], and the Syzkaller kernel fuzzer [12], an unsupervised coverage-guided fuzzer that tries to find inputs on which the kernel crashes. Code Researcher, complementary to this, generates patches from crash reports. We use some traditional software engineering concepts like deviant pattern detection [4] and reachability analysis [26], but leverage LLMs to scale to large codebases (Section 3). As noted earlier, CrashFixer [24] targets Linux kernel crashes but assumes that buggy files are known a priori. This assumption is unrealistic for large codebases like the Linux kernel. In contrast, Code Researcher autonomously locates buggy files using general search tools.
# 3 Design of Code Researcher
Large systems codebases, owing to their critical nature, undergo strict code development and reviewing practices by expert developers. The bugs that still sneak in are subtle and involve violations of global invariants (e.g., a certain data structure should be accessed only after holding a specific lock) and coding conventions (e.g., use of a specific macro to allocate memory), and past changes that cause unintended side effects. To fix such bugs, an agent needs to gather sufficient context about the codebase and its commit history, before it can generate any hypotheses about the cause of a bug and attempt to fix it. With this insight, we design our deep research agent, Code Researcher. As shown in Figure 1, Code Researcher comprises of three phases: (1) Analysis, (2) Synthesis and (3) Validation. We discuss the design of these phases in details now. The detailed prompts for all these phases are provided in the supplementary material.
# 3.1 Analysis phase
The Analysis phase of Code Researcher is responsible for performing deep research to understand the cause of a reported crash. We equip this phase with (a) actions to efficiently search over the codebase and the commit history, (b) reasoning strategies for code, and (c) a structured context memory.
# 3.1.1 Actions to search over codebase and commit history
We support the following actions: (1) search_definition(sym): To search for the definition(s) of the specified symbol, which can be the name of a function, struct, global constant, union or macro and so on. It can be optionally passed the file name to limit the search to a file. (2) search_code(regex): To search the codebase for matches to the specified regular expression. This is a simple yet powerful tool, which can be used for searching for any coding pattern such as call to a function, dereferences to a pointer, assignment to a variable and so on. (3) search_commits(regex): To search for matches to a regular expression over commit messages and diffs associated with the commits. The regular expression offers expressiveness, e.g., to search for occurrence of a term (“memory leak”) in the commit messages or coding patterns in code changes (diffs). In addition, the agent can invoke (4) done to indicate that it has finished the Analysis phase and (5) close_definition(sym): To remove the definition of a symbol from the memory if the symbol is deemed irrelevant to the task.
# 3.1.2 Reasoning strategies for code
We ask the agent to explore the codebase to figure out the root cause of a crash and gather sufficient context to propose a fix. The crash reports consist of stack traces and additional information generated by diagnostic tools such as an address sanitizer [10] that detects memory corruption, a concurrency sanitizer [11] that detects data races or an undefined behavior sanitizer [13] that detects undefined behavior at runtime. We provide a brief description of these tools in the prompt to help the agent interpret the diagnostic information. We induce the following reasoning strategies through prompting to guide the exploration of the codebase and its commit history. As shown in Figure 1, each reasoning step is followed by one or more actions. Additionally, we present the agent with a simple scratchpad at each step (shown as a markdown list of strings in the prompt), where it can add any important discoveries about the bug that should be emphasized for future steps.
Chasing control and data flow chains The control flow [26] of a code snippet refers to the functions that are called and the branches in it, including conditional statements, loops, gotos and even conditional compilation macros. Given a crash report and some code, the agent is asked to reason about control flow to understand how execution flows between different functions and how it leads to the crash. Similarly, data flow [26] refers to how values of variables get passed to different functions and how one variable is used to define another. So the agent should also reason about how data flows in the code. As a result of this reasoning, the agent may invoke a search_definition(sym) action (optionally also specifying the file to search in) to search for the definition of sym if it suspects that sym may have something to do with the buggy behavior and needs more information about sym to confirm or dispel the suspicion. It can also use other actions as suitable, e.g., search_code $\scriptstyle ( \mathbf { x } \backslash \mathbf { s } * = )$ to look for assignments to a variable named $\mathbf { x }$ , with $\backslash \mathbf { s } *$ indicating zero or more whitespaces.
Searching for patterns and anti-patterns Traditional software engineering literature thinks of bugs as anomalies – patterns of code that are deviant [4]. It follows that, to diagnose and understand a bug, one can find certain patterns of frequent behavior in the repository and check if a given piece of code deviates from it. Code Researcher reasons about which behavior is common or “normal” as well as which code snippets look anomalous. It can then perform a search_code(regex) action to search for these patterns and anti-patterns using regular expressions. A classic case is checking a pointer for null value after allocation. If the agent notices a missing null check for ptr, it can perform search_code(if\s\*\( $_ { \cdot \mathrm { p t r } = = \mathrm { N U L L } \setminus } )$ ) to search for null checks throughout the codebase on ptr. Similarly, it can perform search_code $\scriptstyle \cdot p \ t r \setminus \mathbf { s } * =$ . $\mathtt { * a l l o c \backslash ( . * \backslash ) } )$ to search for all allocations to ptr and actually verify whether other parts of the codebase typically perform a null check or not.
Causal analysis over historical commits An interesting and challenging aspect of a codebase that has been in development for a long time, as many foundational systems codebases have, is the rich history of commits. Because of continuous development, it is likely that a new bug has some past commits that can prove helpful in understanding or solving it. Indeed, developers often reference other commits when they come up with patches. Code Researcher reasons about how the codebase has evolved and how that evolution is related to the crash report. It can issue a search_commits(regex) action to search over past commit messages and diffs. For instance, the regular expression handle->size | crypto_fun\( matches commits that add or remove a handle->size access, or a call to crypto_fun.
Iterative process of deep research As shown in Figure 1, in each reasoning step, Code Researcher is asked to decide if it has acquired sufficient context to understand and solve the crash. If yes, it moves to the next phase of synthesizing the patch (Section 3.2). Initially, the context is empty and it starts its reasoning process by analyzing the contents of the stack trace and the diagnostic information provided as input. In each step, the agent gets to evaluate the context accrued so far, as a whole, in the light of all possible reasoning strategies it can use. Based on this, it can decide which lines of exploration to extend, and issue multiple search actions simultaneously.
# 3.1.3 Structured context memory
We maintain a structured context memory to keep a list of (action, result) pairs for every reasoning step. Examples of actions and their results are given in Appendix A. The contents of the memory are reviewed by the agent in each reasoning step.
# 3.2 Synthesis and Validation phases
The contents of memory and the reasoning trace of the Analysis phase are passed to the Synthesis phase, along with the crash report. The Analysis phase has the flexibility to follow multiple paths of inquiry simultaneously. It can thus end up collecting information that does not turn out to be relevant, which also happens when a human does research on some topic. The Synthesis phase first filters the memory and discards (action, result) pairs that are deemed irrelevant to the task of fixing the crash. The agent then uses the filtered information to generate a hypothesis about the nature of the bug and a potential remedy, and the corresponding patch. Finally, in the Validation phase, the patch is applied to the codebase, and the codebase is compiled. The user-space program that had originally caused a crash is run. If the crash is reproduced, the patch is rejected. If not, it is accepted.
# 4 Experimental setup
Benchmarks We use the kBenchSyz benchmark [23] of 279 past Linux kernel crashes found by the fuzzing tool Syzkaller [12]. Each instance in the benchmark consists of (1) a reproducer file, containing the user-space program, that triggers the crash, (2) the ground-truth commit that fixed the bug, i.e., the commit after which the kernel no longer crashes on the reproducer, and (3) the crash report at the parent commit of the fix commit (we run Code Researcher and other competing tools at this parent commit). The benchmark also has the kernel config used to compile the crashing kernel and the list of kernel subsystems involved in each bug. We validated the 279 instances (i.e., the reproducers and the ground-truth fixes), and ruled out 9 instances for which we could not run the kernel at the parent commit, 27 for which the kernel at the parent commit did not crash, and 43 where the kernel still crashed after applying the fix. So, for our experiments, we use the remaining 200 instances that we successfully validated. For reproducibility, we use the crash reports generated during our validation instead of the crash reports originally present in kBenchSyz. To show generalizability, we also evaluated Code Researcher on 10 recent crashes reported for an open-source multimedia software, FFmpeg [3] (more details in Section 5.5).
Evaluation metrics We compute $\mathrm { P a s s } @ \mathrm { k } \left( \mathbf { P } @ k \right)$ defined as $\mathbf { P } @ k = 1$ , if at least one of the $k$ candidate patches generated by the tool prevents the crash, i.e., after applying the patch, the compiled kernel does not crash on the reproducer anymore, or $\mathbf { P } @ k = 0$ otherwise. We report (1) Crash Resolution Rate (CRR) which is average $\mathbf { P } \ @ k$ , (2) average recall, i.e., the fraction of files modified in the ground-truth commit (the ground-truth buggy files) in the set of files edited by the agent, averaged over the $k$ candidate patches, and (3) the percentage of candidate patches where All, Any or None of the ground-truth buggy files are edited. When a tool does not produce a patch (e.g., it runs out of LLM call budget), the set of edited files is assumed to be empty. All the metrics are averaged over the 200 instances in the benchmark.
LLMs, sampling parameters, and budget We employ GPT-4o (v. 2024-08-06) for Code Researcher and for the competing tools. We also experiment with o1 (v. 2024-12-17) in the Synthesis phase of Code Researcher. All our experiments have a context length limit of 50K tokens. In the Analysis phase, we use a temperature of 0.6 and independently sample $k$ trajectories. For the Synthesis phase, we sample with increasing temperatures (0, 0.3, 0.6) until the agent produces a correctly-formatted patch, with a maximum of 3 attempts (more sampling details in Appendix B). We allow Code Researcher and SWE-agent a budget of at most max calls LLM calls per trajectory.
Baselines We evaluate Code Researcher in the unassisted setting (i.e., the ground-truth buggy files that are part of the fix commits are not divulged to the tool) and compare it against the following baselines and state-of-the-art techniques:
(1) o1 [28] and GPT-4o [27] in the assisted setting, i.e., we directly give the ground-truth files that are part of the fix commits, truncated to the context length limit and the crash report as input. We prompt the model to generate a hypothesis about the root cause of the crash and a patch. (2) o1 and GPT-4o in the stack context setting, where we give the contents of the files mentioned in the crash report (truncated to the context length limit) as input besides the crash report. (3) SWE-agent 1.0 [40], a SOTA coding agent on the SWE-bench benchmark, in the unassisted setting. For fairness, we added a Linux kernel-specific example trajectory and background about the kernel to its prompts. We sample $k$ (for Pass $@ k$ ) SWE-agent trajectories independently using a
Table 1: Crash resolution rate (CRR) for different tools on the kBenchSyz benchmark (200 bugs). LLMs used by the agentic tools are in parentheses. ∗CrashFixer numbers are from [24], out of 279 bugs; results wrt to the 200 bugs (Section 4) will be updated when available.
temperature of 0.6.
(4) CrashFixer [24], state-of-the-art agent for Linux kernel crash resolution, in the assisted setting, as it requires the ground-truth files to generate patches. If a patch fails to build or crashes with the ground-truth reproducer, it iteratively refines it using the respective error messages. (5) Additionally, we consider an unassisted $^ +$ test-time scaled setting, where we increase the test-time compute for Code Researcher and SWE-agent along two axes, i.e., max calls and $\mathrm { P } @ k$ .
Additional details The detailed prompts and necessary configurations for Code Researcher and the baselines will be made available in an updated version. The details about crash reproduction setup and implementation of the search actions are presented in Appendix B.
# 5 Experimental results
In this section, we present comprehensive evaluation results that show (a) the effectiveness of Code Researcher in helping resolve Linux kernel crashes compared to state-of-the-art coding agents and baselines, (b) the importance of context gathered by Code Researcher, and (c) the impact of historical commits on Code Researcher’s performance. We provide additional results in the Supplementary.
# 5.1 RQ1: How effective are different tools at resolving Linux kernel crashes?
Our main results are presented in Table 1, organized by setting, namely, assisted, stack context, unassisted, and unassisted $^ +$ test-time scaled (Section 4).
The assisted setting establishes that given the contents of the ground-truth buggy files, LLMs like GPT-4o are quite capable of resolving crashes. Using a reasoning model like o1 significantly boosts this performance (from $3 6 \%$ to $5 1 \%$ ). CrashFixer achieves $4 9 . \hat { 2 } 2 \%$ CRR using 4 parallel searches with tree depth of 4 and branching factor of 1 each (resulting in $\mathrm { P } @ 1 6 ^ { \prime }$ ) where each tree node employs multiple LLM calls (exact budget is not known, we estimate it as at least 4 calls) to arrive at the patch.
However, the assumption that an oracle can tell us exactly which files need to be edited is impractical. To show the gap between the assisted (idealistic) setting and the practical unassisted setting, we present results on the simple but effective stack context setting. Here, the models are given the contents of all the files mentioned in the crash report (truncated to fit the context length limit) along with the crash report as input. This is a strong baseline because all the ground-truth buggy files are present in the crash report for $7 4 . 5 0 \%$ bugs in our dataset. We find that o1 achieves a CRR of $40 \%$ , which is impressive, albeit a drop of $11 \%$ in absolute points from that of the assisted setting. Importantly, Code Researcher (GPT-4o) in the unassisted setting with $48 \%$ CRR, (a) significantly outperforms both GPT-4o and o1 models in the stack context setting, as well as SWE-agent (GPT-4o)
in the unassisted setting, (b) even improves on GPT-4o’s CRR in the assisted setting. Furthermore, using GPT-4o for the Analysis phase and o1 for the Synthesis phase, Code Researcher achieves the best CRR of $58 \%$ on the dataset. These results indicate that Code Researcher’s Analysis produces context that is much more effective than giving file contents based on the crash report, and is even better than directly giving all the contents of the files to be edited.
Finally, we show how test-time scaling, in terms of total inference budget (max calls $\mathbf { \nabla } \times \mathbf { \mathrm { \ n u m } }$ trajectories $k$ ), impacts performance. We find that doubling the max calls budget, i.e., making the trajectories of the agents longer, has a negligible effect on the CRR. Increasing the number of trajectories sampled, on the other hand, improves SWE-agent’s CRR to $3 7 . 5 0 \%$ and Code Researcher (GPT-4o)’s CRR to $5 4 . 0 0 \%$ .
# 5.2 RQ2: How well do the files edited by the tools match those modified in developer fixes?
Table 2: Average recall and All/Any/None percentages (metrics defined in Section 4) for the two agentic tools. LLMs used by the tools are in parentheses.
We now investigate whether the tools edited the same files as developers did. Since the ground-truth buggy files are already divulged to tools in the assisted setting, we focus on the stack context and unassisted settings. The results are in Table 2. We note that Code Researcher variants (both GPT-4o and GPT- $4 0 + \mathrm { o } 1$ ) have significantly higher average recall than GPT-4o and o1 in the stack context setting, as well as SWE-agent in the unassisted setting. In addition, Code Researcher (GPT-4o) edits all the ground-truth buggy files in $4 8 . 2 \%$ of the candidate patches, and at least one in an additional $7 . 8 \%$ of the patches, totaling $5 6 \%$ . These metrics are significantly better than that of all other tools in the stack context and unassisted settings. Finally, for Code Researcher and SWE-agent, scaling test-time compute (via increasing max calls or $\mathbf { P } @ k$ ) preserves the degree of overlap between the edited files and the ground-truth buggy files.
# 5.3 RQ3: How effective is context gathering for resolving Linux kernel crashes?
Recall from Table 1 that Code Researcher (GPT-4o) significantly outperforms GPT-4o in the assisted and stack context settings. This points to the usefulness of the context gathered by Code Researcher. On the other hand, SWE-agent (GPT-4o) in Table 1 also gathers context, but its performance is not on par. Below, we investigate this discrepancy along multiple axes:
1) Code Researcher gathers much more context than SWE-agent: Figure 2 (a) shows the distribution of the number of unique files read by Code Researcher and SWE-Agent (both using GPT-4o, $\mathrm { P } @ 5$ , 15 max calls) for each bug (i.e., all unique files read in the 5 trajectories). Code Researcher really performs deep research over the codebase, reading 29.13 unique files across 5 top-level directories on average for each bug. In stark contrast, SWE-agent reads only 1.91 files on average for each bug. When averaged by trajectory, Code Researcher explores 10 unique files compared to only 1.33 files explored by SWE-agent. One reason for this huge difference is that existing coding agents have been designed with respect to benchmarks like SWE-bench [17]. Tasks in these benchmarks do not require deep context gathering and reasoning, unlike tasks over complex systems codebases where deep exploration and reasoning is crucial (as is evident from our results).
Figure 2: SWE-agent vs Code Researcher (both at GPT-4o, $\mathrm { P } @ 5$ , 15 max calls)
2) Code Researcher has better overlap with developer-referenced context: We use LLM-as-judge to analyze the context gathered by Code Researcher (GPT-4o) and SWE-agent; and determine the overlap of the contexts with the context mentioned by the developer in the fix commit message (details in Appendix D). This context overlap is $5 4 . 1 \dot { 8 } \%$ (over candidate patches) for SWE-agent compared to ${ \bar { 6 3 } } . 7 \%$ for Code Researcher. This suggests that Code Researcher does a much better job of identifying relevant context that the developer explicitly relied on when making the fix.
3) Code Researcher has a significantly higher CRR than SWE-agent when both the tools correctly identify all the ground-truth modified files: To isolate the impact of the gathered context, we consider the subset of 90 bugs, where both Code Researcher (GPT-4o) and SWE-agent edited all the ground-truth files in at least one candidate patch generated by each tool. Now, if both the tools are editing the ground-truth files and using the same model for patch generation (GPT-4o), we can attribute the success (or failure) of the tools on these bugs to the context gathered. We find that Code Researcher resolves $5 5 / 9 0 = 6 1 . 1 0 \%$ of crashes in this subset, while SWE-agent resolves only $3 4 / 9 0 = 3 7 . 7 8 \%$ (note that we discount crash-resolving patches from each tool that do not edit all the ground-truth files). This suggests that Code Researcher’s context is more relevant to the task of crash resolution than that of SWE-agent.
# 5.4 RQ4: How important are historical commits for resolving crashes in the Linux kernel?
Table 3: Effectiveness of access to historical commits.
1 We do this ablation only on the 96 bugs resolved by Code Researcher (GPT-4o, Pass $\textcircled { a } 5$ , 15 max calls).
To the best of our knowledge, Code Researcher is the first agent to explicitly leverage the rich development history of codebases. To evaluate the importance of historical commits, we perform an ablation study, where we run Code Researcher without the search_commits action on the set of 96 bugs that were successfully resolved by Code Researcher (GPT-4o, Pass $\textcircled { a } 5$ , 15 max calls). The results are in the second row of Table 3. We observe that removing the search_commits action leads to a $1 0 \%$ drop in the crash resolution rate. More importantly, both recall and the model’s ability to identify all or any of the ground-truth modified files decrease substantially. This highlights that the search_commits action plays a crucial role in guiding the agent toward relevant context and in correctly localizing the files to be fixed. Notably, for the example in Appendix C.1, we also observe that Code Researcher navigates to the same buggy commit that the developer identified as the source commit that originally introduced the bug being repaired.
# 5.5 RQ5: Does Code Researcher generalize to other systems codebases?
To demonstrate that Code Researcher generalizes with a little effort to other codebases, we experiment with the task of crash resolution in the FFmpeg [3] codebase. FFmpeg is a leading open-source multimedia framework, that supports ability to decode, encode, transcode, mux, demux, stream, filter and play all existing media formats. Since it needs to handle a wide range of formats, from very old to the cutting edge, low-level data manipulation is common in the codebase. FFmpeg has $\sim 4 . 8 \mathsf { K }$ files and $\sim 1 . 4 6 \mathrm { M }$ lines of code, primarily comprising $\mathrm { C / C + + }$ and also some handwritten assembly code for performance.
Dataset We use vulnerabilities discovered by the OSS-Fuzz service [6] that runs fuzzers on various open source projects and creates alerts for the bugs detected. We focus on security issues, which are assigned the top priority by OSS-Fuzz. These include heap-based buffer overflows, stack-based buffer overflows, use-after-frees, etc. We build a small dataset of 10 FFmpeg crashes, taking the 11 most recent crashes (as of May 14, 2025) that have been verified as fixed and skipping 1 that we could not validate. 2 We use the instructions recommended by OSS-Fuzz for building FFmpeg and testing whether a crash reproduces. 3 The dataset contains the commit at which OSS-Fuzz found the crash, a reproducer file that triggered the crash, and the crash report that we generated by reproducing the crash (the crash reports found by OSS-Fuzz are not publicly visible). The dataset and the detailed prompts will be made available in an updated version.
Results To run Code Researcher on these crashes, we keep the same core prompts, adding a one-paragraph preamble about FFmpeg and replacing the few-shot examples for the Linux kernel with corresponding ones for FFmpeg.
Code Researcher, in the unassisted setting using GPT-4o for the Analysis phase and o1 for the Synthesis phase, with a max calls budget of 15, resolves 7 out of the 10 crashes in our dataset at Pass $\ @ 1$ , i.e., we sample only one patch per crash. Code Researcher achieves an average recall of 0.78, edits all the ground-truth modified files in 7 crashes and none of the ground-truth modified files in 2 crashes. 4 While this suggests that FFmpeg crashes are typically not as complex to resolve as Linux kernel crashes, our results show that Code Researcher’s techniques generalize easily and effectively to other systems codebases.
# 5.6 Qualitative evaluation
Even if a patch prevents the crash, that does not guarantee that it actually fixes the underlying issue. Any form of test-based evaluation (as is done even in other benchmarks like SWE-bench [17]) has the limitation that it cannot ensure preservation of functionality that the tests do not cover. Testing the full functionality of the kernel easily and reliably is a hard open research problem. While perusing the crash-preventing patches, we came across the following types of patches:
(1) Accurate These patches correctly identify and fix the root cause of the crash in a manner that closely resembles the developer solution. For example, for a crash in the JFS filesystem (Listing 1, Appendix E), Code Researcher generated a patch equivalent to the developer’s solution.
(2) Overspecialized These patches successfully prevent the crash but may be overspecialized. As shown in (Listing 2, Appendix E) for a crash in the Bluetooth HCI H5 driver, Code Researcher correctly identified that hu->serdev could be NULL. However, while the developer simply added a NULL check around the existing code, Code Researcher’s patch included additional error handling with a diagnostic message and explicit return values.
(3) Incomplete These patches correctly identify the problem area and approach, but may not be complete. They provide significant debugging insights and could accelerate the path to a proper fix. For example, in the QRTR networking subsystem (Listing 3, Appendix E), Code Researcher correctly identifies a concurrency bug involving radix tree traversal without RCU protection and inserts the appropriate synchronization in one affected function, but not in others.
(4) Incorrect These patches fail to address the root cause or may introduce new issues. In the QRTR networking subsystem (Listing 4, Appendix E), Code Researcher failed to infer the root cause of the issue which is an integer truncation problem when handling large u32 port numbers. Code Researcher instead added checks that reject ports with port $\mathit { \Theta } < \mathit { \Theta } 0$ . | Large Language Model (LLM)-based coding agents have shown promising results
on coding benchmarks, but their effectiveness on systems code remains
underexplored. Due to the size and complexities of systems code, making changes
to a systems codebase is a daunting task, even for humans. It requires
researching about many pieces of context, derived from the large codebase and
its massive commit history, before making changes. Inspired by the recent
progress on deep research agents, we design the first deep research agent for
code, called Code Researcher, and apply it to the problem of generating patches
for mitigating crashes reported in systems code. Code Researcher performs
multi-step reasoning about semantics, patterns, and commit history of code to
gather sufficient context. The context is stored in a structured memory which
is used for synthesizing a patch. We evaluate Code Researcher on kBenchSyz, a
benchmark of Linux kernel crashes, and show that it significantly outperforms
strong baselines, achieving a crash-resolution rate of 58%, compared to 37.5%
by SWE-agent. On an average, Code Researcher explores 10 files in each
trajectory whereas SWE-agent explores only 1.33 files, highlighting Code
Researcher's ability to deeply explore the codebase. Through another experiment
on an open-source multimedia software, we show the generalizability of Code
Researcher. Our experiments highlight the importance of global context
gathering and multi-faceted reasoning for large codebases. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
Data linkage is the process of identifying records that refer to the same entities within or across databases [5, 20]. The entities to be linked are most commonly people, such as patients in hospital databases or beneficiaries in social security databases. In the commercial sector, data linkage is employed to link consumer products [13] or business records.
Also known as record linkage, data matching, entity resolution, and duplicate detection [4], data linkage has a long history going back to the 1950s [16, 33]. In the biomedical and social sciences, data linkage in the past decade has increasingly been employed for research studies where administrative and / or clinical databases need to be linked to better understand the complex challenges of today’s societies [2, 5, 29]. Within governments, data linkage is being employed to make better use of the population level databases that are being collected for administrative purposes, to reduce costs to conduct expensive surveys such as national censuses [35, 36, 59], or to facilitate research that would not be possible otherwise1.
As databases containing the personal details of large numbers of individuals are increasingly being linked across organisations, maintaining the privacy and confidentiality of such sensitive data are at the core of many data linkage activities [5]. Much recent research has focused on developing novel techniques that facilitate the linkage of sensitive data in private ways [17, 54]. Thus far, the majority of such research has been devoted to the development of techniques that can provide privacy-preserving record linkage (PPRL) [19]. In PPRL, sensitive values are encoded by the database owners in ways that still allow the efficient linkage of databases while ensuring no sensitive plain-text values are being revealed to any party involved (as we discuss in Section 2.4). PPRL techniques also ensure no external party, such as a malicious adversary, will be able to gain access to any unencoded sensitive data [53].
Because PPRL requires the calculation of similarities between the values of identifying attributes2 to find similar records, one focus has been on developing secure methods to encode such values while still allowing similarity calculations. Another direction of work has developed techniques to prevent such encodings to become vulnerable [56] to attacks that aim to reidentify encoded values [58].
While notable progress has been made concerning such PPRL techniques, thus far, research has been scarce into how such techniques are being employed within operational data linkage projects and systems [37, 48]. In real-world environments, communication patterns and assumptions about the trust in employees and their possible motivations to try to explore sensitive databases (that possibly are encoded) are likely different from the conceptual models used in academic research into PPRL techniques [5, 17].
This paper aims to bridge the gap between academic research which focuses on PPRL techniques, and the actual application of data linkage in the real world – both non-PPRL as well as PPRL techniques. For the remainder of this paper, we name the former traditional data linkage (TDL) techniques. We specifically investigate the communication protocols between the different parties (generally organisations) that are likely involved in a data linkage project, the sensitive information a party involved in such a protocol can obtain, and how such information leakage can be prevented. We explore the following question:
What sensitive information can be leaked in a TDL or PPRL protocol, either unintentionally (such as through a human mistake) or intentionally (for example by a curious employee)?
To the best of our knowledge, this question has not been investigated in the context of data linkage. Understanding how sensitive information can possibly leak in data linkage protocols will help data custodians to improve the privacy of their data linkage systems, and better protect the sensitive data they are trusted with.
We do not cover situations where a party (internal or external to a linkage protocol) behaves maliciously with the aim to gain access to sensitive information they would not have access to during the normal execution of a protocol. Such situations have been covered by work on attacks on PPRL protocols [58]. Rather, we consider situations where information leakage can occur unintentionally or due to curiosity, and where a party involved in a data linkage protocol can learn sensitive information from their data as well as the data they obtain from other parties within the protocol. We also consider the collusion between (employees of) organisations that participate in a data linkage protocol, and discuss scenarios where collusion might happen for reasons that are not necessarily malicious.
# 2 Background
We now introduce the notation and data concepts we use in this paper and then describe the conceptual types of organisations (also named parties) relevant in the context of a data linkage protocol, their roles, and the data they generally have access to. We then discuss various aspects of an adversary, including who they might be and their motivation. Next, we give a brief introduction to PPRL techniques and attacks that have been developed on such techniques with the aim of reidentifying the encoded sensitive data.
For more extensive coverage of these topics, we refer the reader to Christen et al. [5]. Methodological aspects of data linkage are discussed by Harron et al. [20] and Herzog et al. [21].
# 2.1 Notation and Data Concepts
We denote a database containing records with D, and an individual record as $r _ { i } \in \mathbf { D }$ , where $1 \leq i \leq n$ , and $n = | \mathbf { D } |$ is the number of records in D. As we discuss next, each record consists of three components, $r _ { i } = ( i d _ { i } , q i d _ { i } , p \dot { d } _ { i } )$ . We provide an illustrative example of the linkage of two small databases in Figure 1.
Colesterol health database
Education level database
Linked data set (the scientific use file, SUF)
Record identifier
Quasi-identifiers MID Age BMI LDL HDL EdYear Highest 1 46 21.9 3.7 0.9 1996 PhD
Payload data 2 58 18.7 3.1 1.0 2002 BEng
Match identifier 3 65 26.6 4.3 1.2 1979 GCSE
Figure 1: Two small example databases containing record identifiers (the ID and RID attributes, respectively), quasiidentifiers (QIDs), and sensitive payload data (PD). The QIDs are used to link records across the two databases into a scientific use file (SUF), where each matched record pair is assigned a unique match identifier (MID). Only PD attributes are included in the SUF, where the attribute ‘Age’ is generated from the date of birth (DoB) attribute in the health database (assuming the year 2025).
The record identifier (ID) component, $i d _ { i }$ , has a unique value for each record $r _ { i } \in \mathbf { D }$ . Note that $i d _ { i }$ is generally not an entity identifier (such as a social security number or patient identifier) that is unique to each individual in a population. Rather, it is a unique value (for example, an integer number) assigned to each record $r _ { i }$ by the database system that stores $\mathbf { D }$ . Without loss of generality, we assume the $i d _ { i }$ values do not reveal any sensitive information about the records $r _ { i }$ . We denote the set of all record identifiers from a database $\mathbf { D }$ with $\mathbf { I D } = \{ i d _ { i } : r _ { i } \in \mathbf { D } \}$ . In Figure 1, the record identifier values are A1, A2, and A3 in the health database, and 123, 657, and 242 in the education database.
The $q i d _ { i }$ component of a record $\boldsymbol { r } _ { i }$ consists of the quasi-identifier (QID) attributes that describe an entity (individual person) whose record is stored in D [5]. QIDs include attributes such as names, addresses, and dates of birth, where a single QID value (such as first name only) is unlikely enough to uniquely identify each entity in $\mathbf { D }$ . However, when multiple QIDs are combined, they become unique for (hopefully) all entities in a population. QIDs are generally used for data linkage when no unique identifiers are available [4, 20]. A crucial characteristic of QIDs is that they can suffer from data quality issues, in that they can be missing, out of date, or contain variations and (typographical) errors [5, 7]. We denote the set of QIDs from all records in a database $\mathbf { D }$ with $\mathbf { Q I D } = \{ q i d _ { i } : r _ { i } \in \bar { \mathbf { D } } \}$ , where each qidi is a list of one or more attribute values. For example, in Figure 1, the QIDs for record A1 are $q i d _ { A 1 } =$ $[ J o h n , E l i o t t , L o n d o n , 2 3 / 0 7 / 7 9 ]$ .
The third component of a record $\boldsymbol { r } _ { i }$ is its payload data (PD), $p d _ { i }$ , also known as microdata [5]. These are the (possibly sensitive) values of interest to researchers, such as individuals’ medical, educational, or financial details. Data linkage aims to bring together complementary PD about a cohort of individuals from distinct databases. PD are generally not required for the linkage process; however, attributes such as gender, postcode, or year of birth can be used both as QIDs for linkage and PD for analysis. However, we do not consider the handling of PD, including their anonymisation [15], to be part of the data linkage process. We denote the set of PD for all records from a database $\mathbf { D }$ with $\mathbf { P D } = \{ p d _ { i } \ :$ $r _ { i } \in \mathbf { D } \}$ where each $p d _ { i }$ is a list of one or more attribute values. For example, in Figure 1, the PD for record A1 are $p d _ { A 1 } = [ 2 1 . 9 , 3 . 7 , 0 . 9 ]$ .
During a linkage protocol, the QID values of records from two databases, $r _ { i } \in \mathbf { D } _ { A }$ and $r _ { j } \in \mathbf { D } _ { B }$ , are compared [4], and an overall similarity, $s i m ( r _ { i } , r _ { j } )$ , is calculated for each compared record pair $( r _ { i } , r _ { j } )$ . Finally, a decision model uses the similarities to classify the record pairs to be a match or a non-match, implying the two records are assumed to refer to the same entity or different entities) [4]. The result is a linked data set, which contains all record pairs classified as matches, where each pair $( i d _ { i } , i d _ { j } )$ can be assigned a unique match identifier, $m _ { i j }$ .
For all protocols we describe in Section 3, both TDL and PPRL, we assume all communication between parties (except the release of any PD to the scientific community) is being encrypted using an appropriate encryption method, with passwords handled and exchanged in a secure way [5, 42]. Therefore, we do not assume information leakage happens because of a computer security issue such as a compromised password or a system breach. Rather, we investigate what sensitive information can be learned by an organisation (or, specifically, an organisation’s employee) participating in a data linkage protocol based on the data to which this organisation and its employees have legitimate access to.
We furthermore assume that a data linkage project is conducted within an established and stable regulatory framework, such as HIPAA (the US Health Insurance Portability and Accountability Act of 1996) or the GDPR (the EU European General Data Protection Regulation of 2018) [5]. There are likely additional processes (such as the Five Safes framework [12]), regulations, and confidentiality agreements that govern the access to and sharing of the sensitive databases
DP Data Producer Linkage setting Safe haven setting DA Data Anonymiser DO Data Owner DPA DA DOA 中 DUA $\mathbf { D } \mathbf { U } _ { \mathrm { A } }$ Approved Data User LU Linkage Unit $\mathbf { D } \mathbf { U } _ { \mathrm { P } }$ Public Data User DM Data Merger LU M DM SUF SDC setting SUF Scientific Use File $\mathbf { D } _ { A } , \mathbf { D } _ { B }$ lDinatkaebdases being DPB DB DOB DA PUF DU PUF Public Use File M rSectorfdmpaitrcshed SDC Statistical Disclosure Control to be linked. However, we are aware that the interests of administrators or researchers within a data-producing organisation might differ from the legal obligations of the organisations. Therefore, some members of an organisation might be tempted to deviate from the rules and procedures prescribed.
# 2.2 Parties involved in Linkage Protocols
While the linking of databases between organisations can be conducted in different ways (as we discuss in Section 3), the parties involved in such an endeavour can generally be categorised into the types we describe below. While commonly the assumption is that each party is a separate organisation, in practical linkage scenarios these parties can also be different groups or departments within the same organisation, or even different individual employees within the same area of an organisation, where each individual would take on the role of one of the types of parties we describe below.
In Figure 2 we show the overall data flow in a data linkage protocol. While in this work we focus on how the parties within the linkage setting are exchanging data with each other, it is important to also consider the larger context within which such a protocol is executed, and the parties outside the linkage setting that are relevant in this context. Most existing work on data linkage and PPRL does not consider this overall context [17, 53, 54].
We start with the three types of parties at the core of a data linkage project, shown as the linkage setting in Figure 2.
Database Owner (DO): A DO, also known as data owner or data custodian, owns a database D containing records that refer, for example, to patients, taxpayers, customers, or travellers. A DO can be a data producer (DP) itself (as we discuss below), the organisation that collected or created D, or it can receive $\mathbf { D }$ from an external DP, such as a hospital, business, or government agency. Note also that while a DO has to take care of any legal aspects of the data they hold (such as data confidentiality), a DP can have a different motivation than the DO to provide their data for a linkage project.
A DO participates in a data linkage protocol by (1) providing the record identifier, $i d _ { i }$ , and QID values, $q i d _ { i }$ , for each record in their database, $r _ { i } \in \mathbf { D }$ , for the linkage process, and (2) contributing selected PD attributes $p d _ { i }$ for records in D that are to be used for the analysis conducted by a data user (DU), as we discuss below.
Linkage Unit (LU): The LU is an organisation or a person that conducts the actual linkage of the QID values from the individual databases sent to it by two or more DOs involved in a linkage protocol.3 LUs can be embedded within a trusted organisation such as a university or government (health) department. Some LUs, such as national statistical institutes, also have their own databases, and therefore they can also be seen as a DO, and possibly also as a data producer (DP). Some LUs are also conducting the merging of linked data sets (as we describe next), and therefore they can also be a Data Merger (DM), possibly a Data Anonymiser (DA), and even a Data User (DU). Scenarios where the LU is also the DM are possible in TDL protocols, However, they are not feasible with PPRL protocols, as we discuss in Section 3.
The outcome of a linkage conducted by the LU is a set $\mathbf { M }$ of matched pairs of record identifiers $( i d _ { i } , i d _ { j } )$ , where $r _ { i } \in \mathbf { D } _ { A }$ (the database of the first DO) and $r _ { j } \in \mathbf { D } _ { B }$ (the database of the second DO), with the corresponding match identifier $m _ { i j } \in \mathbf { M }$ that represents the matched pair $m _ { i j } = ( r _ { i } , r _ { j } )$ .
Data Merger (DM): The party that, based on the set $\mathbf { M }$ of matched record pairs, and the PD it receives from the LU and DOs, generates a scientific use file (SUF) [27] by combining the PD attributes of the record pairs in M. For each matched pair $m _ { i j } \in \mathbf { M }$ that corresponds to record pair $( r _ { i } , r _ { j } )$ , the SUF will contain the corresponding PD of this record pair, $( p d _ { i } , p d _ { j } )$ , where $p d _ { i }$ comes from record $r _ { i } \in \mathbf { D } _ { A }$ , and $p d _ { j }$ from record $r _ { j } \in \mathbf { D } _ { B }$ .4 A SUF can then be used in two different ways, as we discuss below.
These three types of parties (DO, LU, and DM) are directly involved in a data linkage protocol (within the linkage setting shown in Figure 2), and the relevant components of the databases required for such a protocol are communicated between these parties.
The generated SUF can then be used either (1) within a safe environment, or (2) further processed to create an anonymised version of a SUF, known as a public use file (PUF) [27]. The following two types of parties are relevant to these processes:
Data Anonymiser (DA): To facilitate the use of a SUF outside a secure research environment (also known as a trusted research environment, safe setting, or safe haven [32]), it needs to be anonymised such that it is impossible to reidentify any individuals whose PD are contained in the SUF. This can be achieved by applying appropriate data anonymisation techniques, known as statistical disclosure control (SDC) techniques. The topic of anonymising sensitive information in a SUF is outside the scope of our work, and we refer the interested reader to Duncan et al. [14], Elliot et al. [15], and Torra [46, 47]. A PUF can then be made publicly available.
Data User (DU): Also known as a data consumer, the DU is a party that is using a linked data set for a specific purpose, such as for research or an operational project. We distinguish two types of DUs, depending if they are accessing a SUF or a PUF:
(1) Because a SUF contains individuals’ PD, it can be highly sensitive. In most jurisdictions, SUFs are covered by data protection and privacy regulations, such as the EU’s GDPR or the US HIPAA [5]. Access to SUFs is, therefore, limited to approved or accredited DUs who have undergone appropriate training. In Figure 2 we denote an approved DU with $\mathrm { \ D U _ { A } }$ . Furthermore, accessing a SUF is generally limited to within a secure research environment [32].
(2) Because a PUF has been anonymised such that no reidentification of individuals is possible, it can be made accessible to any DU, both in the public or private sector, individuals or organisations, even outside a secure research environment. While DUs are most often benign and have a genuine intention to analyse a PUF, malicious parties can also access a PUF with the aim to potentially do harm. In Figure 2 we denote a public DU with $\mathrm { D U _ { P } }$ .
We note that a DU (generally) has no influence upon a data linkage protocol. They can, however, collect further data from other sources (such as the Internet), and use such external data to try to enrich any PUF and potentially SUF they have access to. They can also try to combine multiple SUFs or PUFs, either obtained from different sources or from the same source over time. The aim of such activities by a DU would be to explore if any sensitive information can be learned about the entities whose PD is contained in these files. A safe research environment is generally designed to prevent such data enrichment of a SUF by an approved DUA [12].
The final important party are the organisations who are producing the data to be linked (shown left in Figure 2).
Data Producer (DP): Also known as a data provider, this is an organisation that collects or generates the databases to be linked. A DP can also act as a DO, or they can provide their database or parts of it as relevant to a linkage project, to a DO. A DP is either obliged by law to provide their data, they have a specific interest to contribute their data for a linkage project, or they have made their data available to other organisations, with or without restrictions on how their data can be used.
Unless a DP is also a DO, it would be a party outside of a linkage protocol, as Figure 2 shows. However, in the context of analysing information leakage, it is vital to consider the motivation of a DP and how it might obtain sensitive information from a linkage project.
# 2.3 Motivation of Adversaries
Various conceptual models of parties (such as being fully trusted, honest-but-curious, or malicious) and threat scenarios have been developed [28, 42], and we describe the most commonly used such models in Appendix A. Here we discuss what the types of adversaries one might encounter in a data linkage protocol, and what the motivation of these adversaries might be.
In most real-world TDL projects (such as in those where government agencies are involved), the fully trusted model is assumed for all parties involved in such a protocol. On the other hand, the majority of PPRL techniques consider parties to be curious but not malicious [5, 53], with the additional assumption for parties involved in a protocol not to collude with each other.
While these conceptual adversarial models are useful when designing data linkage and especially PPRL protocols, in practice the assumptions of these models might not always hold. For example, employees of a trusted data linkage centre (located within an academic organisation or government agency), or approved DUs, will have signed confidentiality agreements and been trained on data privacy regulations and best practices when dealing with sensitive data. They can, however, still make mistakes when handling sensitive data that can lead to unintended information leakage (known as a data breach when becoming public5). They might also be curious (but not malicious) and query a sensitive database they have access to, for example, to gain information about their neighbours, family members, celebrities, or past lovers. An employee might even become a malicious actor if they see financial gain, seek revenge, or if they are manipulated via social engineering (or even external pressure such as from state agents) to illegally provide access to sensitive data to an external adversary [5, 22, 42]. A data linkage protocol might be attacked by an insider [58], if the person is subject to changing laws, for an example, due to a regime change, such as happened to official statistics in the Netherlands during World War 2 [44]. In some cases, a clear motivation for the strange behaviour of employees might never be found6.
An organisation that participates in a data linkage project might itself have an interest in exploiting any information it receives from other parties during such a protocol. This might be the case in commercial data linkage projects, where for example customer databases are being linked. A commercial DO will be interested in finding out which of its own customers do also occur in the database of the other DO(s) as this will allow this DO to learn more about these customers. As another example, learning anything about the PD of individuals who occur in both databases can allow a private health insurance (assumed to be one of the DOs) to possibly increase the premiums of customers who have certain health conditions (as learnt from the PD of the linked data set).
A malicious actor can try to disguise their action as being a genuine mistake (such as a file saved to the wrong location or with wrong access rights, or an email sent to the wrong receiver) in order to prevent punishment. These diverse types of motivations mean that a continuum of adversaries needs to be considered, from benign but careless all the way to pure evil.
As we discussed in Section 2.1, within a data linkage protocol there are two types of data that potentially contain sensitive (personal) information that could be of interest to an adversary, the quasi-identifiers (QIDs) and the payload data (DP). Both provide information about the individuals whose records are contained in the databases being linked. Furthermore, knowing about the sources of the databases being linked can also reveal sensitive information about the individuals whose records are stored in these databases. The following scenarios of an adversary gaining access to these different types of data are possible:
• QID values only: If there is no context available about these QID values then there might be limited useful information to an adversary, depending upon the nature of these QIDs. Details such as the names, addresses, dates of birth, or telephone numbers of people can help an adversary to potentially conduct identity fraud, however no other personal details such as financial or medical data would be part of these QIDs. • QIDs and context of a database: If additionally to the QID values an adversary learns about the source or owner of a database or its content, for example that a database contains records of HIV patients, then this can reveal potentially highly sensitive information because the individuals with the given QIDs can be associated with that revealed context. This would allow an adversary to blackmail individuals or use this context information in other ways that either harm the individuals in that database or at least benefits the adversary (such as a private health insurance that could increase the premiums of all individuals found in a HIV database). • PD values only: If the PD values an adversary gains access to do not allow any reidentification of individuals [14, 47], then no individuals can be harmed by such a leakage of PD values only. However, depending upon what PD they have gained access to, the adversary might still be able to learn about certain groups of people and sensitive information about the individuals who are members of such a group. For example, if age, race, and gender values are included in the PD besides medical details, then the higher prevalence of certain illnesses for specific groups of individuals can be exploited by the adversary. • PD values and context of a database: The PD values in a database already provide some context information about these values (such as the individuals with the given PD values have a certain illness). More specific information, such as database name detailing its source and time period (such as hiv-patients-london-2018- 2020.csv) gives the adversary more specific information which (potentially) could allow the actual reidentification of individuals if the scope of the database is small enough.
• QID and PD values: In this worst-case scenario, an adversary gains access to the full details of individuals which provides them with possibly highly sensitive information that it could exploit.
Within a data linkage protocol, the objective of an adversary would be to gain access to (sensitive) data that are being used in the protocol and to which the adversary does not have access to in the normal execution of the protocol. Depending upon the type and motivation of an adversary, their objective would be to obtain access to either specific records, records of a group of individuals with certain characteristics, or all records in a database. Understanding what potentially motivates a party (or an employee of a party) involved in a data linkage project to try to learn about the sensitive data being used in a data linkage project is crucial in order to assess the potential risk and likelihood of such behaviour.
Before we discuss different types of data linkage protocols in Section 3, we first briefly describe research that is aimed at reducing the risks of sensitive information being leaked in data linkage.
# 2.4 Privacy-Preserving Record Linkage
Traditionally, data linkage is based on the comparison of the actual QID values of records (such as names, addresses, dates of birth, and so on) to find matching (highly similar) records across the databases being linked [5]. However, due to privacy and confidentiality regulations, and concerns of using and sharing such sensitive personal data, some linkages across databases held by different organisations might be difficult to conduct or even not be feasible [5, 17].
To overcome such restrictions, PPRL techniques have been developed [19]. The aim of these techniques is to facilitate the linkage of sensitive databases by encoding QID values such that similarity calculations between encodings are feasible, and matching record pairs can be identified accurately and efficiently without any need to access the actual sensitive QID values [5]. PPRL techniques aim to guarantee that no party that participates in a linkage protocol, nor any external party, can learn any sensitive information about the individuals who are represented by records in the databases being linked. Various techniques have been developed for PPRL, ranging from perturbation based methods such as Bloom filters [43, 50] to secure multi-party computation (SMC) based approaches [26].
While SMC based PPRL techniques provide provable privacy of encoded sensitive values (at the cost of generally high computational and communication requirements), perturbation based methods have a trade-off between linkage quality, privacy protection, and their scalability to link large databases [53]. While they are scalable to large databases and are able to achieve linkage quality comparable to linking plain-text data [38], the main weakness of perturbation based techniques is that they lack formal privacy guarantees [5]. Given the usability versus privacy trade-off, most applications of PPRL use perturbation based methods [37]. Further details on PPRL are given in surveys and books [5, 17, 53, 54].
Weaknesses in perturbation based encodings for PPRL, such as encodings based on Bloom filters [43], have lead to the development of attacks that exploit patterns in encoded databases [58]. Vulnerabilities that have been exploited include the frequencies and lengths of sensitive values and encodings, and the similarities calculated between plaintext values and between encodings [56]. Some of the developed attacks have shown to be successful in that they were able to correctly reidentify some encoded sensitive QID values even in large real-world databases [6, 55, 57].
In this work we do not consider such attacks, which require an adversary to have access to both encoded sensitive QID values as well as some plain-text data which are highly similar to these encoded values. Rather, we look at what a party participating in either a TSP or a PPRL protocol can learn from the data it legitimately obtains within the protocol, or what two parties that collaborate can learn from any of the plain-text data they have legitimate access to and where they share these data in such a collusion.
As we discuss next, the communication steps between parties involved in a PPRL protocol are similar to the steps used in TDL protocols. While PPRL generally assumes multiple organisations to be involved in a linkage protocol, a PPRL protocol can also be conducted within a single organisation (for example, by different departments) to limit the sharing of sensitive personal data.
It is important to understand that PPRL techniques only protect the QID values that are used in a linkage protocol to identify matching records, but not the PD. Any PD that is to be used for analysis by a researcher as part of a SUF still needs to be provided to the researcher in its unencoded plain-text form. This requirement makes such PD potentially vulnerable to misuse.
Figure 3: A TDL protocol that involves three main parties, as based on the separation principle formalised by Kelman et al. [24] (left). The corresponding PPRL protocol is shown on the right, where we denote with $\mathbf { \nabla } \cdot _ { \mathrm { e I D } } ,$ and ‘eQID’ the encoded (or encrypted) versions of the record identifiers and QIDs, respectively. The four main communication steps are shown as (1) to (4).
Furthermore, as we show next, even PPRL protocols, in general, cannot completely hide to all parties in a protocol which records were matched and which were not. Therefore, PPRL protocols can still lead to unintentional leakage of sensitive information.
# 3 Data Linkage Protocols
Without loss of generality, we assume protocols where two DOs, $\mathsf { D O } _ { A }$ and $\mathsf { D O } _ { B }$ , aim to link their databases $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ using a LU and a DM. Extensions of these protocols involving more than two DOs are possible and are likely to occur in practical applications. Protocols only involving two DOs (without a LU and DM) are also feasible in the case of TDL, where linkage is conducted on plain-text values. In such situations, one of the DOs commonly takes on the role of both the LU and DM.
In the context of PPRL, both multi-party and two-party protocols have been proposed [5, 17]. The latter generally incur high computational and communication costs due to their requirement to hide sensitive data between the two DOs while concurrently identifying record pairs that refer to matches [23, 51].
Following the definitions of parties in Section 2.2, Figures 3 and 4 show two different versions each of TDL and PPRL protocols, respectively, that are possible when linking databases from two DOs using a LU. Both figures show the linkage setting at the centre of Figure 2 with the different ways of how the parties communicate in such a protocol. In these figures we denote with $\mathbf { I D } _ { A }$ , $\mathbf { Q I D } _ { A }$ , $\mathbf { P D } _ { A }$ , and $\mathbf { I D } _ { B }$ , $\mathbf { Q I D } _ { B }$ , $\mathbf { P } \bar { \mathbf { D } } _ { B }$ , the sets of record identifiers, quasiidentifiers, and the payload data of the databases $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ , respectively.
# 3.1 Protocols based on the Separation Principle
Figure 3 shows two versions of the separation principle based protocol, as formalised by Kelman et al. [24] in 2002. The TDL version of this protocol (left-hand side of Figure 3) is still the basis of many practical data linkage applications. The idea of the separation principle is for each party involved in a protocol only to have access to the data it requires to perform its role in the protocol [5].
In the TDL version of the protocol, for each of their records $\boldsymbol { r } _ { i }$ , in step (1) the DOs first send pairs of $( i d _ { i } , q i d _ { i } )$ to the LU without the corresponding PD. The LU uses the QID values it receives from the two DOs to link records by classifying pairs of records into matches and non-matches [5].
The LU then generates for each matched pair $( i d _ { i } , i d _ { j } )$ , with the corresponding $r _ { i } \in \mathbf { D } _ { A }$ and $r _ { j } \in \mathbf { D } _ { B }$ , a match identifier $m _ { i j }$ . In step (2), it sends pairs of $( i d _ { i } , m _ { i j } )$ back to $\mathsf { D O } _ { A }$ and $( i d _ { j } , m _ { i j } )$ back to $\mathrm { D O } _ { B }$ . The DOs then combine the PD of their matched records with these match identifiers, and in step (3) send the resulting pairs to the DM. $\mathsf { D O } _ { A }$ generates and sends $( m _ { i j } , p d _ { i } )$ to the DM, and $\mathrm { D O } _ { B }$ generates and sends $( m _ { i j } , p d _ { j } )$ . The DM can now combine the PD that refer to matched record pairs (have the same match identifier, $m _ { i j }$ ) and generate the SUF without having seen the QID values of any records. No information about non-matched records is sent from the DOs to the DM.
We assume that the match identifiers, $m _ { i j }$ , do not contain any sensitive information that relates back to the actual records they represent. Match identifiers can, for example, be integer numbers, potentially combined with an identifier that refers to the project for which the databases are being linked [24]. However, as we discuss in Section 4, because the DOs learn which of their records have been matched, this protocol still leaks some information to the DOs involved in the protocol.
Figure 4: Two versions of a three-party linkage protocol where no data flows back to the DOs (unlike the protocols shown in Figure 3). The left side shows the TDL and the right side the PPRL version of this protocol. Compared to the separation principle based protocols shown in Figure 3, both versions of this protocol require the set of matched record pairs to contain record identifiers. We denote this set with $\mathbf { \cdot } \mathbf { M } ^ { \mathrm { i d } } \mathbf { \cdot }$ . The four main communication steps are again shown as (1) to (4).
In the PPRL version of this protocol, shown in the right-hand side of Figure 3, encoded QID values are sent in step (1) from the DOs to the LU together with encoded record identifiers (for example, a hash value for each original record identifier value) as $( e i d _ { i } , e q i d _ { i } )$ for each $r _ { i } \in \mathbf { D } _ { A }$ and $( e i d _ { j } , e q i d _ { j } )$ for each $r _ { j } \in \mathbf { D } _ { B }$ . Here we denote the encoded version of $i d _ { i }$ with $e i d _ { i }$ and similarly the encoded version of $q i d _ { i }$ as $e q i d _ { i }$ (and similarly $q i d _ { j }$ as $e q i d _ { j } )$ . The LU compares these encoded QID values using a PPRL method [5], and classifies pairs of records as matches or non-matches. As with the TDL version of this protocol, for each matched pair $( e i d _ { i } , e i d _ { j } )$ the LU then generates a unique match identifier, $m _ { i j }$ , and in step (2) sends pairs of $( e i d _ { i } , m _ { i j } )$ back to $\mathrm { D O } _ { A }$ and pairs of $( e i d _ { j } , m _ { i j } )$ back to DOB.
In the same way as with the TDL protocol shown in the left-hand side in Figure 3, the DOs combine the PD of their matched records with the match identifiers (as $( m _ { i j } , p d _ { i } )$ by $\mathsf { D O } _ { A }$ and $( m _ { i j } , p d _ { j } )$ by $\mathsf { D O } _ { B } \overrightharpoon { }$ ) and in step (3) send these pairs to the DM, which can now combine the PD of the pairs with the same match identifier $m _ { i j }$ . Because the record identifiers the LU receives from the DOs, $e i d _ { i }$ and $e i d _ { j }$ , are encoded or encrypted, they do not contain any sensitive information that the LU could exploit.
The final step of the PPRL protocol, the generation of the SUF by the DM, is the same as in the TDL version of this protocol. As per Figure 2, this SUF is then either sent to a DA or an approved DU in step (4) of the protocol. As seen from the right-hand side of Figure 3, the DM is not part of the PPRL protocol. This is because to generate a SUF from a linked data set, the DM needs access to the actual PD of matched record pairs from both DOs [5].
Assuming a secure PPRL technique is used for this protocol, no party within the PPRL context will be able to learn any sensitive information from the data it receives from any other party that participates in the protocol. Similar to the TDL version of this protocol, however, this PPRL protocol does still leak some sensitive information about matched and non-matched records to the DOs, as we discuss in Section 4.
# 3.2 Protocols without Data Backflow
The protocols based on the separation principle shown in Figure 3 require information about matched records to be communicated back from the LU to the DOs, for the DOs to extract the PD of the records in their database that have been matched, and sending these PD together with the corresponding match identifiers to the DM. Therefore, a DO is involved in multiple communication steps, and have to conduct potentially substantial data extraction and processing of their own database. There can be situations where a DO does not have the capacity, nor is willing or permitted to conduct these required communication and processing steps [37, 48]. Examples include the linkage processes in the German Neonatal Data Process [39] and the German Cancer Registries [45].
An alternative type of linkage protocol is shown in Figure 4. In the same way as in the protocols shown in Figure 3, in this type of protocols, in step (1) the DOs also send their record identifiers and QID values as pairs of $( i d _ { i } , q i d _ { i } )$ (or pairs of $( e i d _ { i } , e q i d _ { i } )$ for the PPRL version of the protocol) to the LU without the corresponding PD. The LU, therefore, conducts the linkage of record pairs in the same way as with the separation principle based protocols.
However, instead of generating a match identifier, $m _ { i j }$ , for each matching record pair $( r _ { i } , r _ { j } )$ , with $r _ { i } \in \mathbf { D } _ { A }$ and $r _ { j } \in \mathbf { D } _ { B }$ , the LU now generates a set of matched record identifier pairs, denoted with $\mathbf { M } ^ { i d }$ , where each element in this set corresponds to the actual pair of identifiers, $( i d _ { i } , i d _ { j } )$ of the matched record pair $( r _ { i } , r _ { j } )$ . For the PPRL version of this protocol, the LU generates pairs that contain the encoded record identifiers, $( e i d _ { i } , e i d _ { j } )$ . For both types of protocols shown in Figure 4, in step (2) the LU then forwards this set of matched record identifier pairs to the DM.
Table 1: Reasons for a party (or employee) to explore the data they have access to.
For the DM to be able to generate the linked data set (the SUF), this party requires the PD of the records that occur in the matched pairs in $\bar { \mathbf { M } } ^ { i d }$ it received from the LU. However, because in this type of protocol, the DOs do not know which of their records were matched to records in the other database, in step (3) they have to send the PD of all the records in their databases to the DM, together with the corresponding record identifiers, as pairs $( i d _ { i } , p d _ { i } )$ for all $r _ { i } \in \mathbf { D } _ { A }$ and pairs $( i d _ { j } , p d _ { j } )$ for all $r _ { j } \in \mathbf { D } _ { B }$ . In the PPRL version of this protocol, these record identifiers will need to be encoded as $e i d _ { i }$ and $e i d _ { j }$ , respectively.
The DM now has the task of generating a linked data set based on the set of matched record pairs, $( i d _ { i } , i d _ { j } ) \in \mathbf { M } ^ { i d }$ it received from the LU, and the pairs of record identifiers and PD, $( i d _ { i } , p d _ { i } )$ it received from $\mathsf { D O } _ { A }$ and $( i d _ { j } , p d _ { j } )$ from $\mathsf { D O } _ { B }$ . For each pair $( i d _ { i } , i d _ { j } )$ it generates the corresponding pair $( p d _ { i } , p d _ { j } )$ , which will be added to the SUF (shown as step (4) in Figure 4) or further processed and anonymised [14, 15] into a PUF for public release. Similarly, in the PPRL version of this protocol, the pairs $( e i d _ { i } , e i d _ { j } )$ will be used by the DM to generate the pairs $( p d _ { i } , p d _ { j } )$ of PD.
Compared to the separation principle based protocols shown in Figure 3, in this type of protocol the DOs do not learn which of their records were classified as matches with records from the other DO, thereby reducing the information leakage at the DOs. However, the DM does receive the PD for all records (even those not matched) from all databases involved in a linkage protocol. This can lead to an increase in information leakage at the DM, as we discuss next. Furthermore, in a context where consent of individuals for their data to be used is required, and a person does not consent to a data request for a study, then the inclusion of the PD in their record(s) into a data set that is sent to the DM might violate privacy regulations within certain jurisdictions.
In the TDL versions of both types of protocols (separation principle based or protocols without data backflow), it is possible for a LU also to take on the role of DM. This means this party will conduct both the linkage and the generation of the linked data (the SUF). National Statistical Institutes are examples of such parties that act both as LU and DM (and even DO, DP, and DU). In such situations, generally legally and organisationally separate units with additional supervision and control are employed within such a party to take on the different role types within a data linkage protocol. Such a combination of roles within a single party is not feasible in PPRL protocols because this would result in substantial leakage of sensitive information.
# 4 Information Leakage in Linkage Protocols
How information is being leaked in the different data linkage protocols we described can be categorised based on how many parties are involved in an attempt to learn sensitive information.
• One party: A single party (or specifically, one of its employees) can be curious and explore all data they have access to within a linkage protocol (plus publicly obtainable data). The reasons for doing so can be diverse, as we illustrate in Table 1.
• Multiple parties: Several parties (or employees from more than one party) can collude in a linkage protocol with the aim to learn about the sensitive data from another not-colluding party. In scenarios where several parties collude, the data available to the colluding parties, the data they received from the not colluding party (or parties) during the linkage protocol, and any relevant external (publicly available) data they can access, can be exploited in the collusion.
Table 2: Summary of (potentially sensitive) information available to the parties described in Section 2.2, assuming no collusion between parties. We highlight in italics font (and red background) information that is not necessary for a party within a given linkage protocol, but that is available to a party due to the data flow in a protocol. Grey background indicates information that is required by a party for it to fulfil its function within a protocol.
As with single parties, the motivation behind collusion does not necessarily need to be malicious. It can again be curious or self-motivated parties. One example is a situation where scientists want to progress their research goals but are hindered by the required approval of a data-sharing protocol where access to the data is being delayed. Directly exchanging the required data for their study will allow them to progress their research without delays; however, this might be outside an approved data-sharing agreement.
Collusion between parties can involve as little as one party revealing to one or several other parties what linkage and encoding algorithms, and corresponding parameter settings (and even secret keys) have been used in a PPRL project [5]. It can also involve the sharing of individual records in the databases being linked, or it can even be the sharing of the final linked data set with parties that are not supposed to obtain this data.
The reasons we discussed so far for the behaviour of a party all have a purpose, and any resulting actions are intentional. A common weakness of humans is, however, their propensity to making mistakes or being careless and thereby inadvertently revealing some information that can assist another party to explore the data they receive from the careless party. Reasons for this can be manifold [5, 7] and include social engineering (where an adversary obtains access to the data of a party through illegal means), careless handling of login credentials and passwords, being overworked and therefore making mistakes (such as sending the unencrypted version of a database file to another party instead of the encrypted file), using outdated software, or parameter settings that are insecure, or being new to a job or unfamiliar with new software or procedures that lead to mistakes being made when accessing and handling sensitive data.
# 4.1 Information Leakage at a Single Party
First, we discuss what a single party can learn by itself. Table 2 summarises what information is available to the different types of parties in the four types of protocols. We start with the parties that are the core of a data linkage project, as shown within the linkage setting in Figure 2.
One DO alone: For both versions of protocols that are based on the separation principle, as shown in Figure 3, a DO receives from the LU the identifiers of all records in its own database that were classified as a match with a record in the database held by the other DO(s). Knowing which records match allows the DO to restrict the PD it needs to send to the DM to these records only. However, knowing which records in its own database were matched (and therefore, which ones were not) can leak sensitive information.
Imagine the DO has a database containing the employment details of individuals. Suppose this database was linked with a health database of, for example, HIV patients to analyse the employment prospects of people with HIV. In this case, learning which records are matches reveals to the DO who in their database likely has this illness. On the other hand, if the database held by the other DO contains information about taxpayers, then any record in its own database that is not matched points to an individual who might not have paid taxes. In both these examples, the DO can learn sensitive information about individuals in their own database. This leakage happens even when the PPRL version of this protocol (shown in the right-hand side of Figure 3) is employed. The reason is that PPRL protocols aim to hide sensitive information in QID values from the LU, but they do not (at least existing PPRL protocols) hide which records were matched and which were not.
One way to overcome such information leakage to the DOs is to use one of the protocols shown in Figure 4, where each DO sends the PD of all its records to the DM and receives no information about matched records from the LU nor any other party participating in the protocol. Such an approach, however, can leak information to the DM, as we discuss below.
The LU alone: In both TDL versions of the protocols shown in the left-hand side of Figures 3 and 4, the LU obtains plain-text QID values from the DOs. Knowing any information about the context of the sources of these databases (for example, from file names or database table names), such as the HIV database in the example above, means the LU learns about all records in the databases to be linked even if the DOs do not send any PD to the LU. Combining the QID values in $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ with external data (such as publicly available social media profiles) might allow a curious employee of the LU to learn even more personal details about these individuals. Furthermore, the outcomes of the linkage (which records are matched and which ones are not) results in a similar type of information leakage at the LU as occurred at a DO as described above. This is because the LU does have access to the QID values that were required for the linkage.
With the corresponding PPRL versions of these protocols, as shown in the right-hand side of Figures 3 and 4, and assuming these protocols are secure against attacks [56, 58], the LU will not be able to learn any sensitive information about any of the individuals that are represented by the encoded QID values sent to the LU by the DOs. The main aim of PPRL techniques is to prevent any leakage of sensitive information to the LU [17, 52]. The information that a PPRL protocol likely leaks to the LU are the calculated similarities between records, and how many of the compared record pairs were classified as matches. It has been shown that in some situations such similarity information can be successfully attacked by a LU [10, 41, 55]. Further development of improved PPRL techniques is needed that can hide the similarities calculated between records while still achieving high linkage quality.
The DM alone: For the protocols based on the separation principle shown in Figure 3, only information about matched records is being provided to the DM by the DOs. The linked data set the DM can create from the matched pairs of records, M, and the corresponding PD sent to it by the DOs, would have been approved by the institutional review board or ethics committee that previously assessed the linkage project being conducted. Therefore, for these protocols, no unintentional information will be leaked to the DM from the data it receives from the DOs. However, especially after record pairs are linked, the received PD might still contain enough information to allow the DM to reidentify some individuals represented by matched records. Such reidentification attacks have been shown to be feasible even on supposedly anonymised data [40].
On the other hand, in the protocols without data backflow shown in Figure 4, the DM obtains the PD of all records (both matched and non-matched) in both databases $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ being linked, while from the LU the DM receives the set of matched pairs, M. From these files, the DM can learn the numbers of matched and non-matched records in each database, as well as the characteristics (values and frequency distributions) of the PD attributes of matches and nonmatches. These can potentially leak sensitive information. While the results of a linkage project, the PD of the matched records in the two databases $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ , would have been approved for research use by the institutional review board or ethics committee that assessed the linkage project, non-matching records in $\mathbf { D } _ { A }$ and $\mathbf { D } _ { B }$ would generally not be covered by such agreements. Therefore, any information the DM can learn from non-matched records will be unintentional leakage of possibly sensitive information.
For example, assume the above-discussed employment and taxation databases are being linked. If only $10 \%$ of records with an employment category ‘CEO’ were matched, then the DM learns sensitive financial information about this group of individuals, namely that the majority of CEOs do not pay taxes. It could even be that none of the CEOs with an age above 50 and gender ‘male’ have been matched, indicating that no individual in this group of men does pay taxes. As a second example, if $2 5 \%$ of employment records with the occupation ‘Bartender’ are linked to the HIV database (while in total, only $2 \%$ of records in the employment database were matched) then this again reveals highly sensitive information about people with this occupation and their health status. This type of information leakage is known as group disclosure [60], and it results from potential differences between matched and non-matched records. These differences could not be learnt if the DM only obtains the PD of matched records as in the separation principle based protocols shown in Figure 3. While it might not be possible to reidentify individuals this way, group disclosure can lead to discrimination against groups of people with certain characteristics.
For the protocols without data backflow, similar to the separation principle based protocols, the DM can also attempt a reidentification attack because the output of the merging step (the set of matched record pairs M) is the same for both types of protocols. Furthermore, because the DM receives the PD values of all records in both databases, it can also mount a reidentification attack on the not-matched records (that do not occur in M).
One DP alone: In none of the four protocol variations shown in Figures 3 and 4 a DP receives any data from any other party. Therefore, assuming no collusion between parties, no leakage of sensitive information from another party is possible at the DP.
The DA alone: In all protocol versions, both the ones based on the separation principle or those based on no data backflow, the output of the linkage project is a SUF (as generated by the DM) that contains the PD of the record pairs that have been matched. As Figure 2 illustrates, such a SUF can be passed on to a DA that applies statistical disclosure control (SDC) methods [14, 15, 46, 47] to create a PUF that can be made publicly available.
In the separation principle based protocols, similar to the DM (the party that generates the SUF) a curious DA can mount a reidentification attack on the SUF in the same way as the DM could mount such an attack because both the DM and the DA have access to the same type of data. However, in the protocols without data backflow, a DA only have access to the SUF, which contains the PD of matched records, while the DM also has access to the PD of all non-matched records.
As with any other party, the DA could also try to source external data to assign the PD values of matched record pairs in the SUF to publicly available identifying information (such as obtained from social network sites, telephone directories, or voter databases) [60]. For example, a certain combination of postcode, age, and education level might already be unique enough to reidentify some individuals in an SUF by matching their values to external data [40].
The DU alone: An approved, but curious, $\mathrm { \ D U _ { A } }$ who obtains a SUF can mount the same reidentification attacks as the DA because it has access to the same information as the DA (the SUF).
For a public $\mathrm { D U _ { P } }$ , on the other hand, who obtains the PUF, we need to assume that the SDC methods applied on the SUF by the DA have resulted in a PUF that is safe with regard to any (currently existing) privacy attacks. Therefore, no information leakage should be possible from a PUF at a $\mathrm { \Delta D U _ { P } }$ . In the era of Big Data, where much information about many individuals is publicly available, for example, on social networking sites, this assurance is being questioned [60, 11]. A malicious party might use illegally obtained data, such as health or financial data retrieved from the dark Web, to enrich the legal data available to it in a PUF.
As we have shown, some sensitive information can be leaked unintentionally, even if a single party (or one of its employees) behaves in a curious way. Importantly, information leakage is even possible when PPRL protocols are employed, as can also be seen in Table 2. As we discuss next, once parties collude and share some of their data or information about the PPRL technique being used, even more can be learnt by the colluding two parties.
# 4.2 Information Leakage when Parties Collude
We now describe how collusion between parties can potentially lead to information leakage. We only discuss collusions involving two parties, as any collusion involving three or more parties would require detailed planning and likely involve malicious intent. This is distinct from situations where, for example, curious employees share information that interests them.
We start with the three main types of parties involved in a linkage project (the DOs, LU and DM, as shown in Figure 2), and discuss the motivation these parties might have in colluding. Without loss of generality, we assume two DOs are involved in a linkage protocol. However, the following discussion also holds for situations when sensitive databases from more than two DOs are linked.
Two DOs collude: If the DOs decide to (possibly legitimately) work together, the result is comparable to a direct exchange of some content (such as QID values) of their databases. One motivation for the DOs to directly share their data would be their desire to not involve any additional party in the linkage of their databases. Reasons for doing this could be commercial interests or privacy regulations, where sharing sensitive data (such as about the customers of a business or patients with a certain disease) with other parties might be seen as a risk or is not permitted.
On the other hand, if there is illegitimate collusion (for example by an employee of one DO who shares information with another DO) then information leakage can range from a single shared record (or subsets of the QID and/or PD values of a record) all the way to potentially a full sensitive database being shared, which would correspond to a major data breach. One example scenario of why the DOs would be motivated to collude is when employees of the DOs exchange the QID values of one or more records in their databases to see if both databases contain records with these same or similar values. Sharing these values would allow to generate ground truth data (of records that do occur in both databases as well as records that only occur in one database) that can be used to evaluate the quality of the matches generated by the LU or to train a supervised machine learning classification method. A second motivation might be improving a later linkage by sharing QIDs to facilitate data standardisation and harmonisation [4].
Note that in the context of PPRL, two-party protocols have been developed which aim to accomplish a direct linkage of two databases without the DOs learning about each other’s sensitive data [5, 53]. However, even in such two-party PPRL protocols, both DOs learn which of the records in their databases were matched, and which ones were not. As we have shown in Section 4.1, this knowledge can reveal sensitive information about the individuals whose records are stored in a database.
One DO colludes with the LU: The motivation for such a collusion is for the colluding parties to learn information about the sensitive data held by the not colluding DO. In both types of TDL protocols (the separation principle based one and the protocol without data backflow), the LU obtains the QIDs from both DOs, and by doing the linkage, it learns which records are part of a match and which are not. Sharing this information with the colluding DO is similar to the above discussion, where one DO sends the QID values of all its records to the other DO. However, the colluding DO will also learn which of its own records match across the two databases and which do not. This could reveal sensitive information, as discussed in the above taxation database example. For TDL protocols, any collusion between a DO and the LU can, therefore, result in substantial leakage of information from records in the database of the not-colluding DO.
In a commercial scenario, one business (DO) could be interested to learn about all customers of the other DO and which of its customers are not also customers of the other business. Similarly, in a research environment, the generation of ground truth (as described above) could be a reason for an employee of the LU to collude with an employee of one DO. The colluding DO could, for example, validate the matches generated by the LU by inspecting and comparing the QID values from both DOs.
Furthermore, knowing the source of a database (such as in the previous example containing records about HIV patients) will potentially leak sensitive information to the colluding DO, which this DO is unlikely allowed to learn. Because, generally, the linkage of databases has been approved (either by the two DOs alone or involving an ethics committee or institutional review board), both DOs will know the general background and content of each other’s databases, such as if they contain records of HIV patients or taxpayers. Even if only QID values are shared (but no PD values), the colluding DO can learn sensitive information about the individuals whose records occur in the database of the not-colluding DO.
When a PPRL protocol is employed, then QID values are hidden from the LU because they are encoded [17, 53]. If the colluding DO shares with the LU the encoding algorithm and parameter settings used by the DOs to encode their QID values, then the LU can mount an attack on the encoded QID values it has received from the not colluding DO (as well as the QID values of the colluding DO) [58]. Knowing all the parameters of the encoding technique used in a PPRL protocol can allow the reidentification of encoded QID values for many records in an encoded database [30]. Collusion between one DO, and the LU is, therefore, one of the biggest weaknesses of the current PPRL methods [56], and further research is required to prevent information leakage in such situations.
One DO colludes with the DM: In the separation principle based TDL protocol, the DM only obtains the PD of those records that are involved in a match from both DOs together with the match identifiers, while in the protocols without data backflow, the DM receives the PD of all records in both databases plus the match identifiers from the LU.
The motivation for a DO to collude with the DM would be to obtain the PD values of records from the non-colluding DO because these contain information (likely sensitive) the colluding DO is not supposed to have access to. Because the DM knows which records are part of a match (for both types of TDL protocols), the colluding DO will be able to assign the PD from records in the other database to its own matching records, and thereby, it can learn potentially sensitive information about the corresponding individuals. An example motivation for the DM could be either direct payment or the promise of the next merge job.
This information leakage can also happen in PPRL protocols because, even with such protocols, the DM still obtains the plain-text PD values and the encrypted record identifiers of all matching record pairs. The DM can, therefore, send the PD values of the non-colluding DO to the colluding DO together with the match identifiers, which will allow the colluding DO to associate the PD of records from the other DO’s database to its own records that were matched by the LU. Alternatively, the colluding DO might send the QID values of its matching records to the DM. In such a scenario, the DM and colluding DO would learn the identities of all individuals whose records have been matched and the PD values from both databases for these individuals. Such leakage of information is again possible for both TDL as well as PPRL protocols.
As an example, two curious employees (one at a DO and one at the DM) aim to learn the PD of a celebrity or politician, which only requires the employee of the DO to let the colluding employee of the DM know which record identifier (encrypted for a PPRL protocol) corresponds to the individual they are interested in.
The LU colludes with the DM: In the separation-principle based TDL protocol, the combined information the LU and the DM can access consists of the QID and PD values of all records involved in matched record pairs. This corresponds to the SUF with QID values attached to each record in the SUF. For any records (from both databases) not involved in a match, the LU also has the QID values but the DM does not have corresponding PD values. Therefore, besides knowing about the source and overall content of these databases (like the HIV and taxation examples from above), no PD will be available for those individuals whose records are not part of a match.
In the TDL protocol without data backflow, however, together the LU and DM have access to both the QID and PD values of all records from both source databases, and using the record identifiers, they can reconstruct the full input databases. Furthermore, they also know which records have been matched across the two databases. Any such collusion could, therefore, lead to a full data breach.
For the PPRL based protocols, the LU does not know anything about the QID values because these are encoded. If this encoding is secure [58], then, in such a collusion, the LU could only learn the PD values of the matched records (for a separation principle-based protocol) or all records (for a protocol without data backflow). However, the LU would not be able to assign these PD values to any specific individuals because, in a PPRL protocol, these are hidden from both the LU and DM.
We finally discuss what the other parties involved in a data linkage protocol (the DP, DA and DU) can learn if they collude with another party and the motivation of such a collusion.
A DP colludes with another party: A Data Provider (DP) might be motivated differently than a DO to contribute their data for a data linkage project. A DP can, for example, be a commercial business or a government agency, while a DO can be a data repository in a research organisation or a National Statistical Institute. If the DP is a commercial provider, it is motivated to learn more about the individuals in its database and their corresponding QID and PD values from the other database, such as up-to-date contact details or PD values that complement the DP’s own PD values, which can be useful to its business.
Therefore, a DP might consider to collude with the LU (to obtain the QID values of the other DO) or the DM (to obtain the PD values of records that were matched by the LU). Such a collusion with the DM would be possible for both TDL and PPRL protocols. However, a collusion with the LU would only lead to information leakage for TDL based protocols. This is because in PPRL protocols the LU only obtains encoded QID values and neither the DP nor the LU know how these are the DOs encoded original QID values.
The DA colludes with another party: The Data Anonymiser (DA) obtains the SUF from the DM, and therefore any collusion with the DM, LU or DO could be aimed at enriching this SUF with QID values, similar to the collusions of the DM with other parties we described previously. A (possibly friendly) motivation of the DA to collude would be to obtain QID values (and possibly PD values of not-matched records) to validate if the SDC [14] methods applied on the SUF by the DA are secure and prevent any reidentification of individuals whose PD values are included in the SUF. Of course, malicious motivations are possible as well.
A DU colludes with another party: A Data User (DU) is a researcher or analyst who is motivated to obtain as much data as possible for the study they are working on. If they can only access a restricted SUF or PUF, allowing them to conduct their research in a limited way, they might aim to find other publicly available data that is of use for their work, or they might contact the DOs or DPs involved in the data linkage protocol to see if it would provide extra data that was not included in the SUF or PUF.
# 5 Discussion and Recommendations
When databases from different sources have to be linked, various types of protocols have been developed to communicate the required data between the parties involved in such a protocol.
As we have shown, these different types of protocols and linkage approaches (TDL or PPRL) lead to different types and amounts of information being leaked to some parties involved in a data linkage protocol, as can be seen from Table 2.
Importantly, as we discussed in Section 3, no current data linkage protocol or PPRL technique can prevent information leakage at all parties involved in a protocol. It is important to understand that current PPRL techniques only hide the QID values that are used for the comparison of records from the LU. These techniques, however, do not hide which records were classified as matched, nor any aspects of the payload data (PD) which is to be used by researchers for analysing the linked records.
Given these current gaps in any data linkage protocol, we provide the following recommendations for anyone who is nvolved in linking sensitive databases across organisations:
1. Carefully assess a specific data linkage protocol being developed, including the linkage techniques being employed, the parties involved, and the data flow between these parties.
2. Using the list of potential leakages discussed in Section 4, assess where information leakage could happen, and accordingly design processes and methods to prevent potential information leakage.
3. Use PPRL techniques as much as possible within a given linkage setting if permitted by regulations and policies.
4. Data sets, database tables, and files should not be named in a way that reveals potentially sensitive information. If data is exchanged between parties [5], all files and communications must be properly encrypted.
5. Employ the Five Safes [12] framework (safe projects, safe people, safe data, safe settings, and safe outputs), which makes actors in a data linkage project more aware of non-technical aspects of a linkage. Note that PPRL techniques only address the safe settings dimension of the Five Safes framework.
6. Proper education and training are important, given human mistakes, curiosity or unexpected behaviour might happen in otherwise highly regulated environments [7].
7. Proper setup and deployment of access control mechanisms [5] are required to ensure a user can only access the files they require for their work but no other files.
8. Implement monitoring and logging of activities on secure systems that hold sensitive data to identify and possibly discourage unauthorised access.
It should be kept in mind that, given human beings are involved in data linkage protocols, it is impossible to have provably secure systems. Therefore, the remaining small risk of information leakage must be considered in any project that involves linking sensitive data across organisations. However, it should also be remembered that not linking data involves other potential losses [29]. | The process of linking databases that contain sensitive information about
individuals across organisations is an increasingly common requirement in the
health and social science research domains, as well as with governments and
businesses. To protect personal data, protocols have been developed to limit
the leakage of sensitive information. Furthermore, privacy-preserving record
linkage (PPRL) techniques have been proposed to conduct linkage on encoded
data. While PPRL techniques are now being employed in real-world applications,
the focus of PPRL research has been on the technical aspects of linking
sensitive data (such as encoding methods and cryptanalysis attacks), but not on
organisational challenges when employing such techniques in practice. We
analyse what sensitive information can possibly leak, either unintentionally or
intentionally, in traditional data linkage as well as PPRL protocols, and what
a party that participates in such a protocol can learn from the data it obtains
legitimately within the protocol. We also show that PPRL protocols can still
result in the unintentional leakage of sensitive information. We provide
recommendations to help data custodians and other parties involved in a data
linkage project to identify and prevent vulnerabilities and make their project
more secure. | [
"cs.CR",
"cs.DB"
] |
# 1. Introduction
Approximately 70 million deaf individuals worldwide use sign language as their first language (WHO, 2021), yet they continue to face communication barriers in education, healthcare, and public services. Most mainstream sign language assistance systems rely on a multi-stage pipeline of speech $ \mathrm { t e x t } \mathrm { g l o s s } \mathrm { l }$ pre-recorded animations, often resulting in rigid outputs lacking facial and torso signals. Additionally, users struggle to effectively modify or personalize AI-generated results, leading to low adoption rates within the deaf community (Dimou et al., 2022).
In recent years, the Transformer architecture has emerged as a core method in motion generation and sign language synthesis due to its exceptional performance in sequence modeling (Saunders et al., 2020b). Transformers can effectively capture long-range dependencies, enabling direct mapping from speech or text to continuous 3D keypoint sequences. However, existing approaches still face two critical challenges: first, models often function as ”black boxes,” lacking interpretability and user engagement mechanisms; second, constrained by computational complexity and pipeline coupling, practical systems frequently struggle to achieve low-latency, highly interactive deployment (Saunders et al., 2020a).
To address these challenges, this paper adopts a HumanCentered AI (HCAI) framework, proposing an end-to-end streaming Conformer-Transformer pipeline that directly converts speech or text into 3D sign language motions (including upper-body and facial keypoints). The system enhances usability, interpretability, and user trust through structured, editable JSON scripts and a human-in-the-loop feedback mechanism. Our design empowers users—particularly deaf individuals and sign language interpreters—to intuitively review, edit, and personalize each sign language sequence via drag-and-drop interfaces and real-time animation playback, achieving truly ”controllable” AI output. The system continuously collects user edits and ratings, periodically fine-tuning models with this feedback to ensure alignment with real-world needs.
To systematically validate the effectiveness of humancentered design, we developed a 34-item Likert-scale questionnaire covering seven dimensions (comprehensibility, naturalness, interpretability, controllability, trust, etc.) and conducted comparative evaluations with 20 native deaf signers and 5 professional interpreters across two modes: ”Auto” (automatic generation) and ”Edit” (generation $+$ editing). Results show that Edit mode improved comprehension by $2 7 \%$ , naturalness by $23 \%$ , and System Usability Scale (SUS) scores by 13 points, while reducing overall cognitive load by $1 5 \%$ (see Section 4).
The contributions of this paper are as follows:
• Real-time streaming skeletal motion generation: Proposes an end-to-end sign language motion generation system based on streaming ConformerTransformer, balancing high fidelity and low latency to meet practical accessibility needs (Saunders et al., 2020a; Damdoo & Kumar, 2025).
• Structured editable interaction layer: Designs JSON scripts and a drag-and-drop editor covering elements such as gloss, handshape, duration, and facial expressions to enhance interpretability and controllability.
• Human-centered optimization and closed-loop iteration: Introduces a continuous optimization mechanism based on user feedback and expert annotations, supporting model fine-tuning and adaptive system evolution.
• Empirical evaluation: Through mixed quantitative and qualitative experiments, this study is the first to empirically validate the significant improvements in comprehension, naturalness, usability, and other multidimensional metrics through human-centered design.
The paper is structured as follows: Section 2 reviews related work; Section 3 describes the system architecture and data flow; Section 4 presents evaluation methods and results; Section 5 discusses limitations and improvements; Section 6 concludes with future work.
# 2. Relative Works
# 2.1. Transformer in Sign Language Motion Generation
The Transformer’s self-attention mechanism effectively captures long-range sequence dependencies. Since its introduction by Vaswani et al. (2017), it has been widely applied to motion generation and sign language modeling. Saunders et al.’s (2020b) Progressive Transformer treats sign language generation as a sequence-to-sequence translation task, directly mapping text or gloss sequences to continuous 3D skeletal point sequences, achieving state-of-theart (SOTA) performance. Subsequent research improved naturalness and diversity through multi-channel modeling, Mixture Density Networks (MDN), and multi-task optimization (Saunders et al., 2020a). Damdoo and Kumar’s (2025) SignEdgeLVM achieved near 30 FPS real-time skeletal animation generation on edge devices with low computational power, demonstrating the lightweight potential of Transformer architectures.
Recent advancements have also been made in diffusion and latent variable Transformer methods: wSignGen achieved more realistic motion details and grammatical accuracy in word-level 3D ASL motion generation (Dong et al., 2024b); SignAvatar combined CVAE with a Transformer architecture to achieve highly robust 3D motion reconstruction and generation (Dong et al., 2024a); the latent variable Transformer proposed by Xie et al. (2024) surpassed traditional Seq2Seq models on both WLASL and PHOENIX14T benchmarks; SinDiff significantly improved the coherence and detail consistency of long-sequence sign language generation through a Transformer diffusion framework (Liang & Xu, 2023).
# 2.2. Sign Language Translation Paradigms
Traditional sign language translation systems often adopt a ”text $$ gloss $$ motion” intermediary approach, which is logically clear but requires extensive gloss annotation and struggles to capture facial and torso grammar (Tan et al., 2024). For convenient cross-comparison, commonly used benchmark datasets include RWTH-PHOENIX-Weather 2014T (Koller et al., 2015), WLASL (Li et al., 2020), and the phoneme-annotated WLASL-LEX (Tavella et al., 2022), which provide unified standards for evaluating the performance of different paradigm approaches. Gloss-free endto-end methods reduce annotation needs through weak supervision or latent variable alignment: for example, GASLT (Yin et al., 2023) and GloFE (Lin et al., 2023) proposed weak supervision mechanisms based on gloss-attention and semantic alignment, respectively. SignVQNet (Hwang et al., 2024) used discretized latent codebooks to enable direct text-to-motion mapping but still faces challenges in data scale and temporal synchronization.
# 2.3. System Optimization
In practical deployment, generated sign language motions must drive avatars or animated characters in real time. Cui et al. (2022) proposed a 3D skeletal point regression method based on spatio-temporal graph convolution, combined with inverse kinematics (IK) for smooth animation. Shi et al. (2024) achieved significant improvements in interframe consistency and motion smoothness through their fine-grained video generation technology based on optical flow warping and pose fusion modules. Gan et al. (Gan et al., 2023) achieved end-to-end inference for 9.2s videos on edge devices, highlighting the importance of lightweight models and efficient rendering pipelines.
# 2.4. Human-Centered Design
Human-Centered AI (HCAI) emphasizes three principles: ”transparency, controllability, and trustworthiness” (Shneiderman, 2022), advocating for deep involvement of target users in AI system design, testing, and iterative feedback. The foundational work in interactive machine learning also supports HCAI theory: Fails & Olsen (2003) proposed the concept of ”Interactive Machine Learning,” emphasizing users’ active role in model training; Amershi et al. (2014) systematically summarized the crucial role of human-computer collaboration in interactive machine learning, further highlighting the importance of participatory feedback for improving model performance. Dimou et al. (2022) demonstrated that meaningful participation by Deaf communities significantly improves the acceptance of sign language avatars. Kothadiya et al.’s (2023) SignExplainer framework integrated explanation layers and user correction in recognition tasks, showing that ”explainable” design enhances AI trustworthiness. However, existing sign language generation systems rarely support real-time editing or human-AI collaborative closed-loop optimization. Our work fills this gap by systematically validating the multidimensional benefits of human-centered design principles in accessible sign language animation.
# 3. Methodology
Modern AI-powered sign language generation systems must not only achieve real-time efficiency and natural movements, but also deeply integrate human-centered design principles to fundamentally address users’ diverse needs and societal ethical expectations. This section details the overall architecture, algorithmic principles, and human-centered interaction flow of the end-to-end speech-driven sign language animation system proposed in this study. Our solution is designed around three core principles - ”real-time performance, explainability, and user participation” - achieving for the first time an organic integration of Transformer-generated motion sequences, structured intermediate representations, and user-controllable closed-loop optimization.
# 3.1. System Architecture and Data Flow
Our end-to-end speech-to-sign animation pipeline comprises six tightly integrated modules:
1. Streaming Conformer Encoder: Incoming audio frames $x _ { 1 : T }$ are converted into high-level representations $\mathbf { H }$ with an internal encoder delay of $\leq 5 0 m s$ ; together with decoding, IK, and rendering stages (see $\ S 3 . 4 )$ , the total \*speech-to-avatar\* latency is bounded to $\leq 1 5 0 m s$ end-to-end.
2. Autoregressive Transformer-MDN Decoder: Conditions on $\mathbf { H }$ and previous motion latents $\left\{ \boldsymbol { z } _ { < t } \right\}$ to produce a sequence of 128-dim latent vectors $z _ { t }$ , gloss labels $g _ { t }$ , and AU labels $a _ { t }$ .
3. Structured JSON Generator: Maps $\{ z _ { t } , g _ { t } , a _ { t } \}$ into a human-readable intermediate representation $\mathcal { I }$ , exposing fields $\{ \mathfrak { g l o s s } , \mathtt { s t a r t } , \dots \}$
4. Interactive JSON Editor: Allows users to inspect and modify $\mathcal { I }$ ; any edit triggers local resampling of the affected $z _ { t }$ subsequence.
5. Unity3D IK Renderer: Binds the final motion latents $\left\{ { z } _ { t } ^ { \prime } \right\}$ to a 3D humanoid rig using Two-Bone IK and spline smoothing, producing real-time animation $\mathcal { A }$ .
6. Edge-side Optimization & HITL Feedback: Applies model pruning and quantization for sub- $\cdot 2 0 \mathrm { m s } \prime$ /frame inference, while capturing user edits and ratings for periodic human-in-the-loop fine-tuning.
Audio Input (Microphone/Audio File) Analytics& Model Update ·Quantitative metrics -Comprehension,SUS,Trust, ASR&Text Normalizer ECE... Conformer-based ASR ·Qualitative tags (T1-T4) Punctuation&Casing ·Retrain /fine-tune Text→Gloss Sequence
Action-Structure Generator (Transformer) Encoder:Conformer-Transformer stacks on gloss tokens Video $^ +$ Data Recorder Multi-head self- Positional attention enati bisas OBS Studio capture Frame-sync logs User interaction logs Decoder: JSON-token autoregressivegeneration Generates structured “Action Structure" in Human-in-the-Loop Editing Ul JSON form: Tree-viewof JSONfields 1 Handshape IDs Sliders/drop-downsforparameter Tran-ctorycurmearkers Instant preview in rendering pane Syntax tags (emphasis, negation) Unity3D Rendering Engine Training loss Skeletal rig + blendshapes cross-entropy $^ +$ structural consistency term Real-timeGPU skinning Lighting&camera setup Motion Synthesis Module (Keypoint Generator)
Input: Edited Converts JSON tokens →3D hand & body Output: Frame
JSON "Action E keypfrnmst :0m
During initial design phases, three rounds of interviews and co-creation workshops with deaf users and interpreters surfaced three principal requirements:
• Real-time alignment: end-to-end latency $< 1 2 8 ~ \mathrm { m s }$ , • Expressive diversity: synchronized upper-body and facial motion generation, • Full user agency: transparent, editable intermediate layer with continuous user intervention.
Following the IDEO “Insight–Principle–Solution” framework (IDEO.org, 2015), these requirements directly informed our three architectural pillars:
(i) Streaming acoustic–semantic alignment, (ii) Multi-channel structured motion & non-manual signals, (iii) Editable JSON layer $^ +$ human-in-the-loop optimization.
# 3.2. Streaming Conformer Encoder with Transformer-MDN Action Generation
The system first splits the input audio into frames $\mathrm { 2 5 m s }$ window, $1 0 \mathrm { m s }$ hop) and extracts an 80-dimensional Melspectrogram sequence
$$
X = \{ x _ { t } \} _ { t = 1 } ^ { T } .
$$
A 6-layer streaming Conformer (each layer with model dimension $d = 2 5 6$ , 4-head causal self-attention plus local conv) (Gulati et al., 2020) then encodes $X$ into a downsampled prosody–semantic feature sequence
$$
H = \{ h _ { n } \} _ { n = 1 } ^ { N } , \quad N \approx T / r , \ r \in \mathbb { N } ,
$$
ensuring end-to-end latency $< 1 6 0 \mathrm { m s }$ (Damdoo & Kumar, 2025). Details are shown in Algorithm 1.
In the decoding stage, an autoregressive Transformer-MDN generates, at each step $t$ the autoregressive TransformerMDN outputs a 128-dimensional latent vector $\boldsymbol { z } _ { t } \in \mathbb { R } ^ { 1 2 8 }$ . This vector is produced by a two-stage VAE compressor that projects the 228 raw SMPL-X pose parameters (75 body $+ 1 4 3$ hand / finger $+ 1 0$ AUs) (Pavlakos et al., 2019) into a compact latent sub-space learned jointly with the decoder, preserving $9 9 . 3 \ \%$ motion variance while enabling fast sampling. The MDN models
$$
p ( z _ { t } \mid z _ { < t } , H ) = \sum _ { k = 1 } ^ { K } \pi _ { k } \mathcal { N } \big ( z _ { t } \mid \mu _ { k } , \sigma _ { k } ^ { 2 } I \big ) , \quad K = 5 ,
$$
capturing multimodal motion distributions (Bishop, 1994). Concurrently, the decoder outputs gloss logits $g _ { t }$ (vocabulary $\sim \mathrm { 3 k }$ , cross-entropy loss) and AU logits $a _ { t }$ (7 classes, Focal loss) via multi-task heads (Lin et al., 2017). The joint training objective is
$$
\begin{array} { r } { \mathcal { L } = \lambda _ { 1 } [ \mathcal { L } _ { \mathrm { b o d y } } + 3 \mathcal { L } _ { \mathrm { h a n d } } ] + \lambda _ { 2 } \mathcal { L } _ { \mathrm { G l o s s } } + \lambda _ { 3 } \mathcal { L } _ { \mathrm { A U } } } \end{array}
$$
where the suggested values of $\lambda$ are:
$$
( \lambda _ { 1 } , \lambda _ { 2 } , \lambda _ { 3 } ) = ( 1 , 0 . 6 , 0 . 4 ) ,
$$
where hand-joint errors receive a $3 \times$ weight to emphasize clarity of critical articulations. The pipeline is shown in Figure 2.
Figure 2. Streaming Conformer–Transformer–MDN pipeline.
# 3.3. Editable Transformer Architecture in Sign Language Motion Generation
To bridge model inference and human agency, we introduce a structured JSON intermediate layer that explicitly exposes the core parameters of each sign unit—gloss label, start/end timestamps, handshape, motion trajectory, facial expression, and syntactic role—rather than treating generation as a “black box.” Crucially, this schema (gloss, start, end, handshape, movement, expression, syntax tag, etc.) was not arbitrarily defined by developers but co-created through two rounds of card sorting, priority voting, and participatory workshops with deaf users and professional interpreters. This ensures that the JSON fields align with users’ mental models and supports intuitive manipulation in the front-end editor.
Resampling Hook $z _ { 1 } , \dots , z _ { \{ k - 1 \} } ,$ Original 2.ForE,mp Latents-1, Updated Rrerender ZK,..,ZT 3. Collect new Zz ... ZT zk..,.ZT
In the live interface, each JSON entry is bound to the corresponding latent sequence segment via the “Resampling Hook” in Algorithm 2. Users modify any field using dragand-drop, sliders, or dropdowns; the system detects the edit, injects the updated latent $\hat { z } _ { t - 1 }$ into the autoregressive Transformer-MDN decoder, and efficiently recomputes only the affected subsequence $\{ z _ { t } , \dots , z _ { T } \}$ . This partial resampling strategy preserves overall fluency while delivering sub- $1 0 0 \mathrm { { m s } }$ responsiveness.
{ "gloss": "THANK-YOU", "start": 0.45, "end": 1.08, "handshape": "flat", "movement": "forward", "expression": "smile", "syntax_tag": "statement"
}
To further enhance explainability, we map the MDN mixture weights $\left\{ \pi _ { k } \right\}$ onto the 3D skeleton as a per-frame heatmap: higher $\pi _ { k }$ values render with greater opacity, enabling users to pinpoint segments where the model’s uncertainty or multimodality is greatest and focus their edits there. All UI components—including heatmap toggles and JSON fields—support keyboard, voice, and assistive-device inputs in compliance with WCAG 2.2 AA, ensuring equitable access across diverse user abilities.
# 3.4. Unity3D Animation Rendering and Client-Side Optimization
The generated motion key points are mapped and bound to the Humanoid Rig skeleton in Unity3D. We employ Two
Bone IK algorithms (Hecker et al., 2008) and Spline interpolation smoothing to further enhance motion naturalness and physical plausibility. On the inference side, the model utilizes $30 \%$ weight pruning, INT8 quantization, and TensorRT acceleration, reducing average frame time to $1 3 m s$ (RTX 4070 Mobile). Together with (i) audio feature extraction $\approx 7 m s$ , (ii) Conformer encoding $\approx 3 0 m s$ , (iii) inverse kinematics $\approx 1 8 m s$ , and (iv) Unity rendering $\approx 3 5 m s$ , the end-to-end speech-to-avatar delay is $1 0 3 \pm 6 m s$ , comfortably below our $1 5 0 m s$ target. Even on standard notebook CPUs, it maintains stable performance at 15-25 FPS, enabling practical deployment on edge devices.
# 3.5. Human-in-the-Loop Optimization
To continuously align the model with real user needs, we embed a closed-loop feedback mechanism in production:
Feedback collection $\pmb { \& }$ model fine-tuning After each generation or edit session, users rate the animation on a 5-point Likert scale, and all JSON diffs are logged. Weekly, professional interpreters annotate selected historic segments for terminology and grammatical accuracy. We assemble triplets $( \mathcal { I } ^ { \mathrm { o r i g } } , \mathcal { I } ^ { \mathrm { e d i t } } , r _ { u } , r _ { e } )$ —original JSON, user revision, user rating $r _ { u }$ , expert rating $r _ { e }$ —as incremental training data. The decoder parameters $\theta$ are then fine-tuned by minimizing a KL-regularized multi-task loss, combined with a PPO-style reward:
$$
J ( \theta ) = \mathbb { E } _ { \pi _ { \theta } } \Big [ \sum _ { t = 0 } ^ { \infty } \gamma ^ { t } R _ { \phi } ( s _ { t } , a _ { t } ) \Big ] , R _ { \phi } ( s _ { t } , a _ { t } ) = w _ { u } r _ { u } + w _ { e } r _ { e } ,
$$
where $\mathrm { D } _ { \mathrm { K L } }$ -regularization encourages the updated policy $\pi _ { \boldsymbol { \theta } }$ to remain close to the pretrained one, and $( w _ { u } , w _ { e } )$ balance user versus expert signals (Schulman et al., 2017). Empirically, we perform micro-batches of fine-tuning every two weeks.
Design process & data governance Our development follows the Double Diamond model (Design Council, 2005):
• Discover & Define: user co-creation workshops, painpoint mapping, prioritization;
• Develop & Deliver: prototype testing, expert review, data-driven iteration.
All audio/video samples undergo face-blurring and skeletal abstraction, then SHA-256 hashing for de-identification, in compliance with GDPR Art. 9(4) and CC-BY-NC 4.0.
# 4. Evaluation Methods and Results
This section systematically presents the multidimensional performance of our human-centered AI sign language animation system among real user groups. We combine quantitative and qualitative methods, focusing on key metrics such as usability, explainability, trustworthiness, editing burden, and inclusivity. The evaluation balances practical engineering metrics with design-science rigor.
# 4.1. Evaluation objectives, participants, and experimental procedures
Our evaluation objectives fall into two categories: (1) quantifying the system’s performance in terms of comprehension, naturalness, controllability, trustworthiness, and editing load; (2) qualitative analysis of how participatory information architecture and human-centered closed-loop optimization enhance user experience and actual sign language production workflows. The experimental procedure employs a Latin square balanced design to control order effects, with all tests conducted in quiet environments equipped with standard PCs and Unity animation preview interfaces.
Each participant sequentially completes two sets of tasks: ”Auto-generation (Auto)” and ”Generation $^ +$ Editing (Edit)”, with each set containing 8 typical dialogue tasks (such as greetings, instructions, terminology, emotional expression, etc.), totalling 16 rounds of interaction. After each round, participants immediately fill out a Likert scale questionnaire and participate in a semi-structured interview, with the entire process being audio and video recorded. Both the scale and task order are controlled for sequence bias using a Latin square design.
# 4.2. Quantitative indicators and measurement tools
We comprehensively adopted multi-dimensional evaluation scales including system usability, cognitive load, trust and controllability, with quantitative analysis as follows:
• Comprehensibility (C1–C4, Likert 1–5): Users’ subjective assessment of animation semantic accuracy;
• Naturalness (C5–C8, Likert 1–5): Motion fluidity and facial expression naturalness;
• System Usability (SUS, C9–C18, 0–100): Standard system usability score
• Explainability & Controllability (C19–C26, Likert 1–5): Control capability over JSON structure and interaction flow;
• Trust & Satisfaction (C27–C30, Likert 1–5): Trust in AI output results and overall satisfaction;
• Cognitive Load (NASA-TLX simplified version, C31–C34, 0–100): Mental demand, physical demand, temporal demand, and overall burden
Additionally, we recorded completion time per task, edit counts, and distribution of frequently edited fields.
Editing behavior analysis shows that users make an average of 1.7 edits per sentence in Edit mode, with the most frequent being hand gestures $( 4 2 \% )$ , duration $( 2 8 \% )$ , and facial expressions $( 1 9 \% )$ , while other fields (such as syntactic markers) account for relatively lower proportions. The average editing time per sentence is $7 . 8 \pm 2 . 3$ seconds.
The internal consistency (Cronbach’s $\alpha$ ) of the system’s subjective scale reached 0.86, indicating high questionnaire reliability. Regression analysis shows that ”interpretability” and ”controllability” have significant predictive effects on trust level (adjusted $R ^ { 2 } ~ = ~ 0 . 5 6$ , $p ~ < ~ . 0 0 1 )$ . The controllability-trust correlation coefficient is Spearman’s $\rho = 0 . 6 3$ , $p < . 0 1$ , indicating a significant positive correlation between the two.
# 4.3. Explainability and cognitive transparency
To comprehensively evaluate the interpretability of AI systems and user understanding, we established three metrics: Explanation Satisfaction Score (ESS), Mental Model Accuracy (MMA), and Expected Calibration Error (ECE), which quantitatively reflect the system’s actual effectiveness in improving cognitive transparency. The specific results are shown in Table 3.
Table 1. Experimental participants and grouping information
Table 2. Quantitative evaluation of main indicators (Auto vs. Edit)
The data shows that in Edit mode, the average ESS score increased from 3.1 to 4.0, with significant growth in MMA as well. This indicates that structured JSON and visual explanation mechanisms helped users clearly understand the system’s reasoning and action generation process, greatly reducing the black box feeling of AI. Meanwhile, ECE decreased from $1 2 . 4 \%$ to $7 . 5 \%$ , meaning users’ confidence in the system’s output became more aligned with its actual performance, avoiding issues of over-trust or blind skepticism.
During interviews, many users reported, $^ { \mathrm { , , } } \mathrm { I }$ can clearly see how the AI makes decisions at each step,” and ”The animations correspond directly with parameters, making it easier for me to identify and fix issues.” Theoretically, structural equation modeling results also confirmed that explanation satisfaction promotes mental model accuracy, which in turn directly enhances user trust in the system (path coefficient $\beta = 0 . 4 1 , p < . 0 0 1 )$ .
Overall, Edit mode improves interpretability and cognitive transparency by allowing users to not only understand the system but also effectively articulate its workings. This significantly strengthens users’ sense of control and trust foundation in AI systems—one of the key values pursued by human-centered AI.
# 4.4. Fairness, inclusivity, and green sustainability
This section focuses on analyzing the system’s performance in group fairness and energy efficiency optimization. Experimental results are shown in Table 4:
In Edit mode, the score differences in key experience metrics (such as usability, comprehension, etc.) between gender and age groups significantly decreased. The Demographic Gap (gender) dropped from 0.42 to 0.18, and the Demographic Gap (age) decreased from 0.38 to 0.16, both showing over $5 0 \%$ improvement. These results indicate that structured intermediate layers and inclusive interface design can effectively reduce experience disparities among different user groups, enhancing overall system fairness. This trend was also confirmed in ANOVA analysis, with significantly reduced variance in subjective scores between groups in Edit mode $( p < 0 . 0 5 )$ .
Regarding energy efficiency, the average energy consumption per frame for on-device inference decreased from $0 . 2 4 J$ to $0 . 1 7 J$ , a $2 9 \%$ reduction. This improvement primarily stems from model pruning, quantization, and efficient inference optimization, enabling the system to maintain smooth interactions while better adapting to energy-constrained mobile or embedded scenarios. The data validates the synergistic advantages of human-centered AI design in improving both fairness and sustainability.
Table 4. Fairness and energy efficiency metrics (Auto vs. Edit)
Table 3. Quantitative evaluation of explainability and transparency
# 4.5. Human-machine co-creation experience and sense of autonomy
Table 5 shows key quantitative metrics of the system in human-AI co-creation experience and user autonomy (Cohen’s d for paired samples was computed as the mean difference divided by the pooled SD, with Hedges’ g correction applied for small-sample bias.):
In Edit mode, the mean Sense of Agency (SoA) score increased by $3 8 \%$ compared to Auto mode, with highly statistically significant improvement $( t = 6 . 0 , p < . 0 0 1 , d = 1 . 2 )$ , demonstrating that the co-creation mechanism effectively enhances users’ control over the interaction process. Error recovery latency decreased from 5.4 seconds to 2.9 seconds ( $4 6 \%$ reduction), indicating that structured editing interfaces significantly improve operational efficiency and reduce correction burdens caused by AI-generated errors. The learning curve slope $( \beta )$ showed positive growth in Edit mode, with subjective scores increasing by approximately 0.11 per completed task, and statistical testing $( t = 3 . 9 , p < . 0 0 1 \$ ) revealed significantly reduced learning costs. The overall Co-Creation Utility (CCU) reached $1 9 \%$ , further quantifying the actual efficiency gains from human-AI collaboration.
Integrating these findings, Edit mode not only significantly enhanced user control and collaboration efficiency but also demonstrated outstanding performance in lowering onboarding barriers and operational burdens. All key metrics showed statistically significant improvements, validating the theoretical hypothesis of human-centered AI empowering co-creation and autonomy.
# 4.6. Emotional resonance and multimodal expression
This section focuses on the performance of AI systems in multi-channel (hand and non-hand) sign language generation, with particular emphasis on quantitative improvements in facial expressions and emotional representation. We employ AU consistency rate (agreement between automated actions and manually annotated action units) and subjective emotional resonance scores (Likert 1-5) as evaluation metrics, with results shown in Table 6.
In Edit mode, the AU (Action Unit) consistency rate increased by 10 percentage points, indicating that the system’s generation of facial expressions and other non-hand markers aligns more closely with manual reference results. Additionally, the subjective emotional resonance score rose from 3.3 to 4.0, a $2 1 \%$ improvement, demonstrating that users perceive animations in Edit mode as more accurately conveying emotions and tone, with significantly enhanced naturalness and warmth in communication.
Correlation analysis further revealed a moderate positive relationship between AU consistency and subjective emotional scores $( r = 0 . 5 6 , p < 0 . 0 1 )$ ), suggesting that the accuracy of multimodal facial and body generation directly impacts users’ recognition of the system’s expressive richness. In expert annotations, Edit mode also showed marked superiority over Auto mode in the completeness and subtlety of non-hand signals when conveying emotional nuances and professional contexts. These results collectively demonstrate that improvements in multimodal generation not only enrich the expressiveness of AI sign language animations but also provide robust data-driven support for the inclusivity and effectiveness of accessible interaction systems.
In summary, Edit mode enhances the coordination of multichannel generation and emotional expressiveness, enabling the system to better meet the dual demands of ”content $+$ emotion” in real-world communication scenarios. This progress underscores the practical value of human-centered AI in multimodal interaction and social communication.
Table 5. Quantitative evaluation of human–machine co-creation metrics
Table 6. Evaluation of multimodal expressiveness metrics
Table 7. Expert annotation results
# 4.7. Qualitative evaluation and expert feedback
The qualitative section employed a multi-method approach combining semi-structured interviews, expert annotations, and open-text feedback to conduct thematic analysis (NVivo coding) of users’ authentic experiences and pain points. Key themes emerged:
• T1. Control and Trust Enhancement: ”The editor lets me dictate animation details—the system feels more like an assistant.”
• T2. Low Learning Curve: ”Five minutes to get started, intuitive interface, seamless edits.”
• T3. Feature Refinement Requests: ”Need richer facial expressions; better predictive text for technical terms.”
• T4. Emotional Journey: ”Satisfaction grows during the preview-edit-repreview cycle, but frustration arises when auto-mode makes errors.”
Expert annotation results are shown in Table 7.
Expert annotation results, as summarized in Table 7, quantitatively confirm these qualitative insights: Edit mode demonstrated a $1 2 \%$ increase in term accuracy, a $1 6 \%$ reduction in non-hand signal omissions, and a 0.9-point gain in expert consensus score. These improvements reflect both the enhanced semantic precision and the expanded non-manual expressiveness achieved via structured editing and participatory feedback mechanisms.
Both users and experts emphasized the seamless integration between the JSON editor and animation preview, which facilitated rapid correction and iterative refinement. Future development should focus on further expanding the range of facial action units (AUs) and implementing intelligent auto-completion for specialized vocabulary, as highlighted in both open feedback and expert suggestions.
In summary, the qualitative and expert findings corroborate the quantitative trends reported in Sections 4.2–4.6, underscoring the value of human-in-the-loop, explainable, and co-creative system design for accessible and expressive sign language animation.
# 5. Discussion
# 5.1. Key Findings and Implications
Our experimental results robustly validate the practical value of a human-centered approach in speech-to-sign-language generation. The integration of a structured JSON intermediate representation and interactive editor yields significant gains in comprehension, naturalness, and usability (SUS), while also enhancing interpretability, controllability, and user trust. Edit mode empowers users to promptly correct errors, tailor outputs to personal linguistic habits, and maintain smooth communication—all with minimal additional cognitive load, as evidenced by NASA-TLX scores.
Statistical analysis further shows that controllability and interpretability are strong predictors of trust, highlighting the importance of user agency in AI-assisted communication. Qualitative feedback and expert annotations underscore these findings, confirming that participatory workflows not only reduce translation errors and omissions of non-manual information, but also foster inclusivity and professional reliability. The combined quantitative and qualitative evidence establishes a robust paradigm for future accessible, explainable, and user-adaptive sign language AI systems.
# 5.2. Limitations
Despite these advances, several limitations remain:
• Nuanced Expression: Current models capture only core actions and primary facial expressions, with limited support for subtle emotions, spatial rhetoric, and personalized sign styles.
• Non-Manual Coverage: Automated generation does not yet include full-body non-manual signals such as shoulder movement, body posture, and gaze, limiting expressiveness for complex semantics and grammar.
• Editor Extensibility: The editor currently supports only basic fields; fine-grained editing for parameters such as intensity, orientation, and speed is not yet implemented.
• Sample Diversity: User studies, while diverse in gender and age, remain limited in scale and regional coverage, with further work needed for international and dialectal adaptation.
• Edge Device Adaptability: Latency and stability have not been fully validated on low-end devices and in poor network environments.
# 5.3. Future Directions
Future work should address these limitations by:
• Diversifying Action Generation: Incorporate style transfer, emotion tagging, and diversity sampling to enable richer, more expressive sign animation for literature, performance, and multicultural contexts. • Advancing Multimodal Data and Annotation: Expand datasets to cover full-body, micro-expressions, and eye gaze; refine hierarchical annotation systems for better non-manual signal learning. • Intelligent and Personalized Editing: Develop adaptive editor features—such as auto-completion, grammar correction, and style archiving—for personalized, accessible, and inclusive interaction. • Real-World and Community Deployment: Collaborate with schools, organizations, and enterprises for long-term field studies, real-world deployment, and continuous learning.
• Mobile Optimization: Research model compression, elastic architectures, and edge-cloud solutions to ensure low-latency and high reliability on mobile and resource-constrained devices.
# 5.4. Outlook
This research demonstrates both the theoretical value and real-world impact of human-centered AI in sign language generation. Ongoing efforts will focus on bridging design and technology, deepening collaboration with the deaf community and practitioners, and promoting the evolution of AI sign language tools from research prototypes to universally accessible infrastructure, advancing the cause of barrier-free communication and information equity.
# References
Amershi, S., Cakmak, M., Knox, W. B., and Kulesza, T. Power to the people: The role of humans in interactive machine learning. AI magazine, 35(4):105–120, 2014.
Bishop, C. Mixture density networks. Workingpaper, Aston University, 1994.
Cui, Z., Chen, Z., Li, Z., and Wang, Z. Spatial–temporal graph transformer with sign mesh regression for skinnedbased sign language production. IEEE Access, 10: 127530–127539, 2022.
Damdoo, R. and Kumar, P. Signedgelvm transformer model for enhanced sign language translation on edge devices. Discover Computing, 28(1):15, 2025.
Design Council. A study of the design process – the double diamond. Retrieved from http://www. designcouncil.org.uk/sites/default/ files/asset/document/ElevenLessons_ Design_Council%20(2).pdf, 2005.
Dimou, A.-L., Papavassiliou, V., Goulas, T., Vasilaki, K., Vacalopoulou, A., Fotinea, S.-E., and Efthimiou, E. What about synthetic signing? a methodology for signer involvement in the development of avatar technology with generative capacity. Frontiers in Communication, 7: 798644, 2022.
Dong, L., Chaudhary, L., Xu, F., Wang, X., Lary, M., and Nwogu, I. Signavatar: Sign language 3d motion reconstruction and generation, 2024a.
Dong, L., Wang, X., and Nwogu, I. Word-conditioned 3D American Sign Language motion generation. In AlOnaizan, Y., Bansal, M., and Chen, Y.-N. (eds.), Findings of the Association for Computational Linguistics: EMNLP 2024, pp. 9993–9999, Miami, Florida, USA, November 2024b. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.584.
Fails, J. A. and Olsen Jr, D. R. Interactive machine learning. In Proceedings of the 8th international conference on Intelligent user interfaces, pp. 39–45, 2003.
Gan, S., Yin, Y., Jiang, Z., Xie, L., and Lu, S. Towards realtime sign language recognition and translation on edge devices. In Proceedings of the 31st ACM International Conference on Multimedia, pp. 4502–4512, 2023.
Gulati, A., Qin, J., Chiu, C.-C., Parmar, N., Zhang, Y., Yu, J., Han, W., Wang, S., Zhang, Z., Wu, Y., et al. Conformer: Convolution-augmented transformer for speech recognition. arXiv preprint arXiv:2005.08100, 2020.
Hecker, C., Raabe, B., Enslow, R. W., DeWeese, J., Maynard, J., and Van Prooijen, K. Real-time motion retargeting to highly varied user-created morphologies. ACM Transactions on Graphics (TOG), 27(3):1–11, 2008.
Hwang, E. J., Lee, H., and Park, J. C. A gloss-free sign language production with discrete representation. In 2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition (FG), pp. 1–6. IEEE, 2024.
IDEO.org. The Field Guide to Human-Centered Design. IDEO.org / Design Kit, 1st edition, 2015. Retrieved from https://www.designkit.org/resources/1.
Koller, O., Forster, J., and Ney, H. Continuous sign language recognition: Towards large vocabulary statistical recognition systems handling multiple signers. Computer Vision and Image Understanding, 141:108–125, 2015.
Kothadiya, D. R., Bhatt, C. M., Rehman, A., Alamri, F. S., and Saba, T. Signexplainer: an explainable ai-enabled framework for sign language recognition with ensemble learning. IEEE Access, 11:47410–47419, 2023.
Li, D., Rodriguez, C., Yu, X., and Li, H. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 1459–1469, 2020.
Liang, W. and Xu, X. Sindiff: Spoken-to-sign language generation based transformer diffusion model. Available at SSRN 4611530, 2023.
Lin, K., Wang, X., Zhu, L., Sun, K., Zhang, B., and Yang, Y. Gloss-free end-to-end sign language translation. arXiv preprint arXiv:2305.12876, 2023.
Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Doll´ar, P. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, 2017.
Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A., Tzionas, D., and Black, M. J. Expressive body capture: 3d hands, face, and body from a single image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10975–10985, 2019.
Saunders, B., Camgoz, N. C., and Bowden, R. Adversarial training for multi-channel sign language production. arXiv preprint arXiv:2008.12405, 2020a.
Saunders, B., Camgoz, N. C., and Bowden, R. Progressive transformers for end-to-end sign language production. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XI 16, pp. 687–705. Springer, 2020b.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Shi, T., Hu, L., Shang, F., Feng, J., Liu, P., and Feng, W. Pose-guided fine-grained sign language video generation. In European Conference on Computer Vision, pp. 392– 409. Springer, 2024.
Shneiderman, B. Human-centered AI. Oxford University Press, 2022.
Tan, S., Khan, N., An, Z., Ando, Y., Kawakami, R., and Nakadai, K. A review of deep learning-based approaches to sign language processing. Advanced Robotics, 38(23): 1649–1667, 2024.
Tavella, F., Schlegel, V., Romeo, M., Galata, A., and Cangelosi, A. Wlasl-lex: a dataset for recognising phonological properties in american sign language. arXiv preprint arXiv:2203.06096, 2022.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. Advances in neural information processing systems, 30, 2017.
WHO. World report on hearing. World Health Organization, 2021.
Xie, P., Peng, T., Du, Y., and Zhang, Q. Sign language production with latent motion transformer. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3024–3034, 2024.
Yin, A., Zhong, T., Tang, L., Jin, W., Jin, T., and Zhao, Z. Gloss attention for gloss-free sign language translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2551–2562, 2023.
# A. Evaluation Questionnaire Forms
# A.1. Instructions
After each task (Auto or Edit), please rate your agreement with each statement on a 5-point Likert scale: $1 = { \cal S }$ trongly Disagree $2 =$ Disagree $3 =$ Neutral $4 =$ Agree $5 =$ Strongly Agree
# A.2. Comprehensibility (C1–C4)
# A.3. Naturalness (C5–C8)
# A.4. Explainability & Controllability (C19–C26)
# A.5. Trust & Satisfaction (C27–C30)
# A.6. System Usability Scale (SUS)
Please indicate your agreement (1–5) with the following:
Scoring: Convert items 1,3,5,7,9 to 0–4 by subtracting 1; reverse-score items 2,4,6,8,10; sum and multiply by 2.5 to yield 0–100.
# A.7. NASA-TLX
The NASA-Task Load Index (TLX) assesses perceived workload on six dimensions using a two-step process: (1) weight derivation via pairwise comparisons; (2) workload rating. Compute overall workload as the weighted average of dimension ratings.
Step 1: Weight Derivation Rank the relative importance of the six dimensions by indicating, for each pair, which dimension contributed more to your workload. Record your choices in the pairwise matrix below.
From this matrix, compute each dimension’s weight $W _ { i }$ as the number of times it was chosen (range 0–5), then normalize:
$$
\tilde { W } _ { i } = \frac { W _ { i } } { \sum _ { j = 1 } ^ { 6 } W _ { j } } ,
$$
where $\sum { \tilde { W _ { i } } } = 1$ .
Step 2: Workload Rating Rate each dimension on a 0–100 scale:
Overall Workload Score Compute the weighted workload:
$$
\mathrm { N A S A \mathrm { - } T L X } = \sum _ { i = 1 } ^ { 6 } { \tilde { W } } _ { i } \times R _ { i } ,
$$
where $R _ { i }$ is the 0–100 rating for dimension $i$ .
This detailed procedure yields a scalar workload index reflecting both the importance and the perceived level of each demand dimension.“
# B. Interview Guide & Coding Scheme
# B.1. Semi-Structured Interview Protocol
Participants were asked the following core questions after completing each condition. Probes (in italics) were used to elicit depth.
1. Overall Experience “How would you describe your overall experience with the system today?” Probe: Which part felt most intuitive or most challenging?
2. Control and Trust “Can you tell me about a moment when you felt in control of the animation?” Probe: Did any aspect make you doubt the system’s reliability?
3. Learning Curve “How quickly did you learn to perform edits?” Probe: Which features took longer to grasp, if any?
4. Error Handling “Describe how you fixed any mistakes made by the system.” Probe: How easy was it to identify and correct an error?
5. Emotional Response “How did the system’s animations affect your emotional engagement?” Probe: Did you feel more satisfied watching Edit mode vs. Auto mode?
6. Interface Feedback “What suggestions do you have for improving the editor or preview?” Probe: Are there any controls you wish were available?
# B.2. NVivo Codebook
# B.3. Coding Procedure
1. Familiarization: Transcribe audio recordings verbatim and read through all transcripts.
2. Open Coding: Assign initial codes line-by-line in NVivo, allowing new themes to emerge.
3. Axial Coding: Group related open codes under the four pre-defined themes (T1–T4).
4. Selective Coding: Refine themes by merging or splitting codes to maximize internal homogeneity and external heterogeneity.
5. Inter-Rater Reliability: A second coder independently coded $20 \%$ of transcripts; Cohen’s $\kappa = 0 . 8 2$ .
# C. Experimental Design and Randomization
This appendix details the Latin-square task ordering, counterbalancing scheme, and laboratory setup used to control for order and carryover effects.
# C.1. Latin Square Task Ordering
We employed a $4 { \times } 4$ Latin square to balance the order of four task types across participants. Each row represents one of four participant groups.
| Group | Task 1 | Task 2 | Task 3 | Task 4 | + + | G1 | Greeting | Instruction| Terminology| Emotion | | G2 | Instruction| Terminology| Emotion | Greeting | | G3 | Terminology| Emotion | Greeting Instruction| | G4 | Emotion | Greeting | Instruction| Terminology| +
Figure 4. Latin-square assignment of the four dialogue tasks (Greeting, Instruction, Terminology, Emotion) across four groups (G1–G4)
# C.2. Counterbalancing Scheme
Participants $\scriptstyle ( \mathrm { N } = 2 5$ ) were randomly assigned to one of the four Latin-square groups. The following flowchart illustrates the randomization process:
[ All Participants ( $N = 2 5$ ) ] Random Shuffle / | \ \ G1 (6) G2 (6) G3 (6) G4 (7) | | | | Sequence1 Sequence2 Sequence3 Sequence4
Figure 5. Random assignment of participants into four groups (G1–G4) with approximately equal group sizes, each following a distinc task order (Sequences 1–4).
# C.3. Experimental Environment Setup
All sessions were conducted in a quiet interview room. The participant sat at a desk facing a 24-inch monitor $1 9 2 0 { \times } 1 0 8 0$ px, $6 0 ~ \mathrm { H z }$ ) displaying the Unity 2023.3 animation preview. Directly beneath the monitor sat the experimental desktop (Windows 11, Intel i7-13700k CPU, 16 GB RAM, Nvidia RTX 4070), which ran both Unity and OBS Studio to capture synchronized screen, audio, and webcam video.
A Logitech C920 webcam $( 1 0 8 0 \ p \ \textcircled { a } \ 3 0 \$ fps) was mounted on a tripod $0 . 5 \mathrm { { m } }$ above the top edge of the monitor, angled downward at $3 0 ^ { \circ }$ to capture the participant’s upper body and hands. All video and audio streams were recorded at 30 fps via OBS with lossless compression.
To prevent visual cues, a $3 0 \mathrm { c m }$ high opaque divider was placed between the participant and the experimenter’s workstation. Ambient lighting was kept constant at $3 0 0 \mathrm { l u x }$ , and background noise was below 50 dB to ensure consistent recording quality.
# D. Energy Consumption and Performance Measurement
This appendix describes the measurement equipment, methods for synchronizing power and frame events, and the mobile/embedded deployment configurations used in our energy and performance evaluation.
# D.1. Measurement Equipment and Methodology
• Power Meter: Monsoon Power Monitor v3 – Accuracy: $\pm 0 . 5 \%$
– Voltage range: $0 { - } 5 \mathrm { v }$ DC – Sampling rate: $5 \mathrm { k H z }$ ( $2 0 0 ~ \mu \mathrm { s }$ resolution) – Connection: inline to the device’s $5 \mathrm { V }$ supply line
# • Logic Analyzer for Frame Sync: Saleae Logic Pro 16
– Sample rate: 24 MHz
– Channels: $^ *$ Channel 1: TTL “frame start” pulse generated by Unity via GPIO $^ *$ Channel 2: optional “inference start” marker
– Used to align power trace with frame boundaries.
# • Data Capture Workflow:
1. Start Monsoon trace and Logic capture simultaneously.
2. Launch inference script; Unity emits a GPIO pulse at each frame presentation.
3. Stop capture after 1000 frames to ensure statistical significance.
4. Post-process: parse TTL pulses to segment per-frame energy $E _ { i }$ , compute average and standard deviation.
# D.2. Mobile and Embedded Deployment Configurations
Budget and Platform Choices All hardware was procured under a limited research budget ( $\$ 200$ USD per platform). We selected commodity devices with community support.
# • Smartphone (Mobile):
– Model: Google Pixel 7 (Snapdragon 8 Gen 2)
– OS: Android 13
– Framework: TensorFlow Lite with NNAPI acceleration
– Pruning: $30 \%$ filter-level magnitude pruning applied in PyTorch prior to conversion
– Quantization: Post-training dynamic range quantization to INT8
– Measurement: Monsoon inline at USB Type-C power, sampling at $5 \mathrm { k H z }$
# • Embedded (Edge):
– Board: Raspberry Pi 4 Model B (4 GB RAM)
– OS: Raspberry Pi OS (64-bit)
– Framework: TensorFlow Lite with Edge TPU (Coral USB Accelerator)
– Pruning: $2 5 \%$ structured channel pruning (TensorFlow Model Optimization Toolkit)
– Quantization: Full integer quantization (weights $^ +$ activations to INT8)
– TPU Config: Edge TPU compiler v16.0, batch $\mathrm { s i z e } = 1$
– Measurement: INA260 I²C power sensor (Adafruit breakout) at $2 \mathrm { k H z }$ sampling, logged on Pi
# D.3. Performance Metrics and Analysis
• Per-Frame Energy:
$$
E _ { \mathrm { f r a m e } } = \frac { 1 } { N } \sum _ { i = 1 } ^ { N } V _ { i } \times I _ { i } \times \Delta t ,
$$
where $V _ { i }$ , $I _ { i }$ are instantaneous voltage/current samples during frame $i$ , $\Delta t = 2 0 0 \mu \mathrm { s }$ .
• Inference Latency: – Measured from “inference start” TTL to “frame start” TTL – Reported as mean $\pm \thinspace \mathrm { S D }$ over 1,000 frames
# • CPU/GPU Utilization (Mobile):
– Sampled via Android’s adb shell top at $1 0 0 ~ \mathrm { { m s } }$ intervals – Correlated with power trace to attribute energy to compute load | This paper presents a human-centered, real-time, user-adaptive speech-to-sign
language animation system that integrates Transformer-based motion generation
with a transparent, user-editable JSON intermediate layer. The framework
overcomes key limitations in prior sign language technologies by enabling
direct user inspection and modification of sign segments, thus enhancing
naturalness, expressiveness, and user agency. Leveraging a streaming Conformer
encoder and autoregressive Transformer-MDN decoder, the system synchronizes
spoken input into upper-body and facial motion for 3D avatar rendering. Edits
and user ratings feed into a human-in-the-loop optimization loop for continuous
improvement. Experiments with 20 deaf signers and 5 interpreters show that the
editable interface and participatory feedback significantly improve
comprehension, naturalness, usability, and trust, while lowering cognitive
load. With sub-20 ms per-frame inference on standard hardware, the system is
ready for real-time communication and education. This work illustrates how
technical and participatory innovation together enable accessible, explainable,
and user-adaptive AI for sign language technology. | [
"cs.HC",
"cs.AI"
] |
# 1 Introduction
In recent times, end-to-end (E2E) ASR models have started taking the main stage in industrial usecases (Povey et al., 2016). Recurrent neural networks (RNNs) are crucial as they can model the temporal dependencies in audio sequences effectively (Chiu et al., 2018; Rao et al., 2017; Sainath et al., 2020). The transformer architecture with self-attention has gained substantial attention in ASR to capture long distance global context and show high training efficiency (Zhang et al., 2020b;
Vaswani et al., 2017; Hsu et al., 2021; Chen et al., 2022). Alternatively, ASR based on convolutional neural networks (CNNs) has also been successful due to its ability to exploit local information (Li et al., 2019; Han et al., 2020a; Abdel-Hamid et al., 2014). Recently, the conformer ASR model (Gulati et al., 2020) was proposed for combining the advantages of CNN and transformer models, to extract both local and global information from a speech sequence (Han et al., 2020b; Shi et al., 2021; Kim et al., 2022; Yao et al., 2023). Zipformer (Yao et al., 2023) is an extension of the previous conformer models, providing a transformer that is faster, more memory-efficient, and betterperforming.
Latency-accuracy is a critical trade-off for an ASR model, especially for streaming ASR models. In systems with concurrent call processing, it becomes critical to find the optimal operating point in the latency-concurrency-accuracy trio. Streaming decoders work on chunk-based processing, where, for each frame the encoder has access to, the entire left-context and a variable right-context depending on the frame’s position in a chunk are used.
Right context has a significant role in the context of a unified model in streaming and in offline production environment. Typically, the WER of the offline model is significantly lower compared to that of a streaming model. Therefore, separate models are generally trained for offline and streaming use-cases. This requires twice the compute resource to train the models and additional resource to maintain and update the models. Adding rightcontext helps bridge the gap in WER between offline and streaming models with a small degradation in latency in the streaming case.
In Swietojanski et al. (2023), authors use variable attention masking in a transformer transducer setting, however the influence of different numbers of right-context frames is not explored and the work instead focuses on using right-context ranging from multiple chunks to full context, which may not be possible for a streaming setup. Li et al. (2023) propose a dynamic chunk-based convolution, where the core idea is to restrict the convolution at chunk boundaries so that it does not have access to any future context and resembles the inference scenario. Our approach, by contrast, uses limited additional right-context frames beyond chunk boundaries. Our proposed method is also different from that of Tripathi et al. (2020), where initial layers are trained with zero right context and the final few layers are trained with variable context. If we wanted a streaming model with different latency during inference, the model would need to be retrained. Zhang et al. (2020a) use dynamic chunk sizes for different batches in training and the attention scope varies from left-context only to full context. The authors in Wu et al. (2021) further enhance their strategy by employing bidirectional decoders in both forward and backward direction of the labeling sequence. In both passes, they use either full right-context or full left-context attention masking, which may adversely impact the real-time streaming use-case.
Our work is significantly different from the aforementioned approaches in terms of training with variable right-context while decoding with extra right-context frames in addition to the chunk being decoded in the inference phase. We propose to unify streaming and non-streaming zipformerbased ASR models by leveraging future context. The conventional zipformer model uses chunked attention masking and utilizes only left-context while we use a variable number of right-context frames for different mini-batches during training, providing the flexibility to select a desired number of right-context frames during inference, according to the desired accuracy-latency tradeoff. We study the effect of choosing different amounts of right context on latency and accuracy, finding that as the number of decoding right-context frames increases, the streaming zipformer ASR model can approach the performance of the corresponding non-streaming model without significantly degrading latency. We evaluate our method on both opensource read speech and industry-scale productionspecific conversational speech data.
# 2 Right-context in Zipformer
Here we review the zipformer model and the attention masking employed to incorporate right
Figure 1: Zipformer encoder architecture showing each layer at different frame rates (left) and different modules in each encoder layer (right).
context information (Gulati et al., 2020; Yao et al., 2023).
# 2.1 Zipformer model
The zipformer model is a significant advancement in transformer-based ASR encoding, offering superior speed, memory efficiency, and performance compared to conventional conformer models. A conformer model adds a convolution module to a transformer to add both local and global dependencies. In contrast to the fixed frame rate of $2 5 \mathrm { { H z } }$ used by conformers, the zipformer employs a UNet-like structure, enabling it to learn temporal representations at multiple resolutions in a more streamlined manner.
In the zipformer encoder architecture, we have six encoder blocks, each at different sampling rates learning temporal representation at different resolutions in a more efficient way. Specifically, given the acoustic features with frame rate of 100 Hz, a convolution based module reduces it first to $5 0 ~ \mathrm { H z }$ , followed by the six cascaded stacks to learn temporal representation at frame rates of 50Hz, 25Hz, $1 2 . 5 \mathrm { H z }$ , $6 . 2 5 \mathrm { H z }$ , $1 2 . 5 \mathrm { H z }$ , and $2 5 \mathrm { { H z } }$ , respectively as shown on the left side of Figure 1. The middle block operates at $6 . 2 5 \ : \mathrm { H z }$ undergoing stronger downsampling, thus facilitating more efficient training by reducing the number of frames to process. The frame rate between each block is consistently $5 0 ~ \mathrm { H z }$ . Different stacks have different embedding dimensions, and the middle stacks have larger dimensions. The output of each stack is truncated or padded with zeros to match the dimension of the next stack. The final encoder output dimension is set to the maximum of all stacks’ dimensions.
The inner structure of each encoder block is shown in the right side of Figure 1. The primary motivation is to reuse attention weights to improve efficiency in both time and memory. The block input is first processed by a multi-head attention module, which computes the attention weights These weights are then shared across a non-linear attention module and two self-attention modules. Meanwhile, the block input is also fed into a feedforward module followed by the non-linear attention module.
# 2.2 Attention masking
The multi-head self-attention facilitates finegrained control over neighboring information at each time step. At each time t, Zipf ormer $( x , t )$ may be derived from an arbitrary subset of features in x, as defined by the masking strategy implemented in the self-attention layers (Vaswani et al., 2017). Given the attention input $\begin{array} { r l } { Y } & { { } = } \end{array}$ $( y _ { 1 } , \dots , y _ { L _ { y } } ) , y _ { t } \in \mathbb { R }$ self-attention computes
$$
Q = \mathcal { F } ^ { q } ( Y ) , K = \mathcal { F } ^ { v } ( Y ) , V = \mathcal { F } ^ { v } ( Y ) ,
$$
$$
A t t ( Q , K , V ) = s o f t m a x \left( \frac { \mathcal { M } ( Q K ^ { T } ) } { \sqrt { d } } \right) V ^ { T } ,
$$
where, $d$ is the attention dimension, $\mathcal { M }$ is the attention mask with values 0 and 1 of dimension $L _ { y } \times L _ { k }$ . The attention mask in the Equation 2 regulates the allowance of number of left and rightcontext frames corresponding to each frame of $Y$ .
# 2.3 Right-context attention masking
The attention masks constrain the receptive field in each layer without the need for physically segmenting the input sequence. In a streaming ASR setup, to mitigate computational costs and latency, the processing occurs at the chunk level rather than at the frame level. A specific number of frames are grouped into chunks, and each chunk is then encoded as a batch. Following Shi et al. (2021); Chen et al. (2021), we use chunked attention masking to confine the receptive field during self-attention computation. In conventional chunked attention masking, each frame within a chunk is exposed to varying extents of left- and right-context frames. The initial frames in a chunk have access to some right-context frames, while the later frames have no access to right-context frames, enforcing a causal constraint at chunk boundaries.
The conformer and zipformer ASR recipes in $k 2$ -fsa icefall1 (Gulati et al., 2020; Shi et al., 2021) deploy chunked attention masking and use only left-context as shown in Figure 2(a). For streaming decoders, each frame in the encoder accesses left-context and variable right-context depending on the frame’s position in a chunk.
However, the right-context information is very relevant to learn the acoustic-linguistic attributes of a chunk. Utilizing a modest right and left context may yield improved performance in terms of WER and latency when compared to solely relying on an extensive left-context. Incorporating right-context will thus help to narrow the gap in WER between streaming and non-streaming models. Furthermore, due to the varying temporal resolutions of each layer within the zipformer encoder block, the utilization of right-context frames becomes more efficient. In this work, we deploy chunked masking with right-context as shown in Figure 2(b), where the extent of right-context and left-context can be varied based on requirements. We note that the right-context frames are the frames beyond the chunk boundaries, not within the chunks.
Figure 2: Attention masking in zipformer; (a) chunked masking with left-context and no right-context, (b) chunked masking with both left-context and rightcontext.
# 3 Experiments
Below we discuss the database used and experiments conducted to demonstrate the effectiveness of right-context in unified streaming and nonstreaming ASR models.
# 3.1 Dataset
We conduct experiments using two data setups, using Librispeech and large in-house conversational data. In the Librispeech setup, we use the standard 960 hours of training data, as well as testclean $( 5 . 4 0 \ \mathrm { h r s } )$ and test-other $( 5 . 1 0 \mathrm { h r } )$ partitions for testing. Using the Librispeech setup we train a conventional conformer transducer streaming model and a baseline zipformer streaming model without any right-context during training. Using this setup, we also train a zipformer streaming model with proposed right-context strategy and a non-streaming model.
Using the large in-house conversational setup, we train zipformer models without right-context, with right-context and the non-streaming variant. The in-house training data is derived by combining different open source databases along with inhouse conversational and simulated conversational telephonic datasets as shown in Table 1. In total we use 12,468 hours of training data. The training data also includes a synthesized corpus generated using a text-to-speech model. We employ diverse in-house test datasets listed in Table 1 that comprise different domains and accents. The DefinedAI en-in, en-ph, en-au and en-gb subsets correspond to Indian, Filipino, Australian and UKaccented English, respectively. To evaluate the latency and inference time in the server-client setup, we use long conversations as test data to mimic the production use-cases.
Table 1: Duration and domain information for different training and test sets used in the experiments.
# 3.2 Experimental setup
To assess the effectiveness of the proposed approach to unify streaming and non-streaming ASR models, we setup our experiments using Librispeech and large in-house conversational dataset. For both the setups, we evaluate different baseline and right-context models using Icefall’s simulated streaming decoding approach. We further evaluate the large in-house ASR models in server-client production setup.
# 3.2.1 Librispeech models
Using the Librispeech setup, we initially train a baseline conformer transducer streaming model (Kuang et al., 2022) (ConformerBaseline) without any right-context. Further, we train two zipformer streaming models: the baseline model (LibriBaseline), the right-context model (LibriRC-0-64-128-256). Additionally, a non-streaming model (LibriNS) is trained using this setup.
# 3.2.2 Large-data conversational models
Utilizing the large in-house conversational English data, we showcase the efficacy of the proposed approach in a more challenging conversational environment with different test cases comprising different domains and accents. Using this data, We train two streaming zipformer models: $\mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ , and $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ , and a nonstreaming model LargeNS.
# 3.2.3 Training setup
All experiments described above (except ConformerBaseline model) adhere to the standard zipformer recipe2 within the Icefall toolkit. The conformer model (ConformerBaseline) is trained using the pruned_transducer_stateless4 recipe in Icefall toolkit. We use the zipformer-medium setup for Librispeech model and zipformer-large for the large in-house models (Yao et al., 2023). The base learning rate is 0.045 for the Librispeech setup, and 0.05 for the large in-house model training. Additionally, the chunk-size varies among the values [16, 32, 64] frames during training, where, each frame corresponds to $1 0 ~ \mathrm { m s }$ in both training and decoding. Based on our experiments on a a small-data setup, we use varying numbers of right-context frames by randomly choosing from the set $\{ 0 , 6 4 , 1 2 8 , 2 5 6 \}$ for each batch during training. All models undergo training for up to 30 epochs, using eight NVIDIA V100 GPUs.
Evaluation is conducted using 128 left-context frames, a chunk size of 32 frames, 30 epochs with an averaging over 6. We evaluate different baseline and right-context models using Icefall’s simulated streaming decoding approach for both Librispeech and Large in-house setups. We also demonstrate performance in server-client setup for the in-house models.
# 3.2.4 Server-client-based evaluation
To demonstrate the performance of the proposed unified ASR training approach, we evaluate the inhouse models $( \mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ , LargeRC-0-64-128-256, $\mathrm { L a r g e } _ { \mathrm { N S } } .$ ) in server-client setup. We use Sherpa websocket server for real-time streaming3. The ASR model is loaded on a cpp-based websocket server, which listens to a specific port on a server machine. A Python client is used to create multiple and simultaneous websocket connections to the server to support concurrent processing. The client streams audio chunks of $5 0 0 \mathrm { m s }$ in real time. When an endpoint is reached in the audio, the transcripts are sent back to the client. “Final-chunk latency” is the metric used to measure the latency of the ASR output: latency is measured in the client as the time from when the last chunk is streamed to the server to the time when the final transcript is received back in the client. The server used in this experiment is a $\mathrm { g } 5 . 2$ xlarge AWS instance, which has 1 Nvidia A10G GPU, 8 vPUs and 32GB RAM.
# 3.3 Evaluation metrics
We use word error rate (WER) as the performance metric for recognition accuracy. Final-chunk latency as described above is evaluated in the clientserver setting and simply referred to as latency here. Another measure to analyze the inference time is inverse real time factor (RTFX). RTFX is calculated as, RTFX $\mathbf { \Sigma } = \mathbf { \Sigma }$ duirnafteiroen coef tiesmtes . Higher RTFX corresponds to less inference time. As in production environment, we process multiple calls at the same time, we analyse the latency and RTFX over different concurrency values. Concurrency can be defined as the number of concurrent calls being sent from the client to the server at a given point in time.
We measure latency only for streaming ASR and RTFX for both streaming and non-streaming
Figure 3: Comparison of conventional conformer (ConformerBaseline) and zipformer (MediumBaseline) models in terms of $\mathrm { W E R } ( \% )$ with different number of right-context frames during inference.
ASR models. Non-streaming models do not support concurrency in our setup, as they process a conversation by splitting it into smaller segments.
# 4 Results
# 4.1 Librispeech setup
In Figure 3, we compare the WER $( \% )$ of the ConformerBaseline model with that of the zipformer-based MediumBaseline model for Librispeech test-clean and test-other testsets. We note that these two models are not trained with right-context. Figure 3, illustrates that during inference, increasing the number of right-context frames leads to WER improvement for both models. However, the zipformer-based model shows more pronounced improvement in WER compared to the conformer model. The enhanced performance is due to the varying frame rates across different encoder blocks in the zipformer architecture, making it a superior choice for a unified ASR model.
In Table 2, we show the $\mathrm { W E R } ( \% )$ for LibriBaseline and LibriRC-0-64-128-256 models for different numbers of right-context frames during decoding. A noteworthy observation is the improvement in WER of the LibriBaseline model, which decreases from $3 . 3 3 \%$ to $2 . 8 3 \%$ as the number of decoding right-context frames increases from 0 to 256, despite this model not being trained with right-context. In the baseline model, although we do not explicitly impose right-context frames, the initial frames of a chunk see the entire chunk length as right-context, whereas the later frames do not have access to any right-context. The LibriRC-0-64-128-256 model achieves WERs of $2 . 4 3 \%$ in test-clean and $6 . 5 5 \%$ in test-other, compared to the baseline model’s respective WERs of $3 . 3 3 \%$ and $8 . 9 0 \%$ , bringing it closer to the non-streaming model’s (LibriNS) performance, as shown in Table 2. Across all models, increasing the number of decoding right-context frames consistently contributes to obtaining a viable unified model for both streaming and non-streaming applications.
Table 2: $\mathrm { W E R } ( \% )$ of the models trained on 960 hours of Librispeech data, including LibriBaseline, LibriRC-0-64-128-256 and non-streaming model.
# 4.2 Large in-house conversational setup
In Table 3, we depict the WER values of the $\mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ and $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ models with the number of right-context frames in decoding varying from 0 to 256. We can observe that the WER of $\mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ improves as we increase the number of right-context frames in decoding, although the model is not trained with right-context. However, the right-context training strategy presented in this paper helps to further improve the performance of the $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ model across all testsets. Notably, with 64 right-context frames during decoding, the average WER improves to $8 . 3 1 \%$ compared to $1 0 . 3 4 \%$ in the baseline without right context during training and decoding. Moreover, the results in Table 3 exhibit the convergence of the streaming model’s performance towards the non-streaming model with the proposed right-context attention mask. This convergence signifies the potential for deploying a streaming ASR model in place of its corresponding non-streaming counterpart, facilitated by increasing the decoding right context frames. Ultimately, these results affirm that a unified zipformer-based model can effectively serve both streaming and non-streaming applications through the proposed right-context chunked and hybrid attention masking training methods. Apart from unifying streaming and non-streaming models, the proposed approach adds flexibility to choose a balance between accuracy and latency by selecting an suitable number of right-context frames in decoding according to requirement.
Table 3: $\mathrm { W E R } ( \% )$ of the models trained on 12,460 hours of in-house conversational data, including $\mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ , $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ , and non-streaming model with in-house testsets.
# 4.2.1 Server-client setup
As discussed in Section 3.2, we deploy the large in-house conversation model in server-client environment. In Table 4, we show the WERs for the $\mathrm { L a r g e } _ { \mathrm { B a s e l i n e } }$ model with no right-context in decoding and the $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ model with 0, 32, and 64 right-context frames in decoding along with the non-streaming model $\mathrm { ( L a r g e _ { N S } ) }$ . We note that for the same model there is a difference in performance between the simulated streaming and real streaming (server-client) environments, because of the padding involved in the real streaming case. However, from Table 4 we can observe that the average WER of the in-house model improves from $9 . 0 \%$ to $8 . 2 \%$ with the streaming model, approaching the non-streaming model.
Table 4: $\mathrm { W E R } ( \% )$ of the $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ and nonstreaming models trained on 12,460 hours of in-house conversational data for different in-house testsets.
Table 5: Latency (sec) and RTFX values of the LargeRC-0-64-128-256 and $\mathrm { L a r g e } _ { \mathrm { N S } }$ models trained on 12,460 hours of in-house conversational data in serverclient setup for the long calls testset.
Apart from WER, latency or inference time plays a crucial role in industrial streaming ASR models. In Table 5, we show the latency and RTFX values of the LargeRC-0-64-128-256 model for different numbers of decoding right-context frames for concurrency of 100, 200 and 300. In this evaluation we use the long conversations testset in Table 1. From Table 5, we can observe that there is no significant degradation of user-perceived latency as right-context increases. The RTFX values of the streaming $\mathrm { L a r g e } _ { \mathrm { R C - 0 - 6 4 - 1 2 8 - 2 5 6 } }$ model are higher than that of the non-streaming model in all the cases. The greater RTFX demonstrates less inference time for the streaming model with rightcontext compared to the non-streaming model. For the streaming application, with the introduction of right-context we observe increase in accuracy and a small degradation in latency; for the nonstreaming use-case the accuracy drops with the re duction in inference time or latency. As we further increase the number of decoding right-context frames, the accuracy of streaming model eventually comes close to that of the non-streaming model. | There has been increasing interest in unifying streaming and non-streaming
automatic speech recognition (ASR) models to reduce development, training, and
deployment costs. We present a unified framework that trains a single
end-to-end ASR model for both streaming and non-streaming applications,
leveraging future context information. We propose to use dynamic right-context
through the chunked attention masking in the training of zipformer-based ASR
models. We demonstrate that using right-context is more effective in zipformer
models compared to other conformer models due to its multi-scale nature. We
analyze the effect of varying the number of right-context frames on accuracy
and latency of the streaming ASR models. We use Librispeech and large in-house
conversational datasets to train different versions of streaming and
non-streaming models and evaluate them in a production grade server-client
setup across diverse testsets of different domains. The proposed strategy
reduces word error by relative 7.9\% with a small degradation in user-perceived
latency. By adding more right-context frames, we are able to achieve streaming
performance close to that of non-streaming models. Our approach also allows
flexible control of the latency-accuracy tradeoff according to customers
requirements. | [
"cs.SD",
"cs.AI",
"eess.AS"
] |
# 1 Introduction
# 1.1 Background and Motivation
In the era of big data, processing massive data streams has become a crucial challenge across various domains. Mining valuable information in data streams has broad applications such as network traffic monitoring[1], social network analysis[2], financial transaction monitoring[3], and recommendation systems[4]. A critical task in data stream analysis is tracking frequent items, such as heavy hitters and heavy changers, which dominate the features of a data stream. Heavy hitters are the items that frequently occur in a data stream, while heavy changers are the items whose frequency changes heavily between two consecutive time intervals.
In real-world scenarios, the high velocity of incoming items and their unpredictable keys pose significant challenges for recording the desired information. To address these challenges, the research community has developed numerous sketch algorithms. These algorithms are probabilistic in nature and are designed to achieve low memory overhead and high update rates. Most sketch algorithms utilize hash table data structures to complete update operations in constant time, often enabling parallel or pipelined execution. However, due to the high time cost associated with memory access, these data structures must typically be deployed in on-chip memory, such as L2 cache or shared memory, which are scarce resources on most processors. Therefore, maintaining high accuracy with less memory overhead has become the development direction of sketch algorithms.
Sketch algorithms are primarily based on hash tables, where each item in the data stream is mapped to multiple buckets in the table using independent hash functions. Classic sketch algorithms, such as CM sketch[5], CU sketch[6], and Count sketch[7], offer both theoretical and practical guarantees for accurate frequency estimation of frequent items. However, since these traditional sketch algorithms cannot store the item keys, they are limited in applications like anomaly detection. Tracking keys presents a significant challenge because, while frequency estimates can be computed with reasonable accuracy, recording keys should ensure their completeness and correctness.
In response to this challenge, researchers have developed various algorithms to record item keys. They can be broadly categorized into two classes based on their key recording strategies. The first class includes explicit key recording sketch algorithms, which add key fields to each bucket in the hash table to store item key information[8–12]. However, these algorithms are susceptible to hash collisions, necessitating complex strategies for retaining the keys of frequently occurring items. For example, Elastic Sketch[9] employs an Ostracism strategy to evict less frequent keys and retain high-frequency ones. Nevertheless, these algorithms often require substantial memory to maintain a low miss rate, leading to inefficiencies.
The second class encompasses implicit key recording sketch algorithms, which aim to reduce memory usage by not directly recording keys[13–18]. Prior methods have developed key mixing or index-based encoding approaches for implicit key recording. In the key mixing approach, exemplified by FlowRadar, multiple keys are mixed within the key field. This method requires auxiliary data structures for decoding, and when the decoding fails, nearly all the recorded information is lost. In contrast, the index-based encoding approach, such as Reversible Sketch, embeds key information into bucket indices. While this method avoids auxiliary structures, it demands a large number of buckets to ensure decoding accuracy, leading to potential memory inefficiency.
# 1.2 Our Solution
To address the limitations of existing methods, we propose Hidden Sketch, a novel invertible sketch algorithm designed to record keys and frequencies with minimal memory overhead efficiently. Our algorithm draws inspiration from previous implicit key recording approaches while addressing their key deficiencies. The core idea behind Hidden Sketch lies in leveraging a hybrid design that integrates a CM Sketch for frequency estimation with a Reversible Bloom Filter for key encoding. By combining these components, Hidden Sketch fully exploits the information embedded in bucket indices and compactly encodes key information. Unlike traditional methods, where key encoding may require significant memory, the Reversible Bloom Filter achieves efficiency by using 1-bit buckets, drastically reducing memory requirements. At the same time, the CM Sketch is modeled as a system of linear equations, enabling accurate decoding of item frequencies when sufficient memory is available.
A key innovation of Hidden Sketch is its robust and systematic decoding process. The Reversible Bloom Filter identifies candidate keys with high efficiency, avoiding the exhaustive search overhead seen in traditional Bloom Filters. Meanwhile, we regard the CM Sketch as a system of linear equations, enabling precise decoding of item frequencies through established mathematical methods. By solving these equations, the algorithm ensures accurate recovery of frequency information for significant items. This hybrid strategy not only ensures precision but also overcomes the common limitations of prior methods, such as catastrophic information loss in FlowRadar or excessive memory demands in Reversible Sketch.
To track frequent items efficiently, we embed Hidden Sketch into a two-stage framework. The first stage employs a lightweight filter to pre-process the data stream. This filter efficiently excludes most low-frequency and unimportant items, ensuring that only significant items are passed to the second stage for precise processing. This two-stage design ensures that only frequent items are processed in detail, significantly reducing the computational and memory overhead.
The contributions of this paper can be summarized as below:
• We propose Hidden Sketch, a novel reversible sketch that can record both the key and frequency of items exactly with high space efficiency and reliability.
• We conduct extensive experiments on different tasks and show that our algorithm outperforms SOTA algorithms.
• We prove the memory bound of Hidden Sketch and show the space efficiency theoretically.
• We open-source our code on Github for future study[19].
The remainder of this paper is organized as follows. In $\ S 2$ , we first formalize the definitions of the data stream and tasks we address in this paper. Then we conduct a detailed analysis of prior works. We introduce the core design, Hidden Sketch, in $\ S 3$ , a fully invertible data structure that records items implicitly. Based on the Hidden Sketch, we introduce how to complete tasks such as heavy hitter detection in $\ S 4$ . We use experiments to evaluate our solution for
heavy hitter detection and heavy change detection tasks in $\ S 5$ .
Finally, we conclude this paper in $\ S 6$ .
# 2 Background and Related Work
In this section, we first formalize the definitions of data stream and the tasks we address in this paper. Then, we review existing methods, emphasizing their strengths and limitations.
# 2.1 Problem Statement
Data Stream Model: A data stream $s$ is formally defined as a sequence of items $\langle e _ { 1 } , e _ { 2 } , . . . , e _ { | S | } \rangle ( e _ { i } \in \mathcal { K } )$ , where $| S |$ is the total size of the data stream, and $\mathcal { K }$ is the key space of items. Items with the same key can appear more than once, and the frequency of an item key $e$ is defined as $\begin{array} { r } { f ( e ) : = \sum _ { e _ { i } = e } 1 } \end{array}$ .
Frequency Estimation: Given a data stream $s$ , this task aims to estimate the frequency $f ( e )$ of an querying item $e \in \mathcal K$ . Frequency estimation is the most basic task since many tasks are relative to the frequency of items, including heavy hitter detection and heavy changer detection.
• Heavy Hitter Detection: In this task, we want to identify items in a data stream whose frequency exceeds a certain threshold. Formally, given a data stream $s$ and a pre-defined threshold $T$ , the task is to find all items $e$ such that $f ( e ) \geq T$ . Note that both the key and the frequency of heavy hitters are needed.
• Heavy Changer Detection: This task involves identifying items whose frequency changes significantly between two contiguous windows of a data stream. Formally, give two contiguous and non-overlapping data stream windows $S _ { 1 }$ and $S _ { 2 }$ , the task is to find items $e \in \mathcal { K }$ such that $\left| f _ { S _ { 1 } } ( e ) - \right.$ $f _ { S _ { 2 } } ( e ) | > \Delta$ , where $f _ { S _ { i } } ( e )$ is the frequency of $e$ in window $S _ { i }$ , and $\Delta$ is a pre-defined threshold.
Accurate tracking of frequent items forms the core of both heavy hitter and heavy changer detection. The central challenge lies in achieving high-precision frequency estimation while maintaining the keys of these items. This dual demand amplifies complexity in resource-constrained environments.
# 2.2 Related Work
Existing sketch algorithms for recording keys and frequencies can be broadly classified into two categories: explicit key recording and implicit key recording. Each category has its unique advantages and limitations, which will be discussed in detail below.
2.2.1 Explicit key recording: Explicit key recording sketches[8– 12, 20] add a key field to each bucket in a hash table to directly store the key of the item hashed to the bucket. However, due to hash collisions, multiple keys may map to the same bucket, and these algorithms can only select one of them to record in the key field. To address this, various replacement strategies have been proposed to prioritize high-frequency keys. For instance, HeavyKeeper[8] employs a count-with-exponential-decay strategy to keep heavy hitters and reduce the effect of low-frequency items. Similarly, Elastic Sketch[9] utilizes an Ostracism strategy to evict unimportant keys from the key fields. Other algorithms, such as MV-Sketch[10], LD-Sketch[20], and TightSketch[12], also employ auxiliary bucket fields and delicate evicting strategies. The limitation of these algorithms lies in their need to protect the completeness the recorded keys. In a hash table, some buckets are never mapped by heavy hitters, while others are mapped by multiple heavy hitters due to hash collisions. The replacement strategies can only solve collisions between high-frequency and low-frequency items, while they are powerless against collisions between high-frequency items. To guarantee the recall rate of frequent items, these approaches have to allocate enough buckets, making some buckets empty or filled by low-frequency items, which is a waste of memory. Thus, the space efficiency of these approaches is naturally limited.
2.2.2 Implicit key recording: Implicit key recording sketches aim to reduce memory usage by encoding keys into the data structure rather than storing them explicitly[14–18]. They brilliantly encode keys into sketch data structures and decode them offline later. Specifically, there are two main encoding methods at present.
The first encoding method, introduced by FlowRadar [17], encodes keys by mixing them in a designated key field while maintaining a count of distinct keys in a separate field. During decoding, the algorithm identifies buckets mapped by a unique key. The key values in these buckets are extracted iteratively, leading to new buckets mapped by a unique key until either all keys are decoded or all buckets contain multiple distinct keys. This process can be modeled as a 2-core pruning operation on a hypergraph, where buckets represent nodes and items form hyperedges connecting nodes corresponding to hashed buckets. Then, the decoding process can be regarded as iteratively finding nodes of 1 degree and then eliminating the nodes and their hyperedge. Successful decoding is guaranteed if all the hyperedges are eliminated, i.e. no 2-cores exist in the hypergraph. Although this method only requires a linear number of buckets to achieve a high probability of a successful decoding process, it requires an additional Bloom Filter to identify distinct keys and a field for each bucket to store the number of distinct keys. Moreover, when the size of the Bloom Filter or the number of buckets is insufficient, the decoding process may fail, and nearly all the key information is lost, as the mixed keys in the key field are not recoverable.
The second method leverages bucket indices in sketches, avoiding additional structures by designing specialized hash functions. These algorithms exploit the skewed distribution of real-world data streams, where buckets mapped by heavy hitters possess significantly larger values. By identifying items associated with these buckets, these algorithms effectively decode high-frequency keys. To the extent of decoding results, these algorithms are equivalent to traversing the whole key space and finding the ones with extremely high frequency. To avoid the unacceptable overhead of traversal, these algorithms use special hash functions, which can recover the origin key from several hash values. For instance, Reversible Sketch uses modular hashing, which partitions the entire key into $q$ words, does $q$ hash functions on them respectively, and concatenates the $q$ hash values to get an entire hash function. When given several bucket indices, Reversible Sketch can easily report possible keys that can be mapped to them by separately handling the partitioned keys. Although it seems that these algorithms employ no extra memory, their encoding capacity is affected by the number of buckets. They need enough index space to fully encode the keys, which may exceed the need to report accurate frequency estimation and cause more memory waste. Moreover, when the threshold is lowered, buckets of heavy hitters cannot be significantly different from others, thus causing inaccurate detection.
# 2.3 Summary
Our analysis reveals that existing sketch designs face a fundamental dilemma between space efficiency and tracking reliability. The explicit key recording suffers from inherent space inefficiency due to hash collision, and implicit approaches struggle with decoding fragility. We propose Hidden Sketch, a novel implicit approach that independently encodes keys and frequencies, supporting a reliable decoding process and high space efficiency.
# 3 Hidden Sketch Design
We adopt the index-based encoding method to record keys. Prior approaches using this method directly use the counter index to encode keys. This causes redundant counters for frequency recording. Therefore, we separate the key encoding part from the frequency recording part. In the key encoding part, we use the index of 1-bit buckets to encode the keys. On this occasion, the CM Sketch degenerates into a Bloom Filter. We apply the index-based encoding method to the Bloom Filter and propose the Reversible Bloom Filter(RBF). The reversibility of RBF is reflected in that it provides a feasible key recovery process while not reducing the false positive rate of Bloom Filters.
As for the frequency recording part, we utilize a CM Sketch to enable a systematic decoding process. Specifically, after we decode the existing keys from the RBF, we can decode their exact frequencies by solving the linear equation system established by the CM Sketch. Meanwhile, the false positives reported by the RBF can be filtered since their decoded frequencies are zero.
# 3.1 Reversible Bloom Filter(RBF)
3.1.1 Data Structure: The Reversible Bloom Filter(RBF) employs a hierarchical structure to encode keys efficiently. On the one hand, the RBF is essentially a Bloom Filter with special hash functions for index-based key encoding. On the other hand, we divide the key conceptually based on a hierarchical tree structure and allocate a part of the memory for each tree node.
We illustrate a Reversible Bloom Filter recording 32-bit keys as an example in Fig. 1. We divide the 32-bit key into four 8-bit base key segments corresponding to the tree’s leaf nodes. The internal nodes represent concatenations of segments from their child nodes, progressively combining segments as one moves up the tree. We allocate a Bloom Filter for each internal node, represented by different colors in the block of Bloom Filters in Fig. 1. For the leaf nodes, we use a bitmap block array instead of Bloom Filters due to the small segment spaces. Each block contains multiple bitmaps that correspond to the leaf nodes respectively. When a key is inserted, we obtain the segments of the internal nodes and add them into the corresponding Bloom Filters. Simultaneously, we hash the complete key to a block in the bitmap block array and insert its leaf segments into their corresponding bitmaps in the hashed block. We also illustrate an insertion example of RBF in Fig. 1. When inserting the key 192.168.133.1, we add its internal node segments, including 192.168, 133.1, and 192.168.133.1, to the corresponding Bloom Filters. We also use a hash function $h$ that maps 192.168.133.1 to the second block in the bitmap block array. Within the block, there are four bitmaps corresponding to the four leaf nodes. We set the $1 9 2 \mathrm { n d }$ bit, 168th bit, 133rd, and 1st bits of the four bitmaps to 1, respectively.
Figure 1: An example of Reversible Bloom Filter with 32-bit key.
The Reversible Bloom Filter is also a Bloom Filter but with delicately designed hash functions. Specifically, the Bloom Filters of internal nodes can be seen as employing hash functions that only depend on the corresponding key segment. And the bitmap block array can be viewed as a Bloom Filter that uses hash functions $H a s h ( k e y ) = h ( k e y ) * 2 ^ { l } + s e g ( k e y )$ , where $h ( \cdot )$ is the hash function that maps $k e y$ to a certain block, 𝑙 is the length of leaf node segment, and $s e g ( \cdot )$ is the function that maps the key to its corresponding segment.
3.1.2 Key Set Recovery: The key recovery process in the Reversible Bloom Filter is efficient and systematic, enabling the reconstruction of encoded keys through a bottom-up approach in the tree. For each bitmap block in the array, we obtain a candidate set of each leaf segment by examining the bitmaps within the block. Next, we recursively concatenate the candidate sets of segments to create longer segment candidate sets, ultimately resulting in the candidate set of the complete key. Specifically, for each internal node $N$ , once we yield all the candidate sets of its child nodes, we can compute the Cartesian product of these sets to form a candidate key segment set of $\mathcal { N }$ . Then we filter the candidate set using the Bloom Filter of $\mathcal { N }$ . Formally, if an internal node $N$ has $k$ child nodes with candidate sets $S _ { 1 } , S _ { 2 } , . . . , S _ { k }$ , the candidate set of $N$ is computed as follows:
$$
S = \sigma _ { B F _ { N } } \{ s e g _ { 1 } \oplus . . . \oplus s e g _ { k } | s e g _ { i } \in S _ { i } , i = 1 , 2 , . . . , k \}
$$
In this equation, $\oplus$ represents the concatenation operator for binary strings, and $\sigma _ { B F _ { N } }$ denotes the filtering operation based on the Bloom Filter associated with $N$ . After computing the candidate sets for all internal nodes, the root node generates the candidate set for
# Algorithm 1: Key Set Recovering Process
1 let 𝑟𝑜𝑜𝑡 be the root node representing the entire key;
2 $S = \emptyset$ ;
3 for $i$ in [1..𝐵] do
𝑆 = 𝑆 ∪ 𝜎ℎ 𝑘𝑒𝑦 =𝑖 (getCandKeys(root, i));
5 return $s$ ;
6 Def getCandKeys(𝑛𝑜𝑑𝑒, 𝑖𝑛𝑑𝑒𝑥):
7 if node is leaf then
8 $S = \emptyset$ ;
9 let $M$ be the bitmap corresponding to 𝑛𝑜𝑑𝑒 in
𝑏𝑖𝑡𝑚𝑎𝑝𝐵𝑙𝑜𝑐𝑘 [𝑖𝑛𝑑𝑒𝑥];
10 for i in $[ 0 . . 2 ^ { n o d e . l } ]$ do
11 if $M [ i ] = = 1$ then
12 一 $S = S \cup \{ i \}$ ;
13 return S;
14 else
15 ${ \cal { S } } = \{ \epsilon \}$ ;
16 for each 𝑐ℎ𝑖𝑙𝑑 of 𝑛𝑜𝑑𝑒 do
17 $S = S \times$ getCandKeys(child, index);
18 $S = \sigma _ { B F _ { n o d e } } ( S )$ ;
19 return S;
the complete key. Each candidate key is then verified against the bitmap block array to ensure it hashes to the correct block.
Algorithm 1 illustrates the pseudocode of the recovering process. We utilize a recursive function getCandKeys to compute the candidate key set corresponding to 𝑛𝑜𝑑𝑒 induced by the bitmap block 𝑖𝑛𝑑𝑒𝑥. If 𝑛𝑜𝑑𝑒 is a leaf key segment node, it uses the corresponding bitmap in block 𝑖𝑛𝑑𝑒𝑥 to recover the candidate key set (lines 7-13). If 𝑛𝑜𝑑𝑒 has child nodes, it computes the Cartesian product of the candidate key set from its child nodes and then filters this set using the corresponding Bloom Filter (lines 16-18). The recovery process computes the union of the candidate key sets induced by the $B$ blocks and filters the candidate keys using the block hash function $h$ (lines 2-4).
It is important to note that the output set of the recovery process consists entirely of the keys whose mapping positions are set to 1 in the Reversible Bloom Filter. In other words, the recovery process is functionally equivalent to a traversal method that checks and verifies the entire key space within the Reversible Bloom Filter in terms of results. However, its time complexity is more efficient compared to the traversal method. This leads us to prove that the false positive rate of the Reversible Bloom Filter is comparable to that of a standard Bloom Filter. We will discuss this in detail in Appendix A.
# 3.2 Frequency Decoding with CM Sketch
We select the CM Sketch for frequency recording since it can be regarded as a linear mapping[21, 22]. Formally, suppose the number of possible items is $n$ , and the number of buckets employed by the CM Sketch is $m$ . Let $\vec { x }$ represents a column vector with $n$ dimensions, where the 𝑖th element corresponds to the frequency of item 𝑖. Let $\vec { y }$ denotes a column vector with $m$ dimensions, where the element on the $j$ th dimension presents the value of bucket $j$ . Then the CM Sketch can be mathematically expressed by the following equation:
$$
\Phi \cdot \vec { x } = \vec { y } ,
$$
where $\Phi$ is a $m \times n$ matrix. The element $\Phi _ { i , j }$ indicates that the 𝑖th bucket is incremented by $\Phi _ { i , j }$ if inserting an item $i$ .
As we have yielded the candidate key set from the key recovery process of RBF, we can construct the matrix $\Phi$ that maps the frequency vector of candidate keys to the value vector of buckets. $\vec { y }$ can be directly obtained from the recorded values in the CM Sketch. Therefore, we can decode the frequencies of candidate keys by solving the equation $\Phi \cdot { \vec { x } } = { \vec { y } }$ . However, the unique solution of the equation exists only when the rank of $\Phi$ equals the dimension of $\vec { x }$ . The frequencies can not be determined when the null space of $\Phi$ is not empty. Since the construction of matrix $\Phi$ depends on the hash functions of the CM Sketch, it can be regarded as a random matrix with the same sum on each column. Intuitively, when the matrix has sufficient rows, it is likely to be full rank. To illustrate the upper bound of the sufficient number, we introduce the pure bucket extraction process as the first step of our frequency decoding process.
The pure bucket extraction step is inspired by the decoding process of FlowRadar[17]. We identify a bucket of the CM Sketch as a pure bucket if only one candidate key is mapped to the bucket. In other words, the corresponding row in the matrix $\Phi$ has only one non-zero element. We can directly determine that key’s frequency as the bucket’s value. After determining a key’s frequency, we can extract it from other buckets it maps to and update their values. The extraction operation may cause the updated bucket to become a new pure bucket. The pure bucket extraction step is illustrated in Algorithm 2. We maintain a queue for the pure bucket, which is initialized by scanning all the buckets and enqueue the pure ones (lines 2-5). Then we iteratively extract items and find new pure buckets (lines 7-18). The loop is terminated when the queue is empty. That is to say, there are no buckets containing only one
Input: The candidate key set $s$ , the vector of bucket value 𝐵𝑢𝑐𝑘𝑒𝑡 [𝑚]. Output: A map of key-frequency pairs. 1 Construct a set array $k e y S e t [ m ]$ , where 𝑘𝑒𝑦𝑆𝑒𝑡 [𝑖] contains the keys mapping to 𝐵𝑢𝑐𝑘𝑒𝑡 [𝑖].; 2 𝑝𝑢𝑟𝑒𝑄𝑢𝑒 $$ empty queue; 3 for i in 1..𝑚 do 4 if 𝑘𝑒𝑦𝑆𝑒𝑡 [𝑖] $. s \mathrm { i z e } ( ) = 1$ then 5 𝑝𝑢𝑟𝑒𝑄𝑢𝑒.enqueue(𝑖); 6 𝑅𝑒𝑠𝑢𝑙𝑡 $$ empty map; $/ \star$ iterative resolution \*/ 7 while not 𝑝𝑢𝑟𝑒𝑄𝑢𝑒.isEmpty() do 8 $i \gets$ 𝑝𝑢𝑟𝑒𝑄𝑢𝑒.dequeque(); 9 if keySet[𝑖].isEmpty() then 10 continue// the key has been removed 11 let 𝑘𝑒𝑦 be the unique element in 𝑘𝑒𝑦𝑆𝑒𝑡 [𝑖]; 12 $f r e q \gets B u c k e t [ i ]$ ; 13 𝑅𝑒𝑠𝑢𝑙𝑡 .insert $( k e y , f r e q )$ ; $^ { \prime \star }$ extract the item from the buckets it maps $\star /$ 14 for bucket index $k$ hashed by 𝑘𝑒𝑦 do 15 $\lfloor u c k e t [ k ] \gets B u c k e t [ k ] - f r e q ;$ 16 𝑘𝑒𝑦𝑆𝑒𝑡 [𝑘].remove $[ k e y ]$ ; 17 if keySet[k].size() = 1 then 18 𝑝𝑢𝑟𝑒𝑄𝑢𝑒.enqueue(𝑘);
19 return Result;
key. If all the buckets are empty, the frequency decoding process is completed, i.e., all the item frequencies are decoded. As it has been proved in [23, 24], when using $k$ hash functions, the probability of failing to decode all the items is bounded by $O ( n ^ { - k + 2 } )$ , if $m > c _ { k } n$ where $\scriptstyle { c _ { k } }$ is a constant associates with $k$ .
The second step of the frequency decoding process addresses scenarios where the pure bucket extraction step fails to decode all the items. This failure occurs when every remaining bucket contains at least two keys, making it impossible to directly determine the frequency of any key through pure bucket analysis. In such cases, we resort to solving the equation $\Phi \cdot { \vec { x } } = { \vec { y } }$ based on Singular Value Decomposition(SVD). SVD is a robust method for solving linear equations, particularly when the coefficient matrix $\Phi$ is not of full rank. It decomposes $\Phi$ into three matrices $U \Sigma V ^ { T }$ , where $U , V$ are orthogonal matrices, and $\Sigma$ is a diagonal matrix containing the singular values of $\Phi$ . After that, we can compute a pseudo-inverse of $\Phi$ :
$$
\boldsymbol { \Phi } ^ { + } = \boldsymbol { V } \cdot \boldsymbol { \Sigma } ^ { + } \cdot \boldsymbol { U } ^ { T } ,
$$
where ${ \boldsymbol { \Sigma } } ^ { + }$ is also a diagonal matrix which transforms the non-zero elements of $\Sigma$ to their reciprocals. Then we can obtain a possible solution $\vec { x } = \Phi ^ { + } \cdot \vec { y }$ . When the rank of $\Phi$ equals the number of its columns, such a solution is the unique solution $\vec { x }$ . Otherwise, it is a possible solution with the least L2-norm.
Although we can directly derive the solution without the pure bucket extraction step, the SVD-based decoding method has some key limitations. The computation of SVD has a time complexity of $\overset { \cdot } { O } ( n ^ { 3 } )$ , which can be very slow for large matrices. Meanwhile, the computation of SVD and pseudo-inverse matrices suffer from numerical precision issues, leading to inaccurate results. By integrating the pure bucket extraction with the SVD-based decoding, we achieve a comprehensive frequency decoding framework. The pure bucket extraction step efficiently resolves buckets with unique keys, reducing the complexity of the problem. While the SVD-base decoding step handles the remaining unresolved buckets leveraging mathematical guarantees for approximate or exact solutions. This two-step process ensures high accuracy and scalability, making it well-suited for large-scale data streams.
# 3.3 Optimization
To improve the efficiency of the frequency decoding process, we introduce an optional optimization for the CM Sketch, leveraging the fact that only non-negative integer solutions are required. In the traditional CM Sketch insertion method, an item’s frequency is incremented by 1 in the hashed buckets. To enhance the decoding process, we propose replacing this increment with a prime number derived from the item’s hash. Specifically, we maintain an array of large prime numbers, denoted as PRIME[]. For each incoming item 𝑒, a prime number $p _ { e }$ is associated based on the hash function $g ( \cdot )$ , specifically $\scriptstyle { p _ { e } = \mathrm { P R I M E } [ g ( e ) ] }$ . The corresponding buckets are then incremented by $\mathit { p _ { e } }$ .
The modification on the CM sketch changes the non-zero elements of the equivalent matrix $\Phi$ into random primes instead of 1. This makes the integer solution sparser in the solution space. Formally, the matrix of the modified CM sketch can be written as:
$$
\Phi \cdot \Sigma _ { \cal P } \cdot \vec { x } = \vec { y } ,
$$
where $\Phi$ is also the matrix constructed by the mapping relationship between items and buckets, and $\Sigma _ { p }$ is a diagonal matrix that multiplies the elements of $\vec { x }$ with primes. Suppose $\vec { x } _ { a }$ be the actual vector of item frequencies, then the solutions of Equation (1) can be represented by $\hat { \vec { x } } = \vec { x } _ { a } + \vec { x } _ { \epsilon }$ , where $\vec { x } _ { \epsilon }$ is in the null space of $\Phi \cdot \Sigma _ { p }$ . Note that $\Sigma _ { \ / P } \cdot \vec { x } _ { \epsilon }$ can be also seen as a vector in the null space of $\Phi$ . However, its elements are divisible by the corresponding prime number, which makes it sparse in the solution space.
To solve the equation of modified CM Sketch, we also do the pure bucket extraction step to minimize the number of undetermined items. However, when we find there are multiple solutions in the SVD step (the rank of matrix $\Phi$ ), we use Integer Linear Programming (ILP) to solve the remaining items. The constraint set of ILP comprises the unsolved equations and the non-negative constraint on each variable, while the optimization object is not considered. Using this method, the decoding process can yield actual frequencies even when the matrix is not full rank. Thus, the encoding capacity of Hidden Sketch is enhanced.
Since we often use 4-byte counters for frequent item recording, surpassing the maximum counter significantly, some bits of the counters are never used in traditional CM Sketch. This optimization cleverly exploits the unused bits and enhances the encoding capacity of a CM Sketch.
Figure 2: Two-stage framework for tracking frequent items.
# 4 Implementation
In this section, we illustrate how to use Hidden Sketch to track frequent items and solve the heavy hitter detection and heavy change detection tasks. Note that heavy changers are also heavy hitters in at least one window. We can detect heavy hitters in both the two consecutive windows and find items whose reported frequencies change heavily between the two windows.
As the number of heavy hitters is relatively small compared with the total items, we do not want to precisely record insignificant items. Since most items in a data stream are infrequent, we employ a two-stage framework that separates frequent items from infrequent items. For frequent items, we record their keys and frequencies accurately, while for infrequent items, we only provide frequency estimation to save memory.
As depicted in Fig. 2, the first stage employs a lightweight cold filter to pre-process incoming items. The filter estimates item frequencies in real time and excludes most low-frequency items. Only items exceeding a predefined filtering threshold are passed to the second stage for precise recording. For instance, if the threshold of heavy hitter is predefined as 200, we can use a CU sketch[6] with 8-bit buckets width as the code filter. Each incoming item is hashed into multiple buckets, and the bucket with the lowest value is incremented. If the value after the increment exceeds 200, the item is also inserted into the Hidden Sketch in the second stage; otherwise, no further action is taken. This design ensures that the second stage focuses solely on high-value data, reducing both computational and memory overhead.
When handling heavy changer detection, we obtain the heavy hitters in two consecutive windows. For each heavy hitter, we query its frequency in the other window. Specifically, if it is also a heavy hitter in the other window, the reported frequency is the decoded result after adding the filtering threshold. Otherwise, we query it in the cold filter using its query operation. Then, we check whether the absolute value of the difference between the item frequencies in the two windows exceeds the heavy change threshold. If so, we report the item and its change value.
# 5 Experiment
In this section, we use experiment results to evaluate the performance of our algorithm on the tasks given in 2.1. The experiment results show that our algorithm outperforms previous approaches on frequent item tracking tasks. It can achieve relatively high accuracy when the memory is extremely limited.
# 5.1 Experimental Setup
5.1.1 Datasets. We use three real-world datasets to generate our workloads.
• CAIDA: We use public traffic traces of CAIDA[25], which records 5-tuple of each packet in a data center. We divide each trace into 5s-long time intervals, which contain about 2.2M items and 60K distinct keys in each interval.
MAWI: The MAWI dataset is sourced from a traffic data repository maintained by the MAWI Working Group of the WIDE Project[26]. Each trace is divided into 45s-long time intervals containing approximately 2.5M items and 50K distinct keys.
IMC: The IMC dataset comes from an empirical study of the network-level traffic characteristics of current data centers[27]. We set time intervals so that there are around 2M items and 9K distinct keys within each interval.
# 5.1.2 Tasks.
• Frequency estimation: The frequency estimation task queries all the items in a time window and reports their frequency estimation. It reflects the most basic feature of data streams. • Heavy hitter detection: We use algorithms to detect heavy hitters whose frequency exceeds $0 . 0 1 \%$ of the total frequency. Algorithms should provide the set of detected heavy hitters and report their accurate frequencies simultaneously. • Heavy changer detection: Heavy changers are items whose frequency changes heavily in two consecutive time windows. We use algorithms to detect heavy changers whose frequency change exceeds $0 . 0 5 \%$ of the total change.
# 5.1.3 Evaluation metrics.
• F1 score: $\scriptstyle { \frac { 2 \times P R \times R R } { P R + R R } }$ , where $P R$ denotes the precision rate, and $R R$ denotes the recall rate. We use F1 score to evaluate the accuracy of heavy hitter and heavy changer detection. • ARE(Average Relative Error): 𝑛1 Σ𝑖𝑛=1 |𝑓𝑖 𝑓−𝑓𝑖 | , w here $n$ is the number of distinct items, $f _ { i }$ and $\hat { f } _ { i }$ are the true and estimated frequency of item 𝑖 respectively. We use ARE to evaluate the precision of frequency estimation of heavy hitters and all the items.
5.1.4 Algorithms and parameters. We compare our algorithm with several algorithms, including FlowRadar[17], Reversible Sketch[14], Sketchlearn[18], Elastic Sketch[9], and UnivMon[28]. Reversible Sketch and Sketchlearn are implicit approaches that can detect heavy hitters and heavy changers, while Elastic Sketch and UnivMon are explicit approaches. We select these algorithms as baselines since they can perform well on all the tasks we focus on.
• FlowRadar: Although FlowRadar records all the item keys and frequencies, we can embed it into the two-stage framework. We allocate enough memory for the FlowRadar to keep 1800 distinct items and use the remaining memory for the cold filter.
• Reversible Sketch: For Reversible Sketch, we use 4 arrays as recommended in [14]. We adjust the width of arrays to adapt to different memory. We allocate the spare memory to the second phase, which is a CM sketch using 3 hash functions.
Elastic Sketch: We use the hardware version of Elastic Sketch. We allocate 2048 buckets for each array.
Sketchlearn: We use one bucket array in each level, which is the best parameter in our experiment result.
UnivMon: We use 14 levels for UnivMon, where each level contains a Count sketch[7] with 5 arrays. For each level, we use a heap of 900 buckets to record the heavy hitters.
Ours: We divide the 32-bit key into 4 8-bit partial keys and organize them in the tree structure as illustrated in Figure 1. We reserve enough memory for the Hidden Sketch with a capacity of 1800 items, which is about 20KB of memory. For the cold filter, we select a CU sketch with 8-bit counters.
# 5.2 Results and Analysis
Figures 3, 4, and 5 illustrate the accuracy of the algorithms on the three datasets. We test the algorithms under memory budgets ranging from 100KB to 500KB, demonstrating that our method consistently achieves the highest accuracy on all tracking tasks and datasets.
5.2.1 Frequency Estimation. Figures 3(a), 4(a), and 4(a) illustrate the ARE of frequency estimation of different algorithms. We find that Hidden Sketch consistently outperforms all baselines. On the CAIDA dataset, Hidden Sketch achieves an ARE reduction of at least $6 8 . 1 \%$ compared to the closest competitor. On the MAWI dataset, the ARE of Hidden Sketch is at least $4 1 . 1 \%$ less than baseline algorithms. As for the IMC dataset, Hidden Sketch also achieves the lowest ARE on each memory overhead. When the memory is 500KB, the ARE even reduces to 0.
The superior ARE performance of Hidden Sketch in frequency estimation can be attributed to its efficient memory allocation strategy. The majority of the ARE comes from estimation errors of infrequent items. Hidden Sketch dedicates a small portion of memory to accurately record frequent items, while the remaining memory is used for frequency estimation with smaller-sized counters. This design significantly increases the number of available counters, thereby improving estimation accuracy.
5.2.2 Heavy Hitter Detection. Figures 3(b), 4(b), and 5(b) present the F1 score of different algorithms on the heavy hitter detection task. On the three datasets, the F1 score of Hidden Sketch is always near to 1, even when the memory is limited to 100KB. Implicit approaches such as Reversible Sketch and Sketchlearn employ unreliable recovering processes, while explicit approaches such as Elastic Sketch require enough buckets to reserve heavy hitters. Therefore, they could not achieve comparable F1 scores on heavy hitter detection tasks when the memory is limited.
104 E 1.0 10° evesible + etrtear 1.0
E1 FlowRadar E1 FlowRadar
10 0.2 0.2 Beversible Skechleam 0.0 HwModar 10-4 0.0 FowRach
100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 Memory(KB) Memory(KB) Memory(KB) Memory(KB)
(a) ARE of frequency estimation (b) F1 score of heavy hitter detection (c) ARE of heavy hitter detection (d) F1 score of heavy changer detection
10 库 101 佳 E 10 101 F04 佳 Ours 10 F04 E 0.0 0.0 FlowRadar 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 Memory(KB) Memory(KB) Memory(KB) Memory(KB) (a) ARE of frequency estimation (b) F1 score of heavy hitter detection (c) ARE of heavy hitter detection (d) F1 score of heavy changer detection
10 E 广 10 E 10° ? FlowRadar
10 PE 102
10-4 0.2 Reveritble 10 0.2 + ? Sketchlearn Reversible
0t 0.0 0 0.0 FlowRada
100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 100 200 300 400 500 Memory(KB) Memory(KB) Memory(KB) Memory(KB)
(a) ARE of frequency estimation (b) F1 score of heavy hitter detection (c) ARE of heavy hitter detection (d) F1 score of heavy changer detection
Figures 3(c), 4(c), and 5(c) represent the ARE of different algorithms for heavy hitters’ frequency estimation. The ARE of Hidden Sketch is always 1 order lower than the best baseline algorithms. Especially on the IMC dataset, Hidden Sketch provides zero-error estimation for heavy hitters. Different from baseline algorithms, the decoding process of Hidden Sketch recovers the exact frequency of items. The error of frequent items only comes from its online estimation in the cold filter before it is inserted into the Hidden Sketch.
5.2.3 Heavy Changer Detection. Figures 3(d), 4(d), and 5(d) show the F1 score of different algorithms on the heavy changer detection task. Hidden Sketch also achieves F1 scores near to 1 in all the datasets and outperforms other algorithms. Hidden Sketch can report frequent item sets as the candidate heavy changers, and it can estimate item frequencies accurately. Therefore, Hidden Sketch can report heavy changers accurately by monitoring the frequency changes of frequent items. Although Reversible Sketch and Sketchlearn can detect heavy changers through the difference in the sketches of two windows, their recovery processes are unreliable since items with similar frequencies would confuse each other. | Modern data stream applications demand memory-efficient solutions for
accurately tracking frequent items, such as heavy hitters and heavy changers,
under strict resource constraints. Traditional sketches face inherent
accuracy-memory trade-offs: they either lose precision to reduce memory usage
or inflate memory costs to enable high recording capacity. This paper
introduces Hidden Sketch, a space-efficient reversible data structure for key
and frequency encoding. Our design uniquely combines a Reversible Bloom Filter
(RBF) and a Count-Min (CM) Sketch for invertible key and frequency storage,
enabling precise reconstruction for both keys and their frequencies with
minimal memory. Theoretical analysis establishes Hidden Sketch's space
complexity and guaranteed reversibility, while extensive experiments
demonstrate its substantial improvements in accuracy and space efficiency in
frequent item tracking tasks. By eliminating the trade-off between
reversibility and space efficiency, Hidden Sketch provides a scalable
foundation for real-time stream analytics in resource-constrained environments. | [
"cs.DB"
] |
# 1 Introduction
Database engines take advantage of physical design such as index structures, zone maps [32] and partitioning to prune irrelevant data as early as possible during query evaluation. In order to prune data, database systems need to determine statically (at query compile time) what data is needed to answer a query and which physical design artifacts to use to skip irrelevant data. For instance, to answer a query with a WHERE clause condition $\mathrm { ~ \tt ~ A ~ } = \mathrm { ~ 3 ~ }$ filtering the rows of a table R, the optimizer may decide to use an index on $\mathbb { A }$ to filter out rows that do not fulfill the condition. However, as was demonstrated in [37], for important classes of queries like queries involving top- $\mathbf { \nabla } \cdot \mathbf { k }$ and aggregation with HAVING, it is not possible to determine statically what data is needed, motivating the use of dynamic relevance analysis techniques that determine during query execution what data is relevant to answer a query. In [37] we introduced such a dynamic relevance analysis technique called provenance-based data skipping (PDBS). In PDBS, we encode what data is relevant for a query as a so-called provenance sketch. Given a range-partition of a table accessed by a query, a provenance sketch records which fragments
SELECT brand, SUM(price $\star$ numSold) AS rev
FROM sales
GROUP BY brand
HAVING SUM(price \* numSold) > 5000
sales
Figure 1: Example query and relevant subsets of the database.
of the partition contain provenance. That is, provenance sketches compactly encode an over-approximation of the provenance of a query. [37] presents safety conditions that ensure a sketch is sufficient, i.e., evaluating the query over the data represented by the sketch is guaranteed to produce the same result as evaluating the query over the full database. Thus, sketches are used to speed up queries by filtering data not in the sketch.
EXAMPLE 1.1. Consider the database shown in Fig. 1 and query $Q _ { T o p }$ that returns products whose total sale volume is greater than $\$ 5000$ . The provenance of the single result tuple (𝐴𝑝𝑝𝑙𝑒, 5074) are the two tuples (tuples $s _ { 3 }$ and $s _ { 4 }$ shown with purple background), as the group for Apple is the only group that fulfills the HAVING clause. To create a provenance sketch for this query, we select a rangepartition of the sales table that optionally may correspond to the physical storage layout of this table. For instance, we may choose to partition on attribute price based on ranges $\phi _ { p r i c e }$ :
$$
{ \tiny \begin{array} { c } { 0 . 6 0 0 } \end{array} } \mathrm { , } \rho _ { 2 } = [ 6 0 1 , 1 0 0 0 ] , \rho _ { 3 } = [ 1 0 0 1 , 1 5 0 0 ] , \rho _ { 4 } = [ 1 5 0 1 , 1 0 0 0 0 ] .
$$
In Fig. 1, we show the fragment 𝑓𝑖 for the range $\rho _ { i }$ each tuple belongs to. Two fragments $( f _ { 3 }$ and $f _ { 4 }$ highlighted in red) contain provenance and, thus, the provenance sketches for $Q _ { T o p }$ wrt. $F _ { s a l e s , p r i c e }$ is $\mathcal { P } =$ $\{ \rho _ { 3 } , \rho _ { 4 } \}$ . Evaluating the query over the sketch’s data is guaranteed to produce the same result as evaluation on the full database.1
Figure 2: IMP manages a set of sketches. For each incoming query, IMP determines whether to (i) capture a new sketches, (ii) use an existing non-stale sketch, or (iii) incrementally maintain a stale sketch and then utilize the updated sketch to answer the query.
As demonstrated in [37], provenance-based data skipping can significantly improve query performance — we pay upfront for creating sketches for some of the queries of a workload and then amortize this cost by using sketches to answer future queries by skipping irrelevant data. To create, or capture, a sketch for a query $\boldsymbol { Q }$ we execute an instrumented version of $\boldsymbol { Q }$ . Similarly, to use a sketch for a query $\boldsymbol { Q }$ , this query is instrumented to filter out data that does not belong to the sketch. For instance, consider the sketch for $Q _ { T o p }$ from Ex. 1.1 containing two ranges $\rho _ { 3 } = [ 1 0 0 1 , 1 5 0 0 ]$ and $\rho _ { 4 } = [ 1 5 0 1 , 1 0 0 0 0 ]$ . To skip irrelevant data, we create a disjunction of conditions testing that each tuple passing the WHERE clause belongs to the sketch, i.e., has a price within $\rho _ { 3 }$ or $\rho _ { 4 }$ :2
WHERE (price BETWEEN 1001 AND 1500) OR (price BETWEEN 1501 AND 10000)
PDBS enables databases to exploit physical design for new classes of queries, significantly improving the performance of aggregation queries with HAVING and top-k queries [37] and, more generally, any query where only a fraction of the database is relevant for answering the query. For instance, for a top-k query only tuples contributing to a result tuple in the top-k are relevant, but which tuples are in the top- $\mathbf { \nabla } \cdot \mathbf { k }$ can only be determined at runtime. A counterexample are queries with selection conditions with low selectivity for which the database can effectively filter the data without PDBS.
However, just like materialized views, a sketch captured in the past may no longer correctly reflect what data is needed (has become stale) when the database is updated. The sketch then has to be maintained to be valid for the current version of the database.
EXAMPLE 1.2 (STALE SKETCHES). Continuing with our running example, consider the effect of inserting a new tuple
𝑠8 = (8, HP, HP ProBook 650 G10, 1299, 1)
into relation sales. Running $Q _ { T o p }$ over the updated table returns a second result tuple (HP, 6194) as the total revenue for HP is now above the threshold specified in the HAVING clause. For the updated database, the three tuples for HP also belong to the provenance. Thus, the sketch has become stale as it is missing the range 𝜌2 which contains these tuples. Evaluating $Q _ { T o p }$ over the outdated sketch leads to an incorrect result that misses the group for HP.
Consider a partition $F$ of a table $R$ accessed by a query $\boldsymbol { Q }$ . We use $\boldsymbol { Q } _ { R , F }$ to denote the capture query for $\boldsymbol { Q }$ and $F$ , generated using the rewrite rules from [37]. Such a query propagates coarse-grained provenance information and ultimately returns a sketch. A straightforward approach to maintain sketches under updates is full maintenance which means that we rerun the sketch’s capture query $Q _ { R , F }$ to regenerate the sketch. Typically, $\boldsymbol { Q _ { R , F } }$ is more expensive than $\boldsymbol { Q }$ . Thus, frequent execution of capture queries is not feasible. Alternatively, we could employ incremental view maintenance (IVM) techniques [23, 26] to maintain $\boldsymbol { Q } _ { R , F }$ . However, capture queries use specialized data types and functions to efficiently implement common operations related to sketches. For instance, we use bitvectors to encode sketches compactly and utilize optimized (aggregate) functions and comparison operators for this encoding. To give two concrete examples, a function implementing binary search over the set of ranges for a sketch is used to determine which fragment an input tuple belongs to and an aggregation function that computes the bitwise-or of multiple bitvectors is used to implement the union of a set of partial sketches. To the best of our knowledge these operations are not supported by state-of-the-art IVM frameworks. Furthermore, sketches are compact over-approximations of the provenance of a query that are sound: evaluating the query over the sketch yields the same result as evaluating it over the full database. It is often possible to further over-approximate the sketch, trading improved maintenance performance for increased sketch size. Existing IVM methods do not support such trade-offs as they have to ensure that incremental maintenance yields the same result as full maintenance.
In this work, we study the problem of maintaining sketches under updates such that a sketch created in the past can be updated to be valid for the current state of the database. Towards this goal we develop an incremental maintenance framework for sketches that respects the approximate nature of sketches, has specialized data structures for representing data annotated with sketches, and maintenance rules tailored for data annotated with sketches.
We start by introducing a data model where each row is associated with a sketch and then develop incremental maintenance rules for operators over such annotated relations. We then present an implementation of these rules in an in-memory incremental engine called IMP (Incremental Maintenance of Provenance Sketches). The input to this engine is a set of annotated delta tuples (tuples that are inserted / deleted) that we extract from a backend DBMS. To maintain a sketch created by a capture query $\boldsymbol { Q } _ { R , F }$ at some point in the past, we extract the delta between the current version of the database and the database instance at the original time of capture (or the last time we maintained the sketch) and then feed this delta as input to our incremental engine to compute a delta for the sketch. IMP outsources some of the computation to the backend database.
This is in particular useful for operations like joins where deltas from one side of the join have to be joined with the full table on the other side similar to the delta rule $\Delta R \bowtie S$ used in standard incremental view maintenance. Additionally, we present several optimizations of our approach: (i) filtering deltas determined by the database to prune delta tuples that are guaranteed to not affect the result of incremental maintenance and (ii) filtering deltas for joins using bloom filters. IMP is effective for any query that benefits from sketches, e.g., queries with HAVING, as long as the cost of maintaining sketches is amortized by using sketches for answering queries.
In summary, we present IMP, the first incremental engine for maintaining provenance sketches. Our main contributions are:
• We develop incremental versions of relational algebra operators for sketch-annotated data.
• We implement these operators in IMP, an in-memory engine for incremental sketch maintenance. IMP enables PDBS for any DBMS by acting as a middleware between the user and the database that manages and maintains sketches.
• We experimentally compare IMP against full maintenance and against a baseline that does not use PDBS using TPC-H, real world datasets and synthetic data. IMP outperforms full maintenance, often by several orders of magnitude. Furthermore, PDBS with IMP significantly improves the performance of mixed workloads including both queries and updates.
The remainder of this paper is organized as follows: Sec. 2 presents an overview of IMP. We discuss related work in Sec. 3. We formally define incremental maintenance of sketches and introduce our annotated data model in Sec. 4. In Sec. 5, we introduce incremental sketch maintenance rules for relational operators and prove their correctness in Sec. 6. We discuss IMP’s implementation in Sec. 7, present experiments in Sec. 8, and conclude in Sec. 9.
# 2 Overview of IMP
Fig. 2 shows a overview of IMP that operates as a middleware between the user and a DBMS. We highlight parts of the system that utilize techniques from [37]. The dashed blue pipeline is for capture rewrite and dashed green pipeline is for use rewrite. Users send SQL queries and updates to IMP that are parsed using IMP’s parser and translated into an intermediate representation (relational algebra with update operations). The system stores a set of provenance sketches in the database. For each sketch we store the sketch itself, the query it was captured for, the current state of incremental operators for this query, and the database version it was last maintained at or first captured at for sketches that have not been maintained yet. As sketches are small (100s of bytes), we treat sketches as immutable and retain some or all past versions of a sketch. This has the advantage that it avoids write conflicts (for updating the sketch) between concurrent transactions that need to update to different versions of the sketch. We assume that the DBMS uses snapshot isolation and we can use snapshot identifiers used by the database internally to identify versions of sketches and of the database. For systems that use other concurrency control mechanisms, IMP can maintain version identifiers. Furthermore, the system can persist the state that it maintains for its incremental operators in the database. This enables the system to continue incremental maintenance from a consistent state, e.g., when the database is restarted, or when we are running out of memory and need to evict the operator states for a query. IMP enables PDBS for workloads with updates on-top of any SQL databases.
IMP supports multiple incremental maintenance strategies. Under eager maintenance, the system incrementally maintains each sketch that may be affected by the update (based on which tables are referenced by the sketch’s query) by processing the update, retrieving the delta from the database, and running the incremental maintenance. Eager maintenance can be configured to batch updates. If the operator states for a sketch’s query are not currently in memory, they will be fetched from the database. The updates to the sketches determined by incremental maintenance are then directly applied. Under lazy maintenance, the system passes updates directly to the database. When a sketch is needed to answer a query, this triggers maintenance for the sketch. For that, IMP fetches the delta between the version of the database at the time of the last maintenance for the sketch and the current database state and incrementally maintains the sketch. The result is a sketch that is valid as of the current state of the database. More advanced strategies can be designed on top of these two primitives, e.g., triggering eager maintenance during times of low resource usage or eagerly maintaining sketches for queries with strict response time requirements to avoid slowing down such queries when maintenance is run for a large batch of updates.
For queries sent by the user, IMP first determines whether there exists a sketch that can be used to answer the query $\boldsymbol { Q }$ . For that, it applies the mechanism from [37] to determine whether a sketch captured for a query $Q ^ { \prime }$ in the past can be safely used to answer $Q$ If such a sketch $\mathcal { P }$ exists, we determine whether $\mathcal { P }$ is stale. If that is the case, then IMP incrementally maintains the sketch (solid red pipeline). Afterwards, the query $\boldsymbol { Q }$ is instrumented to filter input data based on sketch $\mathcal { P }$ and then the instrumented query is sent to the database and its results are forwarded to the user (dashed green pipeline)[37]. If no existing provenance sketch can be used to answer $\boldsymbol { Q }$ , then IMP creates a capture query for $\boldsymbol { Q }$ and evaluates this query to create a new sketch $\mathcal { P }$ (dashed blue pipeline) [37]. This sketch is then used to answer $\boldsymbol { Q }$ (dashed green pipeline) [37]. IMP is an in-memory engine, exploiting the fact that sketches are small and that deltas and state required for incremental maintenance are typically small enough to fit into main memory or can be split into multiple batches if this is not that case.
# 3 Related Work
Provenance. Provenance can be captured by annotating data and propagating these annotations using relational queries or by extending the database system [25] [39] [38]. Systems like GProM [7], Perm [19], Smoke [41], Smoked Duck [33], Links [16], ProvSQL[42] and DBNotes[8] capture provenance for SQL queries. In [37], we introduced provenance-based data skipping (PBDS). The approach captures sketches over-approximating the provenance of a query and utilizes these sketches to speed-up subsequent queries. We present the first approach for maintaining sketches under updates, thus, enabling efficient PBDS for databases that are subject to updates.
Incremental View Maintenance (IVM). View maintenance has been studied extensively [9, 11, 17, 23, 27, 43]. [22, 44] gives an overview of many techniques and applications of view maintenance.
Figure 3: Glossary
Early work on view maintenance, e.g., [9, 11], used set semantics. This was later expanded to bag semantics (e.g., [13, 20]). We consider bag semantics. Materialization has been studied for Datalog as well [21, 23, 34]. Incremental maintenance algorithms for iterative computations have been studied in [1, 10, 35, 36]. [26] proposed higher-order IVM. [45] maintains aggregate views in temporal databases. [40] proposes a general mechanism for aggregation functions. [2, 47] studied automated tuning of materialized views and indexes in databases. As mentioned before, existing view maintenance techniques can not be directly applied for provenance sketches maintenance, since [37] uses specialized data types and functions to efficiently handle sketches during capture, which are not supported in start-of-the-art IVM systems. Furthermore, classical IVMs solutions have no notion of over-approximating query results and, thus, can not trade sketch accuracy for performance. Several strategies have been studied for maintaining views eagerly and lazily. For instance, [14] presented algorithms for deferred maintenance and [9, 12, 23] studied immediate view maintenance. Our approach supports both cases: immediately maintaining sketches after each update or sketches can be updated lazily when needed.
Maintaining Provenance. [46] presents a system for maintenance of provenance in a distributed Datalog engine. In contrast to our work, [46] is concerned with efficient distributed computation and storage for provenance. Provenance maintenance has to deal with large provenance annotations that are generated by complex queries involving joins and operations like aggregation that compute a small number of result tuples based on a large number of inputs. [46] addresses this problem by splitting the storage of provenance annotations across intermediate query results requiring recursive reconstruction at query time. In contrast, provenance sketches are small and their size is determined upfront based on the partitioning that is used. Because of this and because of their coarse-grained nature, sketches enable new optimizations, including trading accuracy for performance.
# 4 Background and Problem Definition
In this section we introduce necessary background and introduce notation used in the following sections. Let $\mathbb { U }$ be a domain of values. An instance $R$ of an n-ary relation schema $\operatorname { S C H } ( R ) = ( a _ { 1 } , \ldots , a _ { n } )$ is a function $\mathbb { U } ^ { n } \to \mathbb { N }$ mapping tuples to their multiplicity. We use $\{ \cdot \}$ to denote bags and $t ^ { n } \in R$ to denote tuple $t$ with multiplicity $n$ in relation $R$ , i.e., $R ( t ) = n$ . A database $D$ is a set of relations $R _ { 1 }$ to $R _ { m }$ . The schema of a database $\operatorname { S C H } ( D )$ is the set of relation schemas ${ \mathrm { S C H } } ( R _ { i } )$ for $i \in [ 1 , m ]$ . Fig. 4 shows the bag semantics relational algebra used in this work. We use $\operatorname { S C H } ( Q )$ to denote the schema of the query $\boldsymbol { Q }$ and $Q ( D )$ to denote the result of evaluating query $\boldsymbol { Q }$ over database $D$ . Selection $\sigma _ { \boldsymbol { \theta } } ( R )$ returns all tuples from relation $R$ which satisfy the condition $\theta$ . Projection $\Pi _ { A } ( R )$ projects all input tuples on a list of projection expressions. Here, $A$ denotes a list of expressions with potential renaming (denoted by $e a$ ) and $t . A$ denotes applying these expressions to a tuple $t$ . For example, $a + b c$ denotes renaming the result of $a + b$ as $c$ . $R \times S$ is the cross product for bags. For convenience we also define join $R \bowtie _ { \theta } S$ and natural join $R \Join S$ in the usual way. Aggregation $\gamma _ { f ( a ) ; G } ( R )$ groups tuples according to their values in attributes $G$ and computes the aggregation function $f$ over the bag of values of attribute $a$ for each group. We also allow the attribute storing $f ( a )$ to be named explicitly, e.g., $\gamma _ { f ( a ) x ; G } ( R )$ , renames $f ( a )$ as $x$ . Duplicate removal $\delta ( R )$ removes duplicates (definable using aggregation). Top-K $\tau _ { k , O } ( R )$ returns the first $k$ tuples from the relation $R$ sorted on order-by attributes $O$ . We use ${ < } _ { O }$ to denote the order induced by $O$ . The position of a tuple in $R$ ordered on $O$ is denoted by $\mathbf { p o s } ( t , R , O )$ and defined as: $\begin{array} { r } { { \bf p o s } ( t , R , O ) ~ = ~ \sum _ { t ^ { \prime } < o t \wedge t ^ { m } \in R } m } \end{array}$ . Fig. 3 shows an overview of the notations used in this work.
Figure 4: Bag Relational Algebra
# 4.1 Range-based Provenance Sketches
We use provenance sketches to concisely represent a superset of the provenance of a query (a sufficient subset of the input) based on horizontal partitions of the input relations of the query.
4.1.1 Range Partitioning. Given a set of intervals over the domains of a set of partition attributes $A \subset { \mathrm { S C H } } ( R )$ , range partitioning determines membership of tuples to fragments based on their $A$ values. For simplicity, we define partitioning for a single attribute $a$ , but all of our techniques also apply when $| A | > 1$ .
DEFINITION 4.1 (RANGE PARTITION). Consider a relation $R$ and $a \ \in \ { \mathrm { S C H } } ( R )$ . Let ${ \mathbb D } ( a )$ denote the domain of 𝑎 and $\phi \ =$ $\{ \rho _ { 1 } , . . . , \rho _ { n } \}$ be a set of intervals $[ l , u ] \subseteq { \mathbb { D } } ( a )$ such that $\textstyle \bigcup _ { i = 0 } ^ { n } \rho _ { i } =$ ${ \mathbb D } ( a )$ and $\rho _ { i } \cap \rho _ { j } \ = \ \varnothing$ for $i \neq j$ . The range-partition of $R$ on 𝑎 according to $\phi$ denoted as $F _ { \phi , a } ( R )$ is defined as:
$$
F _ { \phi , a } ( R ) = \{ R _ { \rho _ { 1 } } , . . . , R _ { \rho _ { n } } \} \quad w h e r e \quad R _ { \rho } = \left\{ t ^ { n } \ | \ t ^ { n } \in R \wedge t . a \in \rho \right\} .
$$
We will use $F$ instead of $F _ { \phi , a }$ if $\phi$ and $a$ are clear from the context and $f , f ^ { \prime } , f _ { i }$ , etc. to denote fragments. We also extend range partitioning to databases. For a database $D = \{ R _ { 1 } , \ldots , R _ { n } \}$ , we use $\Phi$ to denote a set of range - attribute pairs $\{ ( \phi _ { 1 } , a _ { 1 } ) , . . . , ( \phi _ { n } , a _ { n } ) \}$ such that $F _ { \phi _ { i } , a _ { i } }$ is a partition for $R _ { i }$ . Relations $R _ { i }$ that do not have a sketch can be modeled by setting $\phi _ { i } \ = \ \{ [ m i n ( { \mathbb D } ( a _ { i } ) ) , m a x ( { \mathbb D } ( a _ { i } ) ) ] \}$ , a single range covering all domain values.
4.1.2 Provenance Sketches. Consider a database $D$ , query $\boldsymbol { Q }$ , and a range partition $F _ { \Phi }$ of $D$ . We use $P ( Q , D ) \ \subseteq \ D$ to denote the provenance of $\boldsymbol { Q }$ wrt. $D$ . For the purpose of PDBS, any provenance model that represents the provenance of $\boldsymbol { Q }$ as a subset of
$D$ can be used as long as the model guarantees sufficiency3 [18]: $Q ( P ( Q , D ) ) = Q ( D )$ . A provenance sketch $\mathcal { P }$ for $\boldsymbol { Q }$ according to $\Phi$ is a subset of the ranges $\phi _ { i }$ for each $\phi _ { i } \in \Phi$ such that the fragments corresponding to the ranges in $\mathcal { P }$ fully cover $\boldsymbol { Q }$ ’s provenance within each $R _ { i }$ in $D$ , i.e., ${ \cal P } ( Q , D ) \cap { \cal R } _ { i }$ . We will write $\boldsymbol { \rho } \in \boldsymbol { \Phi }$ to denote that $\rho \in \phi _ { i }$ for some $\phi _ { i } \in \Phi$ and $D _ { \rho }$ for $\rho$ from $\phi _ { i }$ to denote the subsets of the database where all relations are empty except for $R _ { i }$ which is set to $R _ { i , \rho }$ , the fragment for $\rho$ . We use $\mathcal { P } _ { \Phi } ( D , \Phi , Q ) \subseteq \Phi$ to denote the set of ranges whose fragments overlap with the provenance $P ( Q , D )$ :
$$
\mathcal { P } _ { \Phi } ( D , \Phi , Q ) = \{ \rho \mid \rho \in \phi _ { i } \land \exists t \in P ( Q , D ) : t \in R _ { i , \rho } \}
$$
DEFINITION 4.2 (PROVENANCE SKETCH). Let $\boldsymbol { Q }$ be a query, $D$ a database, $R$ a relation accessed by $Q$ , and $\Phi$ a partition of $D$ . We call a subset $\mathcal { P }$ of $\Phi$ a provenance sketch iff $\mathcal { P } \supseteq \mathcal { P } _ { \Phi } ( D , \Phi , Q )$ . A sketch is accurate i $f \mathcal { P } = \mathcal { P } _ { \Phi } ( D , \Phi , Q )$ . The instance $D _ { \mathcal { P } }$ of $\mathcal { P }$ is defined as $\begin{array} { r } { D \varphi = \bigcup _ { \rho \in \mathcal P } D _ { \rho } } \end{array}$ . A sketch is safe $i f Q ( D \varphi ) = Q ( D )$ .
Consider the database consisting of a single relation (sales) from our running example shown in Fig. 1. According to the partition $\Phi = \{ ( \phi _ { \mathit { p r i c e } } , p r i c e ) \}$ , the accurate provenance sketch $\mathcal { P }$ for the query $Q _ { T o p }$ according to $\Phi$ consists of the set of ranges $\{ \rho _ { 3 } , \rho _ { 4 } \}$ (the two tuples in the provenance of this query highlighted in Fig. 1 belong to the fragments $f _ { 3 }$ and $f _ { 4 }$ corresponding to these ranges). The instance $D _ { \mathcal { P } }$ , i.e., the data covered by the sketch, consists of all tuples contained in fragments $f _ { 3 }$ and $f _ { 4 }$ which are: $\{ s _ { 3 } , s _ { 4 } , s _ { 5 } \}$ . This sketch is safe. We use the method from [37] to determine for an attribute $a$ and query $\boldsymbol { Q }$ whether a sketch build on any partition of $R$ on $a$ will be safe.
# 4.2 Updates, Histories, and Deltas
For the purpose of incremental maintenance we are interested in the difference between database states. Given two databases $D _ { 1 }$ and $D _ { 2 }$ we define the delta between $D _ { 1 }$ and $D _ { 2 }$ to be the symmetric difference between $D _ { 1 }$ and $D _ { 2 }$ where tuples $t$ that have to be inserted into $D _ { 1 }$ to generate $D _ { 2 }$ are tagged as $\Delta t$ and tuples that have to be deleted to derive $D _ { 2 }$ from $D _ { 1 }$ are tagged as $\textit { t }$ :
$$
\Delta ( D _ { 1 } , D _ { 2 } ) = \left\{ \textit { \textbf { f } } | \textit { t } \in D _ { 1 } - D _ { 2 } \right\} \cup \left\{ \Delta t \ | \textit { t } \in D _ { 2 } - D _ { 1 } \right\}
$$
For a given delta $\Delta D$ , we use $\Delta D \left( \Delta D \right)$ to denote $\ P \Delta t \ | \Delta t \in \Delta D \}$ $( \ P \mathbb { A } t \ | \mathbb { A } t \in \Delta D \} .$ ). We use $D \cup \Delta D$ to denote applying delta $\Delta D$ to database $D$ :
$$
D \ \backslash \ \backslash \ \Delta D = D - \left\{ \left. \Delta t \right. \ \Delta t \in \Delta D \right\} \cup \left\{ \left. \Delta t \right. \ \Delta t \in \Delta D \right\}
$$
EXAMPLE 4.1. Reconsider the insertion of tuple 𝑠8 (also shown below) into sales as shown in Ex. 1.2.
$s _ { 8 } = ( 8 , \mathrm { H P } , \mathrm { H P } \mathrm { P r o B o o k } 6 5 0 \mathrm { G 1 0 } , 1 2 9 9 , 1 )$ Let us assume that the database before (after) the insertion of this tuple is $D _ { 1 }$ $( D _ { 2 } ,$ ), then we get: $\Delta D _ { 2 } = \ P \mathbb { A } \ s _ { 8 } \|$
We use the same delta notation for sketches, e.g., for two sketch versions $\mathcal { P } _ { 1 }$ and $\mathcal { P } _ { 2 }$ , $\Delta \mathcal { P }$ is their delta if $\mathcal { P } _ { 2 } = \mathcal { P } _ { 1 } \cup \Delta \mathcal { P }$ , where $\uplus$ on sketches is defined as expected by inserting $\mathbb { A } \mathscr { P }$ and deleting $\Delta \mathcal { P }$ .
# 4.3 Sketch-Annotated Databases And Deltas
Our incremental maintenance approach utilizes relations whose tuples are annotated with sketches. We define an incremental semantics for maintaining the results of operators over such annotated relations and demonstrate that this semantics correctly maintains sketches.
DEFINITION 4.3 (SKETCH ANNOTATED RELATION). A sketch annotated relation $\mathcal { R }$ of arity $m$ for a given set of ranges $\phi$ over the domain of some attribute $a \in { \mathrm { S C H } } ( R )$ , is a bag of pairs $\langle t , \mathcal { P } \rangle$ such that 𝑡 is an 𝑚-ary tuple and ${ \mathcal { P } } \subseteq \phi$ .
We next define an operator annotate $( R , \Phi )$ that annotates each tuple with the singleton set containing the range its value in attribute $a$ belongs to. This operator will be used to generate inputs for incremental relational algebra operators over annotated relations.
DEFINITION 4.4 (ANNOTATING RELATIONS). Given a relation $R ,$ , attribute $a \in { \mathrm { S C H } } ( R )$ and ranges $\Phi = \{ \dots , ( \phi , a ) , \dots \}$ , i.e., $( \phi , a )$ is the partition for $R$ in $\Phi$ , the operator annotate returns a sketchannotated relation $\mathcal { R }$ with the same schema as $R$ :
$$
\mathbf { a t e } ( R , \Phi ) = \left\{ \langle t , \{ \rho \} \rangle \mid t \in R \land t . a \in \rho \land \rho \in \phi \right\}
$$
We define annotated deltas as deltas where each tuple is annotated using the annotate operator. Consider a delta $\Delta R$ between two versions $R _ { 1 }$ and $R _ { 2 }$ of relation $R$ . Given ranges $\phi$ for attribute $a \in { \mathrm { S C H } } ( R )$ , we define $\Delta \mathcal { R }$ as: $\Delta \mathcal { R } =$ annotate $( \Delta R , \Phi )$ . $\Delta \mathcal { R }$ contains all tuples from $R$ that differ between $R _ { 1 }$ and $R _ { 2 }$ tagged with +Δ or -Δ depending on whether they got inserted or deleted. Each tuple $t$ is annotated with the range $\rho \in \phi$ that 𝑡.𝑎 belongs to. Analog we use $\mathcal { D }$ to denote the annotated version of database $D$ and use $\Delta \mathcal { D }$ to denote the annotated version of delta database $\Delta D$ .
EXAMPLE 4.2. Continuing with Ex. 4.1, the annotated version of $\Delta D _ { 2 }$ according to $\phi _ { p r i c e }$ is $\{ \langle \Delta \ : s _ { 8 } , \{ \rho _ { 3 } \} \rangle \}$ , because $s _ { 8 }$ .𝑝𝑟𝑖𝑐𝑒 belongs to $\rho _ { 3 } = [ 1 0 0 1 , 1 5 0 0 ] \in \phi _ { p r i c e }$ .
# 4.4 Problem Definition
We are now ready to define incremental maintenance procedures $( I M s )$ that maintain provenance sketches. An IM takes as input a query $\boldsymbol { Q }$ and an annotated delta $\Delta \mathcal { D }$ for the ranges $\Phi$ of a provenance sketch $\mathcal { P }$ and produces a delta $\Delta \mathcal { P }$ for the sketch. Note that we assume that all attributes used in $\Phi$ are safe. An attribute $a$ is safe for a query $\boldsymbol { Q }$ if every sketch based on some range partition on $a$ is safe. We use the safety test from [37] to determine safe attributes. IMs are allowed to store some state $s$ , e.g., information about groups produced by an aggregation operator, to allow for more efficient maintenance. Given the current state and $\Delta \mathcal { D }$ , the IM should return a delta $\Delta \mathcal { P }$ for the sketch $\mathcal { P }$ and an updated state $s ^ { \prime }$ such that $\mathcal { P } \cup \Delta \mathcal { P }$ over-approximates an accurate sketch for the updated database.
DEFINITION 4.5 (INCREMENTAL MAINTENANCE PROCEDURE). Given a query $\boldsymbol { Q }$ , a database $D$ and a delta $\Delta D$ . Let $\mathcal { P }$ be a provenance sketch over $D$ for $\boldsymbol { Q }$ wrt. some partition $\Phi$ . An incremental maintenance procedure $\boldsymbol { \tau }$ takes as input a state $s$ , the annotated delta $\Delta \mathcal { D }$ , and returns an updated state $s ^ { \prime }$ and a provenance sketch delta $\Delta \mathcal { P }$ :
$$
\ b { \mathcal { I } } ( \ b { Q } , \ b { \Phi } , \ b { S } , \Delta \ b { \mathcal { D } } ) = ( \Delta \ b { \mathcal { P } } , \ b { S } ^ { \prime } )
$$
Let ${ \mathcal { P } } [ Q , \Phi , D ]$ denote an accurate sketch for $\boldsymbol { Q }$ over $D$ wrt. $\Phi$ . Niu et al. [37] demonstrated that any over-approximation of a safe sketch is also safe, i.e., evaluating the query over the over-approximated sketch yields the same result as evaluating the query over the full database. Thus, for a $\ Ḋ \mathrm { Ḋ } \tau Ḍ Ḍ$ to be correct, the following condition has to hold: for every sketch $\mathcal { P }$ that is valid for $D$ and delta $\Delta D$ , $\boldsymbol { \tau }$ , if provided with the state $s$ for $D$ and the annotated version $\Delta \mathcal { D }$ of $\Delta D$ , returns an over-approximation of the accurate sketch ${ \mathcal { P } } [ Q , \Phi , D \uplus \Delta D ]$ :
$$
\mathcal { P } [ \boldsymbol { Q } , \Phi , D \circledast \Delta D ] \subseteq \mathcal { P } \Cup \mathscr { I } ( \boldsymbol { Q } , \Phi , S , \Delta \mathcal { D } )
$$
# 5 Incremental Annotated Semantics
We now introduce an IM that maintains sketches using annotated and incremental semantics for relational algebra operators. Each operator takes as input an annotated delta produced by its inputs (or passed to the IM in case of the table access operator), updates its internal state, and outputs an annotated delta. Together, the states of all such incremental operators in a query make up the state of our IM. For an operator $O$ (or query $\boldsymbol { Q }$ ) we use $\mathcal { I } ( O , \Phi , \Delta \mathcal { D } , S ) \left( \mathcal { I } ( Q , \Phi , \Delta \mathcal { D } , S ) \right)$ to denote the result of evaluating $O$ $( Q )$ over the annotated delta $\Delta \mathcal { D }$ using the state $s$ . We will often drop $s$ and $\Phi$ . Our IM evaluates a query $\boldsymbol { Q }$ expressed in relational algebra producing an updated state and outputting a delta where each row is annotated with a partial sketch delta. These partial sketch deltas are then combined into a final result $\Delta \mathcal { P }$ .
EXAMPLE 5.1. Fig. 5 shows annotated tables $\mathcal { R }$ and $\mathcal { S }$ , ranges $\phi _ { a }$ and $\phi _ { c }$ for attribute 𝑎 (table $R$ ) and 𝑐 (table 𝑆), the delta Δ𝑅 and the sketches before the delta has been applied: $\mathcal { P } _ { R }$ and $\mathcal { P } _ { S }$ . Consider the following query over 𝑅 and 𝑆:
SELECT a, sum(c) as sc
FROM (SELECT a, b FROM R WHERE a $> 3$ ) JOIN S on $( { \mathrm { b } } \ = \ { \mathrm { d } } )$ )
GROUP BY a HAVING $\mathtt { S U M } \left( \mathtt { C } \right) \ > \ \mathtt { S }$
Fig. 5 (right table) shows each operator’s output. We will further discuss these in the following when introducing the incremental semantics for individual operators. In this example, a new tuple is inserted into 𝑅. Leading to a sketch deltas $\Delta \mathcal { P } _ { R } = \Delta \ \{ f _ { 1 } \}$ and $\Delta \mathcal { P } _ { S } = \Delta \ \{ g _ { 2 } \}$ . The tuple inserted into 𝑅 results in the generation of a new group for the aggregation subquery which passes the HAVING condition and in turn causes the two fragments from the tuple belonging to this group to be added to the sketches.
# 5.1 Merging Sketch Deltas
Each incremental algebra operator returns an annotated relation where each tuple is associated with a sketch that is sufficient to produce it. To generate the sketch for a query $\boldsymbol { Q }$ we evaluate the query under our incremental annotated semantics to produce the tuples of $Q ( D )$ each annotated with a partial sketch. We then combine these partial sketches into a single sketch for $\boldsymbol { Q }$ . We now discuss the operator $\mu$ that implements this final merging step. To determine whether a change to the annotated query result will result in a change to the current sketch, this operator maintains as state a map $s : \Phi $ $\mathbb { N }$ that records for each range $\boldsymbol { \rho } \in \Phi$ the number of result tuples for which $\rho$ is in their sketch. If the counter for a fragment $\rho$ reaches 0 (due to the deletion of tuples), then the fragment needs to be removed from the sketch. If the counter for a fragment $\rho$ changes from 0 to a non-zero value, then the fragment now belongs to the sketch for the query (we have to add a delta inserting this fragment to the sketch).
$$
\bar { J } ( \mu ( Q ) , \Delta \mathcal { D } , S ) = ( \Delta \mathcal { P } , S ^ { \prime } )
$$
We first explain how $s ^ { \prime }$ , the updated state for the operator, is computed and then explain how to compute $\Delta \mathcal { P }$ using $s$ . We define $s ^ { \prime }$ pointwise for a fragment $\rho$ . Any newly inserted (deleted) tuple whose sketch includes $\rho$ increases (decreases) the count for $\rho$ . That is the total cardinality of such inserted tuples (of bag + $\mathcal { D }$ and -Δ $\mathcal { D }$ , respectively) has to be added (subtracted) from the current count for $\rho$ . Depending on the change of the count for $\rho$ between $s$ and $s ^ { \prime }$ , the operator $\mu$ has to output a delta for $\mathcal { P }$ . Specifically, if $S [ \rho ] = 0 \neq S ^ { \prime } [ \rho ]$ then the fragment has to be inserted into the sketch and if $S [ \rho ] \neq 0 = S ^ { \prime } [ \rho ]$ then the fragment was part of the sketch, but no longer contributes and needs to be removed.
$$
\begin{array} { r l } & { S ^ { \prime } [ \rho ] = S [ \rho ] + | \Delta \mathcal { D } _ { \rho } | - | \Delta \mathcal { D } _ { \rho } | } \\ & { \mathbb { A } \mathcal { D } _ { \rho } = \{ \mathbb { A } \left. t , \mathcal { P } \right. ^ { n } | \ \mathbb { A } \left. t , \mathcal { P } \right. ^ { n } \in I ( Q , \Delta \mathcal { D } ) \wedge \rho \in \mathcal { P } \} } \\ & { \mathbb { A } \mathcal { D } _ { \rho } = \{ \mathbb { A } \left. t , \mathcal { P } \right. ^ { n } | \ \mathbb { A } \left. t , \mathcal { P } \right. ^ { n } \in I ( Q , \Delta \mathcal { D } ) \wedge \rho \in \mathcal { P } \} } \\ & { \quad \Delta \mathcal { P } = \bigcup _ { \rho : S [ \rho ] = 0 \wedge S ^ { \prime } [ \rho ] \neq 0 } \underset { \rho : S [ \rho ] \neq 0 \wedge S ^ { \prime } [ \rho ] = 0 } { \bigcup } \ \{ \Delta \rho \} } \end{array}
$$
EXAMPLE 5.2. Reconsider our running example from $E x .$ . 1.1 that partitions based on $\phi _ { p r i c e }$ . Assume that there are two result tuples $t _ { 1 }$ and $t _ { 2 }$ of a query $\boldsymbol { Q }$ that have $\rho _ { 2 } = [ 6 0 1 , 1 0 0 0 ]$ in their sketch and one result tuple $t _ { 3 }$ that has $\rho _ { 1 }$ and $\rho _ { 2 }$ in its sketch. Then the current sketch for the query is $\mathcal { P } = \{ \rho _ { 1 } , \rho _ { 2 } \}$ and the state of $\mu$ is as shown below. If we are processing a delta -Δ $\langle t _ { 3 } , \{ \rho _ { 1 } , \rho _ { 2 } \} \rangle$ deleting tuple 𝑡3, the updated counts $s ^ { \prime }$ are:
$$
S [ \rho _ { 1 } ] = 1 \quad S [ \rho _ { 2 } ] = 3 \qquad S ^ { \prime } [ \rho _ { 1 } ] = 0 \quad S ^ { \prime } [ \rho _ { 2 } ] = 2
$$
As there is no longer any justification for $\rho _ { 1 }$ to belong to the sketch (its count changed to 0), $\mu$ returns a delta: $\{ \begin{array} { r l } \end{array} \rho _ { 1 } \}$
Consider the merge operator $\mu$ in Ex. 5.1. The state data before maintenance contains ranges $f _ { 2 }$ and $g _ { 1 }$ . A single tuple annotated with $f _ { 1 }$ and $g _ { 2 }$ is added to the input of this operator. Both ranges were not present in $s$ and, thus, in addition adding them to $s ^ { \prime }$ the merge operator returns a sketch delta +Δ $\{ f _ { 1 } , g _ { 2 } \}$ .
# 5.2 Incremental Relational Algebra
5.2.1 Table Access Operator. The incremental version of the table access operator $R$ returns the annotated delta $\Delta \mathcal { R }$ for $R$ passed as part of $\Delta \mathcal { D }$ to the IM unmodified. This operator has no state.
$$
J ( R , \Delta \mathcal { D } ) = \Delta \mathcal { R }
$$
Fig. 5 (top) shows the result of annotating the relation $\Delta R$ from Ex. 5.1.
5.2.2 Projection. The projection operator does not maintain any state as each output tuple is produced independently from an input tuple if we consider multiple duplicates of the same tuple as separate tuples. For each annotated delta tuple $\Delta \langle t , { \mathcal { P } } \rangle$ , we project $t$ on the project expressions $A$ and propagate $\mathcal { P }$ unmodified as $t . A$ in the result depends on the same input tuples as $t$ .
$$
\begin{array} { r } { \boldsymbol { { \mathcal { I } } } ( \Pi _ { A } ( Q ) , \Delta { \mathcal { D } } ) = \| \Delta \langle t . A , \mathcal { P } \rangle ^ { n } \mid \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \boldsymbol { \mathcal { I } } ( Q , \Delta \mathcal { D } ) \| } \end{array}
$$
Table, Ranges and Delta
Output for each incremental operator
Figure 5: Using our IM to evaluate a query under incremental annotated semantics.
5.2.3 Selection. The incremental selection operator is stateless and the sketch of an input tuple is sufficient for producing the same tuple in the output of selection. Thus, selection returns all input delta tuples that fulfill the selection condition unmodified and filters out all other delta tuples. In our running example (Fig. 5), the single input delta tuple fulfills the condition of selection $\sigma _ { a > 3 }$ .
$$
\begin{array} { r } { \boldsymbol { \mathcal { I } } ( \sigma _ { \theta } ( Q ) , \Delta \mathcal { D } ) = \ S \{ \Delta \langle t , \mathcal { P } \rangle ^ { n } \mid \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \boldsymbol { \mathcal { I } } ( Q , \Delta \mathcal { D } ) \land t \mid = \theta \} } \end{array}
$$
5.2.4 Cross Product. The incremental version of a cross product (and join) $Q _ { 1 } \times Q _ { 2 }$ combines three sets of deltas: (i) joining the delta of $Q _ { 1 }$ with the current annotated state of $Q _ { 2 }$ $\varrho _ { 2 } \left( Q _ { 2 } ( \mathcal { D } ) \right)$ , (ii) joining the delta of the $Q _ { 2 }$ with $Q _ { 1 } ( \mathcal { D } )$ , (iii) joining the deltas of $Q _ { 1 }$ and $Q _ { 2 }$ . For (iii) there are four possible cases depending on which of the two delta tuples being joined is an insertion or a deletion. For two inserted tuples that join, the joined tuple $s \circ t$ is inserted into the result of the cross product. For two deleted tuples, we also have to insert the joined tuple $s \circ t$ into the result. For a deleted tuple joining an inserted tuple, we should delete the tuple $s \circ t$ . The non-annotated version of these rules have been discussed in [14, 20, 26, 31]. We use $\Delta Q _ { i }$ to denote $\mathcal { I } ( Q _ { i } , \Delta \mathcal { D } )$ for $i \in \{ 1 , 2 \}$ below.
$$
\begin{array} { r l } & { I ( Q _ { 1 } \times Q _ { 2 } , \Delta \mathcal { D } ) = } \\ & { \qquad \{ \mathbb { A } \langle s \circ t , \mathcal { P } _ { 1 } \circ \mathcal { P } _ { 2 } \rangle ^ { n \cdot m } \mid ( \mathbb { A } \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \mathbb { A } \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in \Delta Q _ { 2 } ) } \\ & { \qquad \quad \vee \left( \mathbb { A } \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \mathbb { A } \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in \Delta Q _ { 2 } \right) } \\ & { \qquad \quad \vee \left( \mathbb { A } \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in Q _ { 2 } ( \mathcal { D } ) \right) } \\ & { \qquad \quad \vee \left( \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in Q _ { 1 } ( \mathcal { D } ) \wedge \mathbb { A } \left. t , \mathcal { P } _ { 2 } \right. ^ { m } \in \Delta Q _ { 2 } \right) \} } \end{array}
$$
∪
$$
\begin{array} { r l } { \{ } & { \langle s \circ t , \mathcal { P } _ { 1 } \mid \mathfrak { s } \mathcal { P } _ { 2 } \rangle ^ { n \cdot m } \mid ( \quad \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \Delta \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in \Delta Q _ { 2 } ) } \\ & { \qquad \vee ( \Delta \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in \Delta Q _ { 2 } ) } \\ & { \qquad \vee ( \quad \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in \Delta Q _ { 1 } \wedge \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in Q _ { 2 } ( \mathcal { D } ) ) } \\ & { \qquad \vee ( \langle s , \mathcal { P } _ { 1 } \rangle ^ { n } \in Q _ { 1 } ( \mathcal { D } ) \wedge \quad \langle t , \mathcal { P } _ { 2 } \rangle ^ { m } \in \Delta Q _ { 2 } ) \} } \end{array}
$$
Continuing with Ex. 5.1, as $\Delta \mathcal { S } = \emptyset$ and $\Delta \mathcal { R } = \{ \ A \ \langle ( 5 , 8 ) , \{ f _ { 1 } \} \rangle \}$ only contains insertions, only $\Delta \mathcal { R } \bowtie _ { b = d } \mathcal { S }$ returns a non-empty result (the third case above). As $( 5 , 8 )$ only joins with tuple $( 7 , 8 )$ , a single delta tuple +Δ $\langle ( 5 , 8 , 7 , 8 ) , \{ f _ { 1 } , g _ { 2 } \} \rangle$ is returned.
5.2.5 Aggregation: Sum, Count, and Average. For the aggregation operator, we need to maintain the current aggregation result for each individual group and record the contribution of fragments from a provenance sketch towards the aggregation result to be able to efficiently maintain the operator’s result. Consider an aggregation operator $\gamma _ { \mathrm { f } ( a ) ; G } ( R )$ where f is an aggregation function and $G$ are the group by attributes $G = \varnothing$ for aggregation without group-by). Given an version $R$ of the input of the aggregation operator, we use ${ \mathcal { G } } = \{ t . G | t \in R \}$ to denote the set of distinct group-by values.
The state data needed for aggregation depends on what aggregation function we have to maintain. However, for all aggregation functions the state maintained for aggregation is a map $s$ from groups to a per-group state storing aggregation function results for this group, the sketch for the group, and a map $\mathcal { F } _ { g }$ recording for each range $\rho$ of $\Phi$ the number of input tuples belonging to the group with $\rho$ in their provenance sketch. Intuitively, $\mathcal { F } _ { g }$ is used in a similar fashion as for operator $\mu$ to determine when a range has to be added to or removed from a sketch for the group. We will discuss aggregation functions sum, count, and avg that share the same state.
Sum. Consider an aggregation $\gamma _ { \mathbf { s u m } ( a ) ; G } ( Q )$ . To be able to incrementally maintain the aggregation result and provenance sketch for a group $g$ , we store the following state:
$$
S [ g ] = \left( \operatorname { S U M } , \operatorname { C N T } , \mathcal { P } , \mathcal { F } _ { g } \right)
$$
SUM and CNT store the sum and count for the group, $\mathcal { P }$ stores the group’s sketch, and $\mathcal { F } _ { g } : \Phi \mathbb { N }$ introduced above tracks for each range $\boldsymbol { \rho } \in \boldsymbol { \Phi }$ how many input tuples from $Q ( D )$ belonging to the group have $\rho$ in their sketch. State $s$ is initialized to $\varnothing$ .
Incremental Maintenance. The operator processes an annotated delta as explained in the following. Consider an annotated delta $\Delta \mathcal { D }$ . Let $\Delta Q$ denote $\varGamma ( Q , \Delta \mathcal { D } )$ , i.e., the delta produced by incremental evaluation for $\boldsymbol { Q }$ using $\Delta \mathcal { D }$ . We use $\mathcal { G } _ { \Delta Q }$ to denote the set of groups present in $\Delta Q$ and $\Delta Q _ { g }$ to denote the subset of $\Delta Q$ including all annotated delta tuples $\Delta \langle t , { \mathcal { P } } \rangle$ where $t . G \ = \ g$ . We now explain how to produce the output for one such group. The result of the incremental aggregation operators is then just the union of these results. We first discuss the case where the group already exists and still exists after applying the input delta.
Updating an existing group. Assume the current and updated state for $g$ as shown below:
$$
\begin{array} { r l } { S [ g ] = ( \mathrm { S U M } , \mathrm { C N T } , \mathcal { P } , \mathcal { F } _ { g } ) } & { { } \ : S ^ { \prime } [ g ] = ( \mathrm { S U M } ^ { \prime } , \mathrm { C N T } ^ { \prime } , \mathcal { P } ^ { \prime } , \mathcal { F } _ { g } ^ { \prime } ) } \end{array}
$$
The updated sum is produced by adding $t . a \cdot n$ for each inserted input tuple with multiplicity $n$ : +Δ $\langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g }$ and subtracting this amount for each deleted tuple: -Δ $\langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g }$ . For instance, if the delta contains the insertion of 3 duplicates of a tuple with $a$ value 5, then the SUM will be increased by $3 \cdot 5$ .
$$
\mathrm { S U M } ^ { \prime } = \mathrm { S U M } + \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } } t . a \cdot n - \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } } t . a \cdot n
$$
The update for CNT is computed in the same fashion using $n$ instead of $t . a \cdot n$ . The updated count in $\mathcal { F } _ { g } ^ { \prime }$ is computed for each $\boldsymbol { \rho } \in \Phi$ as:
$$
\mathcal { F } _ { g } ^ { \prime } [ \rho ] = \mathcal { F } _ { g } [ \rho ] + \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge \rho \in \mathcal { P } } n - \sum _ { \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge \rho \in \mathcal { P } } n
$$
Based on $\mathcal { F } _ { g } ^ { \prime }$ we then determine the updated sketch for the group:
$$
\mathcal { P } ^ { \prime } = \{ \rho \mid \mathcal { F } _ { g } ^ { \prime } [ \rho ] > 0 \}
$$
We then output a pair of annotated delta tuples that deletes the previous result for the group and inserts the updated result:
$$
\begin{array} { r } { \Delta \left. g \circ ( \mathrm { s U M } ) , \mathcal { P } \right. \qquad \Delta \left. g \circ ( \mathrm { s U M } ^ { \prime } ) , \mathcal { P } ^ { \prime } \right. } \end{array}
$$
Creating and Deleting Groups. For groups $g$ that are not in $s$ , we initialize the state for $g$ as shown below: $S ^ { \prime } [ g ] = ( 0 , 0 , \emptyset , \emptyset )$ and only output +Δ $\langle g \circ ( \mathrm { { S U M } ^ { \prime } } ) , \mathcal { P } ^ { \prime } \rangle$ . An existing group gets deleted if $\mathbf { C } \mathbf { N } \mathbf { T } \neq 0$ and $\mathbf { C N T ^ { \prime } } = 0$ . In this case we only output -Δ $\langle g \circ ( \mathsf { S U M } ) , \mathcal { P } \rangle$ .
Average and Count. For average we maintain the same state as for sum. The only difference is that the updated average is computed as $\mathrm { \frac { \Delta S U M ^ { \prime } } { C N T ^ { \prime } } }$ . For count we only maintain the count and output $\mathbf { C N T ^ { \prime } }$ .
Continuing with Ex. 5.1, the output of the join (single delta tuple with group 5) is fed into the aggregation operator using sum. As no such group is in $s$ we create new entry $\boldsymbol { \mathscr { s } } [ 5 ]$ . After maintaining the state, the output delta produced for this group is $\| \Delta \left. ( 5 , 7 ) , \{ f _ { 1 } , g _ { 2 } \} \right. \}$ . This result satisfies HAVING condition (selection $\sigma _ { \mathrm { s u m } ( c ) > 5 }$ ) and is passed on to the merge operator.
5.2.6 Aggregation: minimum and maximum. The aggregation functions min and max share the same state. To be able to efficiently determine the minimum (maximum) value of the aggregation function min (max), we use a data structure like balanced search trees that can provide efficiently access to the value in sort order.
Min. Consider an aggregation $\gamma _ { \mathbf { m i n } ( a ) ; G } ( Q )$ . To be able to maintain the aggregation result and provenance sketch incrementally for a group $g$ , we store the following state:
$$
S [ g ] = ( \mathrm { C N T } , \mathcal { P } , \mathcal { F } _ { g } )
$$
$\mathcal { P }$ and $\mathcal { F } _ { g }$ are the same as described in aggregation function sum. $\mathcal { P }$ stores the groups’ sketch and $\mathcal { F } _ { g }$ stores for each range how many tuples in this group have this range in their sketch. CNT is an balanced search tree that record all values of aggregate attribute in sort order, and for each node in CNT, we store the multiplicity of this aggregate value.
Incremental Maintenance. Consider an aggregation $\gamma _ { \mathbf { m i n } ( a ) ; G } ( Q )$ and an annotated delta $\Delta \mathcal { D }$ . Recall that $\Delta Q$ denotes the $\mathcal { I } ( Q , \Delta \mathcal { D } )$ and $\Delta Q _ { g }$ to denote the subset of $\Delta Q$ including all annotated delta tuples $\Delta \langle t , { \mathcal { P } } \rangle$ where $t . G = g$ . We now discuss how to produce the output for one such group and how the incremental maintenance works for the case where the group already exists and still exits after maintenance.
Updating an existing group. Assume the current and updated state for $g$ as shown below:
$$
{ \cal S } [ g ] = ( \mathrm { C N T } , { \mathcal P } , { \mathcal F } _ { g } ) \qquad { \cal S } ^ { \prime } [ g ] = ( \mathrm { C N T } ^ { \prime } , { \mathcal P } ^ { \prime } , { \mathcal F } _ { g } ^ { \prime } )
$$
The CNT is updated as follow: for each annotated tuple in $\Delta Q _ { g }$ , the multiplicity of aggregate attribute value $( a )$ will be increased by 1 if it is an inserted annotated tuple $( \Delta \left. t , \mathcal { P } \right. )$ and $t . a = a$ . If this value is new to the tree, we just initialize for this value with a multiplicity 1. Otherwise, the multiplicity will be decreased by one if the annotated tuple is a deletion and $t . a = a$ . We remove a node from the tree if the multiplicity becomes 0.
$$
\mathrm { C N T } [ a ] ^ { \prime } = \mathrm { C N T } [ a ] & + \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge t . a = a } \quad - \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge t . a = a } n
$$
Here, the use of $\operatorname { C N T } [ a ]$ and $\mathrm { C N T } [ a ] ^ { \prime }$ do not imply CNT is a map but they indicate the multiplicity of $a$ in CNT.
The updated count in $\mathcal { F } _ { g } ^ { \prime }$ is computed for each $\boldsymbol { \rho } \in \Phi$ as:
$$
\mathcal { F } _ { g } ^ { \prime } [ \rho ] = \mathcal { F } _ { g } [ \rho ] + \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge \rho \in \mathcal { P } } n - \sum _ { \Delta \langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q _ { g } \wedge \rho \in \mathcal { P } } n
$$
Based on $\mathcal { F } _ { g } ^ { \prime }$ we then determine the updated sketch for the group:
$$
\mathcal { P } ^ { \prime } = \{ \rho \mid \mathcal { F } _ { g } ^ { \prime } [ \rho ] > 0 \}
$$
We then output a pair of annotated delta tuples that deletes the previous result for the group and inserts the updated result:
$$
\begin{array} { r } { \langle g \circ ( m i n ( \mathrm { C N T } ) ) , \mathcal P \rangle \qquad \& \langle g \circ ( m i n ( \mathrm { C N T } ^ { \prime } ) ) , \mathcal P ^ { \prime } \rangle } \end{array}
$$
For the group $g$ , we need to output the minimum value in the balanced search tree.
Creating and Deleting Groups. For groups $g$ that are not in $s$ , we initialize the state for $g$ as shown below: $S ^ { \prime } [ g ] ~ = ~ ( \emptyset , \emptyset , \emptyset )$ and only output +Δ $\langle g \circ ( m i n ( \mathrm { C N T ^ { \prime } } ) ) , \mathcal { P } ^ { \prime } \rangle$ . An existing group gets deleted if size of the tree becomes zero from a non-zero value such that: $\mathrm { | C N T | } \neq 0 = \mathrm { | C N T ^ { \prime } | }$ . In this case we only output $\mathsf { A }$ $\langle g \circ ( \operatorname* { m i n } \mathbf { C } \mathbf { N } \mathrm { T } ) , \mathcal { P } \rangle$ .
Max. For max, we maintain the same state as for min. The only difference is that we output the maximum value from tree CNT and $\mathbf { C N T ^ { \prime } }$ .
5.2.7 Top- $k .$ The top- $\mathbf { \nabla } \cdot \mathbf { k }$ operator $\tau _ { k , O }$ returns the first $k$ tuples sorted on $O$ . As we are dealing with bag semantics, the top-k tuples may contain a tuple with multiplicity larger than 1. As before, we use $\Delta Q$ to denote $\smash { \mathcal { I } ( Q , \Delta \mathcal { D } ) }$ .
State Data. To be able to efficiently determine updates to the top-k tuples with sketch annotations we maintain a nested map. The outer map $s$ is order and should be implemented using a data structure like balanced search trees (BSTs) that provide efficient access to entries in sort order on $O$ , maps order-by values $o$ to another map CNT which stores multiplicities for each annotated tuple $\langle t , \mathcal { P } \rangle$ for which $t . O = o$ .
$$
\begin{array} { r } { S [ o ] = \left( \mathrm { C N T } \right) } \end{array}
$$
and for any $\langle t , \mathcal { P } \rangle$ with $t . O = o$ with $\langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q$ we store
$$
\mathbf { C N T } [ \left. t , \mathcal { P } \right. ] = n
$$
This data structure allows efficient updates to the multiplicity of any annotated tuple based on the input delta as shown below. Consider such a tuple $\langle t , \mathcal { P } \rangle$ with $t . O = o$ with +Δ $\langle t , \mathcal { P } \rangle ^ { n } \in \Delta Q$ and -Δ $\langle t , \mathcal { P } \rangle ^ { m } \in \Delta Q$ .
$$
S ^ { \prime } [ o ] \left[ \langle t , \mathcal { P } \rangle \right] = S [ o ] \left[ \langle t , \mathcal { P } \rangle \right] + n - m
$$
Computing Deltas. As $k$ is typically relatively small, we select a simple approach for computing deltas by deleting the previous top- $\mathbf { \nabla } \cdot \mathbf { k }$ and then inserting the updated top- $\mathbf { \nabla } \cdot \mathbf { k }$ . Should the need arise to handle large $k$ , we can use a balanced search tree and mark nodes in the tree as modified when updating the multiplicity of annotated tuples based on the input delta and use data structures which enable efficient positional access under updates, e.g., order-statistic trees [15]. Our simpler technique just fetches the first tuples in sort order from $s$ and $s ^ { \prime }$ by accessing the keys stored in the outer map $s$ in sort order. For each 𝑜 we then iterate through the tuples in $\boldsymbol { \mathscr { s } } [ o ]$ (in an arbitrary, but deterministic order since they are incomparable) keeping track of the total multiplicity $m$ of tuples we have processed so far. As long as $m \leq k$ we output the current tuple and proceed to the next tuple (or order-by key once we have processed all tuples in ${ \cal { S } } [ o ] )$ . Once $m \geq k$ , we terminate. If the last tuple’s multiplicity exceeds the threshold we output this tuple with the remaining multiplicity. Applied to $s$ this approach produces the tuples to delete and applied to $s ^ { \prime }$ it produces the tuples to insert:
$$
\tau _ { k , O } ( S ) \qquad \& \tau _ { k , O } ( S ^ { \prime } )
$$
# 5.3 Complexity Analysis
We now analyze the runtime complexity of operators. Let $n$ denote the input delta tuple size and $\boldsymbol { p }$ denote the number of ranges of the partition on which the sketch is build on. For table access, selection, and projection, we need to iterate over these $n$ annotated tuples to generate the output. As for these operations we do not modify the sketches of tuples, the complexity is $O ( n )$ . For aggregation, for each aggregation function we maintain a hashmap that tracks the current aggregation result for each group and a count for each fragment that occurs in a sketch for each tuple in the group. For each input delta tuple, we can update this information in $O ( 1 )$ if we assume that the number of aggregation functions used in an aggregation operator is constant. Thus, the overall runtime for aggregation is $O ( n \cdot p )$ . For join operator, we store the bloom filter to pre-filter the input data. Suppose the right input table has $m$ tuples. Building such a filter incurs a one-time cost of $O ( m )$ for scanning the table once. Consider the part where we join a delta of size $n$ for the left input with the right table producing $o$ output tuples. The cost this join depends on what join algorithm is used ranging from $O ( n + m + o )$ for a hash join to $O ( n \cdot m + o )$ for a nested loop join (in both cases assuming the worst case where no tuples are filtered using the bloom filter). For the top- $\mathbf { \nabla } \cdot \mathbf { k }$ operator, we assume there are 𝑙 nodes stored in the balanced search tree. Building this tree will cost $O ( l \cdot \log l )$ (only built once). An insertion, deletion, or lookup will take $O ( \log l )$ time. Thus, the runtime complexity of the top-k operator is $O ( n \cdot \log l )$ to complete the top- $\mathbf { \nabla } \cdot \mathbf { k }$ operator. Regarding space complexity, the selection and projection only require constant space. For aggregation, the space is linear in the number of groups and in $p$ . For join, the bloom filter’s size is linear in $m$ , but for a small constant factor. For top- $\mathbf { \nabla } \cdot \mathbf { k }$ operators,
we store $l \geq k$ entries in the search tree, each requiring $O ( p )$ space.
Thus, the overall space complexity for this operator is $O ( l \cdot p )$ .
# 6 Correctness Proof
We are now ready to state the main result of this paper, i.e., the incremental operator semantics we have defined is an incremental maintenance procedure. That is, it outputs valid sketch deltas that applied to the safe sketch for the database $D$ before the update yield an over-approximation of an accurate sketch for the database $D \cup \Delta D$ .
THEOREM 6.1 (CORRECTNESS). $\boldsymbol { \mathcal { T } }$ as defined in Sec. 4.4 is an incremental maintenance procedure such that it takes as input a state $s$ , the annotated delta $\Delta \mathcal { D }$ , the ranges $\Phi$ , a query $\boldsymbol { Q }$ and returns an updated state $s ^ { \prime }$ and a provenance sketch delta $\Delta \mathcal { P }$ : $\bar { J } ( Q , \Phi , S , \Delta \mathcal { D } ) = ( \Delta \mathcal { P } , S ^ { \prime } )$ . For any query $Q$ , sketch $\mathcal { P }$ that is valid for $D$ , and state $s$ corresponding to $D$ we have:
$$
\mathcal { P } [ \boldsymbol { Q } , \Phi , D \circledast \Delta D ] \subseteq \mathcal { P } \Cup \mathscr { I } ( \boldsymbol { Q } , \Phi , S , \Delta \mathcal { D } )
$$
In this section, we will demonstrate the correctness of Theorem 6.1. Specifically, we introduce two auxiliary notions that two aspects: 1. Tuple correctness (tuple correctness): the incremental maintenance procedure can always generate the correct bag of tuples for the operators it maintains. 2. Fragment correctness (fragment correctness): the maintenance procedure can output the correct delta sketches as well for the operators it maintains. Before we present the proof of the theorem, we will establish several lemmas that used in the proof, and propose the criteria of tuple correctness and fragment correctness.
# 6.1 Tuple Correctness, Fragment Correctness, and Auxiliary Results
In this section, we will introduce two functions: tuple extract $\mathbb { T } ( \cdot )$ and fragment extract $\mathbb { F } ( \cdot )$ where the function $\mathbb { T } ( \cdot )$ function specifies the procedure for handing tuples from annotated relations (database) and the function $\mathbb { F } ( \cdot )$ specifies that for handing fragments from annotated relations (database). To demonstrate the tuple correctness and fragment correctness, we will introduce a series of auxiliary lemmas which present properties of the two function $\mathbb { T } ( \cdot )$ and $\mathbb { F } ( \cdot )$ and are used during the proof for operators when present the correctness of tuple and fragment. Then, we will define the tuple correctness and fragment correctness as tools for the proof of Theorem 6.1. Next, we introduce a lemma that for each operator, the Theorem 6.1 holds if we can demonstrate both tuple correctness and fragment correctness hold.
We denote $Q ^ { n }$ to be a query having at most $n$ operators and $\mathbb { Q } ^ { i }$ to be the class of all queries with $i$ operators such $Q ^ { i } \in \mathbb { Q } ^ { i }$ .
Given a database $D = \left\{ { t } _ { 1 } , \ldots , { t } _ { n } \right\}$ and the annotated database $\mathcal { D } = \left\{ \langle t _ { 1 } , \mathcal { P } _ { 1 } \rangle , \ldots , \langle t _ { n } , \mathcal { P } _ { n } \rangle \right\}$ , the results of running query $\boldsymbol { Q }$ over the database $D$ and running query over the annotated database are:
$$
\begin{array} { r l } { Q ( D ) = \{ \{ t _ { o _ { 1 } } , \dots , t _ { o _ { m } } \} } & { { } Q ( \mathcal { D } ) = \{ \langle t _ { o _ { 1 } } , \mathcal { P } _ { o _ { 1 } } \rangle , \dots , \langle t _ { o _ { m } } , \mathcal { P } _ { o _ { m } } \rangle \} } \end{array}
$$
Given the delta database $\Delta D = \Delta D \cup \textit { \textbf { D } }$ where $\mathsf { A } D = \{ { t } _ { i _ { 1 } } , \ldots , { t } _ { i _ { k } } \|$ and $D = \{ t _ { d _ { 1 } } , \dots , t _ { d _ { x } } \}$ . Let $D ^ { \prime }$ to be the updated database such that: $D ^ { \prime } = D \ \odot \Delta D$ . Suppose the results of running query $\boldsymbol { Q }$ over the database and annotated database after updating are:
$$
\begin{array} { r l } { Q ( D ^ { \prime } ) = \{ \{ t _ { o _ { 1 } } , \dots , t _ { o _ { l } } \} } & { { } Q ( \mathcal { D } ^ { \prime } ) = \{ \{ t _ { o _ { 1 } } , \mathcal { P } _ { o _ { 1 } } \} , \dots , \langle t _ { o _ { l } } , \mathcal { P } _ { o _ { l } } \rangle \} } \end{array}
$$
We define $\mathbb { T } ( \cdot )$ (extract tuples), a function that takes as input a bag of annotated tuples and returns all the tuples from the bag such that:
$$
\mathbb { T } ( \{ \langle t _ { 1 } , \mathcal { P } _ { 1 } \rangle , . . . , \langle t _ { y } , \mathcal { P } _ { y } \rangle \} ) = \{ \{ t _ { 1 } , . . . , t _ { y } \} \}
$$
LEMMA 6.1. Let $\mathbb { T } ( \cdot )$ be the extract tuples function, $\mathcal { D } _ { 1 }$ and $\mathcal { D } _ { 2 }$ be two annotated databases with the same schema. The following holds:
$$
\mathbb { T } ( \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } ) = \mathbb { T } ( \mathcal { D } _ { 1 } ) \cup \mathbb { T } ( \mathcal { D } _ { 2 } )
$$
PROOF. Suppose $\mathcal { D } _ { 1 } = \{ \langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle , \ldots , \langle t _ { m } , \mathcal { P } _ { t _ { m } } \rangle \}$ and $\mathcal { D } _ { 2 } ~ = ~ \{ \langle s _ { 1 } , \mathcal { P } _ { s _ { n } } \rangle , . . . , \langle s _ { n } , \mathcal { P } _ { s _ { n } } \rangle \}$ . Then $\mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 }$ , $\mathbb { T } ( \mathcal { D } _ { 1 } )$ and $\mathbb { T } ( \mathcal { D } _ { 2 } )$ are:
$$
\begin{array} { r l } & { \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } = \{ \vert \langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle , \dotsc , \langle t _ { n } , \mathcal { P } _ { t _ { n } } \rangle , \langle s _ { 1 } , \mathcal { P } _ { s _ { 1 } } \rangle , \dotsc , \langle s _ { n } , \mathcal { P } _ { s _ { n } } \rangle \vert \} } \\ & { \quad \mathbb { T } ( \mathcal { D } _ { 1 } ) = \{ t _ { 1 } , \dotsc , t _ { m } \} \quad \quad \mathbb { T } ( \mathcal { D } _ { 2 } ) = \{ \vert s _ { 2 } , \dotsc , s _ { n } \vert \} } \end{array}
$$
We can get that $\mathbb { T } ( \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } )$ is:
$$
\mathbb { T } ( \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } ) = \{ { t } _ { 1 } , \dotsc , { t } _ { m } , s _ { 1 } , \dotsc , s _ { n } \}
$$
and $\mathbb { T } ( \mathcal { D } _ { 1 } ) \cup \mathbb { T } ( \mathcal { D } _ { 2 } )$ is:
$$
\mathbb { T } ( \mathcal { D } _ { 1 } ) \cup ^ { \prime } \mathbb { T } ( \mathcal { D } _ { 2 } ) = \{ t _ { 1 } , . . . , t _ { m } \cup \{ t _ { m } , s _ { 1 } , . . . , s _ { n } \} = \{ t _ { 1 } , . . . , t _ { m } , s _ { 1 } , . . . , s _ { n } \} \}
$$
Therefore, $\mathbb { T } ( \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } ) = \mathbb { T } ( \mathcal { D } _ { 1 } ) \cup \mathbb { T } ( \mathcal { D } _ { 2 } )$
Analog we can know that $\mathbb { T } ( \Delta \mathcal { D } ) \ = \ \mathbb { T } ( \Delta D ) \cup \mathbb { T } ( \ D )$ , since $\Delta \mathcal { D } = \Delta \mathcal { D } \cup \Delta \mathcal { D }$
LEMMA 6.2. Let $\mathcal { D } _ { 1 }$ and $\mathcal { D } _ { 2 }$ be two annotated databases over the same schema. We have:
$$
\mathbb { T } ( \mathcal { D } _ { 1 } - \mathcal { D } _ { 2 } ) = \mathbb { T } ( \mathcal { D } _ { 1 } ) - \mathbb { T } ( \mathcal { D } _ { 2 } )
$$
PROOF. The proof is analog to the proof for Lemma 6.1.
LEMMA 6.3. Let $\mathbb { T } ( \cdot )$ be the extract tuples function, $\mathcal { D }$ and $\Delta \mathcal { D }$ be an annotated database and an annotated database delta. The following property holds:
$$
\mathbb { T } ( \mathcal { D } \cup \mathcal { \Delta D } ) = \mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } )
$$
PROOF. Suppose $\mathcal { D }$ and $\Delta \mathcal { D }$ are:
$$
\begin{array} { r l } & { \mathcal { D } = \{ \langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle , \ldots , \langle t _ { m } , \mathcal { P } _ { t _ { m } } \rangle \} } \\ & { \Delta \mathcal { D } = \Delta \mathcal { D } \cup \Delta \mathcal { D } = \{ \Delta \langle t _ { d _ { 1 } } , \mathcal { P } _ { d _ { 1 } } \rangle , \ldots , \Delta \langle t _ { d _ { j } } , \mathcal { P } _ { d _ { j } } \rangle \} } \\ & { \quad \cup \{ \| \Delta \langle t _ { i _ { 1 } } , \mathcal { P } _ { i _ { 1 } } \rangle , \ldots , \Delta \langle t _ { i _ { i } } , \mathcal { P } _ { i _ { i } } \rangle \| } \end{array}
$$
Then $\mathcal { D } \left. \bullet \right. \Delta \mathcal { D }$ is:
$$
\begin{array} { r l } & { \quad \mathcal { D } \cup \Delta \mathcal { D } } \\ & { = \mathcal { D } - \quad \mathcal { D } \cup \Delta \mathcal { D } } \\ & { = \ P ( t , \mathcal { P } ) \mid \langle t , \mathcal { P } \rangle \in \mathcal { D } \| - \ P \langle t , \mathcal { P } \rangle \mid \langle t , \mathcal { P } \rangle \in \quad \mathcal { D } \| } \\ & { \qquad \cup \{ \langle t , \mathcal { P } \rangle \mid \langle t , \mathcal { P } \rangle \in \Delta \mathcal { D } \| } \\ & { = \ P ( t _ { 1 } , \mathcal { P } _ { t _ { 1 } } ) , \ldots , \langle t _ { m } , \mathcal { P } _ { t _ { m } } \rangle \| } \\ & { \qquad - \Downarrow \langle t _ { d _ { 1 } } , \mathcal { P } _ { d _ { 1 } } \rangle , \ldots , \ \langle t _ { d _ { j } } , \mathcal { P } _ { d _ { j } } \rangle \| } \\ & { \qquad \cup \Downarrow \langle t _ { 1 } , \mathcal { P } _ { i _ { 1 } } \rangle , \ldots , \ \Delta \langle t _ { i _ { i } } , \mathcal { P } _ { i _ { i } } \rangle \| } \end{array}
$$
Then $\mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } )$ :
$$
\begin{array} { r l } & { \quad \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) } \\ & { = \{ t _ { 1 } , . . . , t _ { m } \} } \\ & { \qquad - \left\{ \Delta t _ { d _ { 1 } } , . . . , \Delta t _ { d _ { j } } \right\} } \\ & { \qquad \cup \left\{ \Delta t _ { i _ { 1 } } , . . . , \Delta t _ { i _ { i } } \right\} } \end{array}
$$
$\mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } )$ are:
$$
\begin{array} { r l } & { \quad \mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } ) } \\ & { = \mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } \cup \Delta \mathcal { D } ) } \\ & { = \mathbb { T } ( \mathcal { D } ) - \mathbb { T } ( \Delta \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } ) } \\ & { = \{ \boldsymbol { t _ { 1 } } , . . . , \boldsymbol { t _ { m } } \} } \\ & { \qquad - \left\{ \Delta \boldsymbol { t _ { d _ { 1 } } } , . . . , \Delta \boldsymbol { t _ { d _ { j } } } \right\} } \\ & { \qquad \cup \left\{ \Delta \boldsymbol { t _ { i _ { 1 } } } , . . . , \Delta \boldsymbol { t _ { i _ { i } } } \right\} } \end{array}
$$
Therefore, $\mathbb { T } ( \mathcal { D } \mid \mathfrak { d } \mathcal { D } ) = \mathbb { T } ( \mathcal { D } ) \ \mathfrak { o } \ \mathbb { T } ( \Delta \mathcal { D } )$
DEFINITION 6.1 (TUPLE CORRECTNESS). Consider a query 𝑄, a database $D$ and a delta database $\Delta D$ , Let $D ^ { \prime }$ be the updated database such that $D ^ { \prime } = D \ \Theta \ \Delta D$ . Let $\boldsymbol { \tau }$ be an incremental maintenance procedure takes as input a state $s$ , the query $\boldsymbol { Q }$ , the annotated delta $\Delta \mathcal { D }$ and the range partition $\Phi$ . The tuples in the result of the query running over the updated database are equivalent to tuples in the result of applying query running over the database to incremental maintenance procedure such that:
$$
\mathbb { T } ( Q ( \mathcal { D } ) \cup \mathcal { I } ( Q , \Phi , \Delta \mathcal { D } , S ) ) = Q ( D ^ { \prime } )
$$
Recall ${ \mathcal { P } } [ Q , \Phi , D ]$ defines an accurate provenance sketch $\mathcal { P }$ for $\boldsymbol { Q }$ wrt. to $D$ , and ranges $\Phi$ , and $D _ { \mathcal { P } }$ is an instance of $\mathcal { P }$ which is the data covered by the sketch.
Now we define a function $\mathbb { F } ( \cdot )$ (fragments extracting) that takes as input a bag of annotated tuples and return all the sketches from this bag such that:
$$
\mathbb { F } ( \{ \langle t _ { 1 } , \mathcal { P } _ { 1 } \rangle , \dots , \langle t _ { y } , \mathcal { P } _ { y } \rangle \} ) = \{ \mathcal { P } _ { 1 , \dots , \mathcal { P } _ { y } \mathbb { J } }
$$
And the $\mathbb { F } ( \cdot )$ function has the following property:
LEMMA 6.4. $\mathbb { F } ( \mathcal { D } _ { 1 } \cup \mathcal { D } _ { 2 } ) = \mathbb { F } ( \mathcal { D } _ { 1 } ) \cup \mathbb { F } ( \mathcal { D } _ { 2 } )$
The proof of this property is similar to lemma 6.1 where for $\mathbb { F } ( \cdot )$ , we focus on fragments instead of tuples. Therefore, $\mathbb { F } ( \Delta \mathcal { D } ) =$ $\mathbb { F } ( \Delta \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } )$ , since $\Delta D = \Delta D \cup \Delta D$ .
Like Lemma 6.3, the extract fragments function has the following properties as well.
LEMMA 6.5. $\mathbb { F } ( \mathcal { D } \mid \mathfrak { o } \mid \Delta \mathcal { D } ) = \mathbb { F } ( \mathcal { D } ) \mid \mathfrak { o } \mid \mathbb { F } ( \Delta \mathcal { D } )$
PROOF. Suppose $\mathcal { D }$ and $\Delta \mathcal { D }$ are:
$$
\begin{array} { r l } & { \mathcal { D } = \{ \langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle , \ldots , \langle t _ { m } , \mathcal { P } _ { t _ { m } } \rangle \} } \\ & { \Delta \mathcal { D } = \Delta \mathcal { D } \cup \Delta \mathcal { D } = \{ \Delta \langle t _ { d _ { 1 } } , \mathcal { P } _ { d _ { 1 } } \rangle , \ldots , \Delta \langle t _ { d _ { j } } , \mathcal { P } _ { d _ { j } } \rangle \} } \\ & { \quad \cup \{ \| \Delta \langle t _ { i _ { 1 } } , \mathcal { P } _ { i _ { 1 } } \rangle , \ldots , \Delta \langle t _ { i _ { i } } , \mathcal { P } _ { i _ { i } } \rangle \| } \end{array}
$$
Then $\mathcal { D } \Delta \mathcal { D }$ is:
$$
\begin{array} { r l } & { \quad \mathcal { D } \stackrel { \uplus } { \emptyset } \Delta \mathcal { D } } \\ & { = \mathcal { D } - \quad \mathcal { D } \cup \Delta \mathcal { D } } \\ & { = \ P ( t , \mathcal { P } ) \mid \langle t , \mathcal { P } \rangle \in \mathcal { D } \| - \ P \langle t , \mathcal { P } \rangle \mid \langle t , \mathcal { P } \rangle \in \quad \mathcal { D } \mathbb { I } } \\ & { \qquad \cup \{ \langle t , \mathcal { P } \rangle \mid \langle t , \mathcal { P } \rangle \in \Delta \mathcal { D } \} } \\ & { = \emptyset \langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle , \ldots , \langle t _ { m } , \mathcal { P } _ { t _ { m } } \rangle \| } \\ & { \qquad - \emptyset \quad \langle t _ { d _ { 1 } } , \mathcal { P } _ { d _ { 1 } } \rangle , \ldots , \ \langle t _ { d _ { j } } , \mathcal { P } _ { d _ { j } } \rangle \| } \\ & { \qquad \cup \{ \Delta \langle t _ { 1 } , \mathcal { P } _ { i _ { 1 } } \rangle , \ldots , \Delta \langle t _ { i _ { i } } , \mathcal { P } _ { i _ { i } } \rangle \} } \end{array}
$$
Then $\mathbb { F } ( \mathcal { D } \cup \Delta \mathcal { D } )$ :
$$
\begin{array} { r l } & { \mathbb { F } ( \mathcal { D } \cup \Delta \mathcal { D } ) } \\ & { = \{ \mathcal { P } _ { 1 } , . . . , \mathcal { P } _ { m } \} } \\ & { \qquad - \left\{ \begin{array} { r l } { \mathcal { P } _ { d _ { 1 } } , . . . , \mathcal { P } _ { d _ { j } } \} } \\ { \cup \{ \Delta \mathcal { P } _ { i _ { 1 } } , . . . , \Delta \mathcal { P } _ { i _ { i } } \} } \end{array} \right. } \end{array}
$$
$\mathbb { F } ( \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } )$ are:
$$
\begin{array} { r l } & { \qquad \mathbb { F } ( \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } ) } \\ & { = \mathbb { F } ( \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } \cup \Delta \mathcal { D } ) } \\ & { = \mathbb { F } ( \mathcal { D } ) - \mathbb { F } ( \Delta \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } ) } \\ & { = \{ \mathcal { P } _ { 1 } , . . . , \mathcal { P } _ { m } \} } \\ & { \qquad - \left\{ \Delta \mathcal { P } _ { d _ { 1 } } , . . . , \Delta \mathcal { P } _ { d _ { j } } \right\} } \\ & { \qquad \cup \left\{ \Delta \mathcal { P } _ { i _ { 1 } } , . . . , \Delta \mathcal { P } _ { i _ { i } } \right\} } \end{array}
$$
Therefore, $\mathbb { F } ( \mathcal { D } \mid \ J \cup \mathcal { D } ) = \mathbb { F } ( \mathcal { D } ) \cup \mathbb { F } ( \Delta \mathcal { D } )$
We now define $\mathbb { S } ( \cdot )$ as a function that takes as input a bag of fragments where each fragment has a multiplicity at least 1, and returns a set of fragments such that the multiplicity of each fragment in input bag is 1 in the output set. For a query $\boldsymbol { Q }$ running over a annotated database $\mathcal { D }$ , the annotated result is $Q ( { \mathcal { D } } )$ . The fragments in the annotated result are $\mathbb { F } ( Q ( \mathcal { D } ) )$ . Suppose the provenance sketch captured for this query given the ranges $\Phi$ is ${ \mathcal { P } } [ Q , \Phi , D ]$ , then we can get that:
$$
\mathcal { P } [ Q , \Phi , D ] = \mathbb { S } ( \mathbb { F } ( Q ( \mathcal { D } ) ) )
$$
A provenance sketch covers relevant data of the database to answer a query such that:
$$
Q ( D ) = Q ( D _ { \mathcal { P } [ Q , \Phi , D ] } ) = Q ( D _ { \mathbb { S } ( \mathbb { F } ( Q ( \mathcal { D } ) ) ) } )
$$
For a query running over a set of fragments, it is the same as running a bag of fragments where the ranges are exactly the same as in the set but each one having different multiplicity. The reason is for a fragment appearing multiple times in the bag, when using this fragment to answer, they will be translate into the same expression in the WHERE clause multiple times concatenating with OR. And this expression appearing multiple times will be treated as a single one when the database engine evaluates the WHERE. For example, WHERE (a BETWEEN 10 AND 20) OR (a BETWEEN 10 AND 20) has the same effect as WHERE (a BETWEEN 10 AND 20) for a query. Thus, the following holds:
$$
Q ( D ) = Q ( D _ { \mathbb { S } ( \mathbb { F } ( Q ( \mathcal { D } ) ) ) } = Q ( D _ { \mathbb { F } ( Q ( \mathcal { D } ) ) } )
$$
DEFINITION 6.2 (FRAGMENT CORRECTNESS). Consider a query $Q$ , a database $D$ and a delta database $\Delta D$ , Let $D ^ { \prime }$ be the updated database such that $D ^ { \prime } = D \mid \mid D \mid \Delta D$ . Let $\boldsymbol { \tau }$ be an incremental maintenance procedure takes as input a state $s$ , the query $\boldsymbol { Q }$ , the annotated delta $\Delta \mathcal { D }$ and the range partition $\Phi$ . The result of the running query over the updated database is equivalent to the result of running query over the data of updated database covered by applying the fragments in current provenance sketch to fragments generated from the incremental maintenance procedure such that:
$$
Q ( D ^ { \prime } ) = Q ( D _ { \mathbb { S } ( \mathbb { R } ( Q ( \mathcal { D } ) ) \cup \mathbb { F } ( \mathcal { I } ( Q , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } )
$$
LEMMA 6.6. For an operator that the sketch is maintained by the incremental procedure, if both tuple correctness and fragment correctness hold, then the Theorem 6.1 holds for this operator.
In the following, we will prove Theorem 6.1 by induction over the structure of queries starting with the base case which is the correctness of query consisting of a single table access operator followed by inductive steps for other operators distinguishing the cases of tuple and fragment correctness to show the Lemma 6.6.
Figure 6: Annotated tuples’ relation between $Q ^ { i }$ and $Q ^ { i + 1 }$
# 6.2 Proof of Theorem 6.1
Having the properties tuple correctness and fragment correctness defined, we are ready to prove Theorem 6.1 by induction over the structure of a query showing that the tuple correctness and fragment correctness properties hold for supported query which implies the theorem.
6.2.1 Base Case. We start with $Q ^ { 1 }$ , which is a single table 𝑅. We will show that for any $Q ^ { 1 } \in \mathbb { Q } ^ { 1 }$ , the tuple correctness and fragment correctness properties hold for table access operator.
Suppose the relation and its annotated relation are:
$$
R = \left\{ t _ { 1 } , . . . , t _ { n } \right\} \quad { \mathcal { R } } = \left\{ \langle t _ { 1 } , { \mathcal { P } } _ { 1 } \rangle , . . . , \langle t _ { n } , { \mathcal { P } } _ { n } \rangle \right\}
$$
and delta relation and annotated delta relation are:
$$
\begin{array} { r } { \mathbb { A } \ R = \ Y t _ { i _ { 1 } } , \dots , t _ { i _ { i } } \ Y \qquad R = \ Y t _ { d _ { 1 } } , \dots , t _ { d _ { j } } \ Y } \end{array}
$$
$$
\begin{array} { r } { \lambda \mathcal { R } = \{ \langle t _ { i _ { 1 } } , \mathcal { P } _ { i _ { 1 } } \rangle , . . . , \langle t _ { i _ { i } } , \mathcal { P } _ { i _ { i } } \rangle \} \quad \Delta \mathcal { R } = \{ \langle t _ { d _ { 1 } } , \mathcal { P } _ { d _ { 1 } } \rangle , . . . , \langle t _ { d _ { j } } , \mathcal { P } _ { d _ { j } } \rangle \} } \end{array}
$$
Tuple Correctness.
$$
\begin{array} { r l } & { \mathbb { T } ( Q ^ { 1 } ( \mathcal { D } ) \cup T ( Q ^ { 1 } , \Phi , \Delta \mathcal { D } ) ) } \\ & { = \mathbb { T } ( Q ^ { 1 } ( \mathcal { R } ) \cup T ( Q ^ { 1 } , \Phi , \Delta \mathcal { R } ) ) } \\ & { = \mathbb { T } ( \mathcal { R } \cup \Delta \mathcal { R } ) } \\ & { = \mathbb { T } ( \mathcal { R } - \mathcal { R } \cup \Delta \mathcal { R } ) } \\ & { = \mathbb { T } ( \mathcal { R } - \mathcal { R } \cup \Delta \mathcal { R } ) } \\ & { = \mathbb { T } ( \mathcal { R } ) - \mathbb { T } ( \mathcal { R } \cup \mathcal { R } \cup \mathcal { R } ) } \\ & { = \mathcal { R } - \mathcal { R } \cup \Delta \mathcal { R } } \\ & { = Q ^ { 1 } ( R - \mathcal { R } \cup \Delta R ) } \\ & { = Q ^ { 1 } ( R \cup \Delta R ) } \\ & { = Q ^ { 1 } ( R ^ { \prime } ) } \end{array}
$$
( Lemma 6.1 and Lemma 6.2)
Fragment Correctness. First, determine the fragments after the incremental maintenance:
$$
\begin{array} { r l } & { \mathbb { P } ( \boldsymbol { Q } ^ { 1 } ( \mathcal { D } ) ) \hookrightarrow \mathbb { P } ( \boldsymbol { Z } ( \boldsymbol { Q } ^ { 1 } , \Phi , \Delta \mathcal { D } ) ) } \\ & { = \mathbb { P } ( \mathcal { R } ) \hookrightarrow \mathbb { F } ( \Delta \mathcal { R } ) } \\ & { = \mathbb { F } ( \mathcal { R } ) \hookrightarrow \mathbb { F } ( \Delta \mathcal { R } \cup \mathscr { R } ) } \\ & { = \mathbb { F } ( \mathcal { R } ) - \mathbb { F } ( \mathscr { R } ) \cup \Delta \mathcal { R } } \\ & { = \{ \mathscr { P } _ { 1 } , . . . , \mathscr { P } _ { n } \} - \quad \{ \mathscr { P } _ { d _ { 1 } } , . . . , \mathscr { P } _ { d _ { j } } \} \cup \Delta \mathrm { ~ } \{ \mathscr { P } _ { i _ { 1 } } , . . . , \mathscr { P } _ { i _ { i } } \} } \end{array}
$$
Since for every sketch in $\{ \mathcal { P } _ { 1 } , . . . , \mathcal { P } _ { n } \Rsh \ \mathfrak {hookrightarrow } \quad \{ \mathcal { P } _ { d _ { 1 } } , . . . , \mathcal { P } _ { d _ { j } } \}$ ∪• +Δ $\{ \mathcal { P } _ { i _ { 1 } } , . . . , \mathcal { P } _ { i _ { i } } \}$ , it associates a tuple, and the associated tuple will be inserted or deleted as the sketch does. Thus:
$$
\begin{array} { r l } { D _ { \mathbb { F } ( Q ^ { 1 } ( \mathcal { D } ) ) \cup \mathbb { F } ( \boldsymbol { Z } ( Q ^ { 1 } , \Phi , \Delta \mathcal { D } ) ) } ^ { \prime } = \{ t _ { 1 } , . . . , t _ { n } \} \stackrel { \ L \ L \circ \cup \Delta } { \cup } \{ t _ { i _ { 1 } } , . . . , t _ { i _ { i } } \} } & { } \\ { \stackrel { \ L \cup } { \cup } } & { \{ t _ { d _ { 1 } } , . . . , t _ { d _ { j } } \} } \end{array}
$$
Thus,
$$
\begin{array} { r l } & { \quad Q ^ { 1 } ( D _ { \mathbb { F } ( Q ^ { 1 } ( \mathcal { D } ) ) \backslash \bullet \mathbb { F } ( \mathcal { I } ( Q , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { = \ S \{ t _ { 1 } , . . . , t _ { n } \} \ \bowtie \ \ S \mathbb { I } t _ { i _ { 1 } } , . . . , t _ { i _ { i } } \mathbb { f } \ \bowtie \ \ S \ \{ t _ { d _ { 1 } } , . . . , t _ { d _ { j } } \} } \\ & { = R \cup \ \Delta \ R - \Delta \ R } \end{array}
$$
Since $Q ^ { 1 } ( R ^ { \prime } ) = Q ^ { 1 } ( R \backprime \Delta R ) = R -$ - 𝑅 + $R$ , then we get $\begin{array} { r l } & { \mathrm { f o r ~ } Q ^ { 1 } \in \overset \sim \ Q ^ { 1 } \mathrm { t h a t : } \hat { Q ^ { 1 } } ( D ^ { \prime } ) = Q ^ { 1 } ( D _ { \mathbb { F } ( Q ^ { 1 } ( \mathcal D ) ) \backslash \mathbb { F } ( \mathit { I } ( Q ^ { 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) = } \\ & { Q ^ { 1 } ( D _ { \mathbb { S } ( \mathbb { F } ( Q ^ { 1 } ( \mathcal D ) ) \backslash \mathbb { S } | \mathbb { F } ( \mathit { I } ( Q ^ { 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } ) . } \end{array}$
6.2.2 Inductive Step. Assume for Theorem 6.1, the two correctness properties hold for $Q ^ { i } \in \mathbb { Q } ^ { i }$ such that the incremental maintenance procedure can correctly produce the tuples and provenance sketches:
$$
\begin{array} { r l } & { \mathcal { Q } ^ { i } ( D ^ { \prime } ) = \mathbb { T } ( \boldsymbol { Q } ^ { i } ( \mathcal { D } ) \cup \boldsymbol { \mathcal { I } } ( \boldsymbol { Q } ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) \quad \mathrm { ( t u p l e ~ c o r r e c t n e s s ) } } \\ & { \mathcal { Q } ^ { i } ( D ^ { \prime } ) = \boldsymbol { Q } ^ { i } ( D _ { \mathbb { S } ( \mathbb { R } ( \boldsymbol { Q } ^ { 1 } ( \mathcal { D } ) ) \uplus \mathbb { R } ( \boldsymbol { T } ( \boldsymbol { Q } ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \boldsymbol { \prime } } ) } \end{array}
$$
Next we will show that for $Q ^ { i + 1 } \in \mathbb { Q } ^ { i + 1 }$ , where the $( i + 1 )$ ’s operator is on top of $Q ^ { i }$ , both properties hold such that:
$$
Q ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( Q ^ { i + 1 } ( { \mathcal { D } } ) \cup \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta { \mathcal { D } } , S ) )
$$
(tuple correctness)
$$
\mathscr { Q } ^ { i + 1 } ( D ^ { \prime } ) = \mathscr { Q } ^ { i + 1 } ( D _ { \mathfrak { S } ( \mathbb { F } ( \mathscr { Q } ^ { i + 1 } ( \mathscr { D } ) ) ^ { \backslash } \mathfrak { S } | \mathbb { F } ( \mathscr { I } ( \mathscr { Q } ^ { i + 1 } , \Phi , \Delta \mathscr { D } , S ) ) ) } ^ { \prime } )
$$
In the following parts, we will demonstrate the proof for operators distinguishing cases for correctness of tuples and fragments.
6.2.3 Selection. Suppose the operator at level $i + 1$ is an selection operator. Then we know that
$$
Q ^ { i + 1 } ( D ) = \sigma _ { \theta } ( Q ^ { i } ( D ) )
$$
For a selection operator, the annotated tuples’ relation between $Q ^ { i + 1 }$ and $Q ^ { i }$ is like $\langle t _ { 1 } , \mathcal { P } _ { 1 } \rangle$ and $\langle s _ { 1 } , \mathcal { P } _ { s _ { 1 } } \rangle$ in Fig. 6 if the annotated tuple $\langle s _ { 1 } , \mathcal { P } _ { s _ { 1 } } \rangle$ can satisfy the selection condition of $Q ^ { i + 1 }$ . Otherwise, there is no output for an input annotated tuple.
Tuple Correctness. Before we demonstrate the tuple correctness, we first show two properties that will be used for selection operator:
LEMMA 6.7. The following properties hold:
$$
\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) ) = \mathbb { T } ( \sigma _ { \theta } ( \mathcal { D } ) )
$$
PROOF. For $\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) )$ :
$$
\begin{array} { r l } & { \quad \sigma _ { \boldsymbol { \theta } } ( \mathbb { T } ( \mathcal { D } ) ) } \\ & { = \sigma _ { \boldsymbol { \theta } } ( D ) } \\ & = \Downarrow \mid t \in D \wedge t \mid = \theta \ B { \} } \end{array}
$$
$$
( \mathbb { T } ( \mathcal { D } ) = D )
$$
For $\mathbb { T } ( \sigma _ { \theta } ( \mathcal { D } ) )$
$$
\begin{array} { r l r } { \mathbb { T } ( \sigma _ { \theta } ( \mathcal { D } ) ) } \\ { = \mathbb { T } ( \left\{ \langle t , \mathcal { P } \rangle \ | \ \langle t , \mathcal { P } \rangle \in \mathcal { D } \wedge t \ \middle [ \mathbf { \delta } \theta \mathbf { \mathbb { k } } \right) \quad } & { ( \sigma _ { \theta } ) } \\ { = \Updownarrow t \ | \ t \in D \wedge t \ | = \theta \beta \quad } & { ( \mathbb { T } ( \mathcal { D } ) = D ) } \end{array}
$$
Therefore, $\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) ) = \mathbb { T } ( \sigma _ { \theta } ( \mathcal { D } ) )$ :
LEMMA 6.8. The following properties hold:
$$
\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } \left\{ \mathfrak { d } \right\} \Delta \mathcal { D } ) ) = \sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) ) \hookrightarrow \sigma _ { \theta } ( \mathbb { T } ( \Delta \mathcal { D } ) )
$$
PROOF. For $\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) )$ :
$$
\begin{array} { r l r } & { \sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) ) } & \\ & { = \sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } ) ) } & { ( \mathrm { l e m m a } 6 . 3 ) } \\ & { = \sigma _ { \theta } ( D \cup \Delta D ) } & { ( \mathbb { T } ( \mathcal { D } ) = D ) } \\ & { = \sigma _ { \theta } ( D - \Delta D \cup \Delta D ) } & { ( D \cup \Delta D = D - \Delta D \cup \Delta D ) } \\ & { = \sigma _ { \theta } ( D ) - \sigma _ { \theta } ( \Delta D ) \cup \sigma _ { \theta } ( \Delta D ) } & { ( \sigma _ { \theta } \mathrm { a n d } \cup , - ) } \end{array}
$$
For $\sigma _ { \theta } ( \mathbb { T } ( \mathcal { D } ) ) \ \mathfrak { o } \ \sigma _ { \theta } ( \mathbb { T } ( \Delta \mathcal { D } ) )$
$$
\begin{array} { r l } & { \quad \sigma _ { \theta } ( \mathbb { T } ( \mathscr { D } ) ) \mathrel { \mathop : } \sigma _ { \theta } ( \mathbb { T } ( \Delta \mathscr { D } ) ) } \\ & { = \mathbb { T } ( \sigma _ { \theta } ( \mathscr { D } ) ) \mathrel { \mathop : } \sigma \mathbb { T } ( \sigma _ { \theta } ( \Delta \mathscr { D } ) ) } \\ & { = \sigma _ { \theta } ( \mathscr { D } ) \mathbin { \mathop : } \sigma _ { \theta } ( \Delta D ) } \\ & { = \sigma _ { \theta } ( \mathscr { D } ) \mathbin { \mathop : } \sigma _ { \theta } ( \mathrm { \quad } D \cup \Delta D ) } \\ & { = \sigma _ { \theta } ( \mathscr { D } ) \mathbin { \mathop : } \sigma _ { \theta } ( \mathrm { \quad } D ) \cup \sigma _ { \theta } ( \Delta D ) ) } \\ & { = \sigma _ { \theta } ( \mathscr { D } ) - \sigma _ { \theta } ( \mathrm { \quad } D ) \cup \sigma _ { \theta } ( \Delta D ) } \\ & { = \sigma _ { \theta } ( \mathscr { D } ) - \sigma _ { \theta } ( \mathrm { \quad } D ) \cup \sigma _ { \theta } ( \Delta D ) } \end{array}
$$
Therefore, the tuple correctness is as following:
PROOF.
$$
\begin{array} { r l r } & { Q ^ { ( k ) } ( x , y ) } & { \qquad } \\ & { = \sigma _ { \mathcal { O } } ( Q ^ { ( k ) } ( y ) ) } \\ & { = \sigma _ { \mathcal { O } } \Big ( \mathbb { T } ( Q ^ { ( k ) } ( \mathbf { S } , x , \Delta \Omega , S ) ) \Big ) } & { \quad \mathrm { ( | \vec { C } ^ { [ i , l ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ k ] } } } \\ & { = \sigma _ { \mathcal { O } } \Big ( \mathbb { T } ( Q ^ { ( k ) } ( \mathbf { S } ) \mathbf { S } , x , \Delta \Omega , S ) ) \Big ) } & { \quad \mathrm { ( | \vec { C } ^ { [ j , l ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ k ] } \vec { \sigma } ^ { [ k ] } } } \\ & { = \sigma _ { \mathcal { O } } \Big ( \mathbb { T } ( Q ^ { ( k ) } ( \mathbf { S } ) ) \Big ) \omega \mathbb { T } ( T ( Q ^ { ( k ) } , \mathbf { S } , x , \Delta \Omega , S ) ) \Big ) } & { \quad \mathrm { ( | \vec { C } ^ { [ j , l ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ k ] } } } \\ & { = \sigma _ { \mathcal { O } } \Big ( \mathbb { T } ( \Phi ^ { ( j ) } ( \mathbf { S } ) ) \Big ) \omega \mathbb { T } _ { \{ X ^ { [ j ] } \} } \Big ( \mathbb { T } ( Q ^ { ( j ) } , \mathbf { A } _ { X ^ { [ j ] } , \Delta \Omega , S } ) ) \Big ) } & { \quad \mathrm { ( | \vec { C } ^ { [ j , l ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ j ] } } } \\ & { = \mathbb { T } ( \Phi ( Q ^ { ( k ) } ( \mathbf { S } ) ) ) \Big ) \omega \mathbb { T } ( \Pi ( Q ^ { ( j ) } , \mathbf { A } _ { X ^ { [ j ] } , \Delta \Omega , S } ) ) } & { \quad \mathrm { ( | \vec { C } ^ { [ j , l ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ j ] } } } \\ & { = \mathbb { T } ( \Phi ^ { ( k ) } ( \mathbf { S } ) ) \Big ) \omega \mathbb { T } ( \Pi ( Q ^ { ( k ) } ( \mathbf { S } , \mathbf { A } _ { X ^ { [ j ] } , \Delta \Omega , S } ) ) , } & { \quad \mathrm { ( | \vec { C } ^ { [ j ] } - \vec { \sigma } ^ { [ j ] } \vec { \sigma } ^ { [ j ] } } } \\ & = \mathbb { T } ( Q ^ { ( k ) } ( \mathbf { S } ) ) \omega \mathbb { T } \left( \Pi ( Q ^ { ( j ) } ( \mathbf { S } , \mathbf { A } _ { X ^ { [ j ] } , \Delta \Omega , S } ) \right) , \end{array}
$$
# Fragment Correctness. We have the assumption that:
$$
Q ^ { i } ( D ^ { \prime } ) = Q ^ { i } ( D _ { \mathfrak { F } ( \mathbb { F } ( Q ^ { 1 } ( \mathcal { D } ) ) \backslash \mathfrak { s } ) \mathbb { F } ( \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } )
$$
Then:
$$
( Q ^ { i + 1 } = \sigma _ { \theta } ( Q ^ { i } ) )
$$
+ $Q ^ { i }$ holds fragments correctness)
$$
\begin{array} { r l r } & { = \sigma _ { \theta } ( Q ^ { i } ( D _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) ) } ^ { \prime } | \mathfrak { s } | \mathfrak { s } ( I ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ) } & { \quad ( D _ { \mathbb { S } ( \mathbb { F } ( \cdot ) ) } = D _ { \mathbb { F } ( \cdot ) } ) } \\ & { = \sigma _ { \theta } ( Q ^ { i } ( D _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } & { \quad ( \mathrm { L e m m a ~ 6 . 5 } ) } \\ & { = \sigma _ { \theta } ( \mathbb { T } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { ) } ) ) } & { \quad ( \mathbb { T } ( \mathcal { D } = D ) } \\ & { = \mathbb { T } ( \sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { ) } ) ) ) } & { \quad ( \mathrm { L e m m a ~ 6 . 7 } ) } \end{array}
$$
From above, we can get that
$$
Q ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( \sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash \ u \cup J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) )
$$
We now focus on an annotated tuples. For an annotated tuple $\langle t , \mathcal { P } \rangle \in Q ^ { i } ( \mathcal { D ^ { \prime } } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \mid \bullet ) \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } )$ , we can get that: $1 . \ t \ \in$ $Q ^ { i } ( D _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash \bullet ] ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) , 2 . \mathcal { P } \in \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \cup \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) )$ and 3. tuple $t$ can be obtain by $Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } )$ . For this annotated tuple, if tuple $t$ satisfies the selection condition, $t \ \models \theta$ , of an selection operator on top of 𝑄𝑖 , then 𝑡 ∈ 𝜎𝜃 (𝑄𝑖 (𝐷′F(𝑄𝑖 (𝒟)∪• I (𝑄𝑖,Φ,Δ𝒟,S) ) )) , $\mathcal { P } \in \mathbb { F } ( \mathcal { Q } ^ { i } ( \mathcal { D } ) \cup \mathcal { I } ( \mathcal { Q } ^ { i } , \Phi , \Delta \mathcal { D } , S ) )$ , and tuple $t$ can be obtain by $\sigma _ { \theta } ( Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } ) )$ . Now the tuple $t$ is in the result of selection operator, and it can be obtained by $\sigma _ { \theta } ( Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } ) )$ . Every tuple $t$ associates with its sketch $\mathcal { P }$ , and according to the selection semantics rule, $\mathcal { P }$ is in $\mathbb { F } ( \boldsymbol { Q } ^ { i + 1 } ( \mathcal { D } ) \odot \mathcal { T } ( \boldsymbol { Q } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$ , which is $\langle t , { \mathcal { P } } \rangle \in$
$$
\begin{array} { r l r } & { \sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } ^ { \prime } _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ^ { \sharp } } \mathsf { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) ) . \mathrm { T h e n } ; } & \\ & { \quad \langle t , \mathcal { P } \rangle \in \sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) ^ { \sharp } } ^ { \prime } \mathsf { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ^ { ) } ) } & \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) ^ { \sharp } I ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) \wedge t \in \theta \mathrm { ~ ( \sigma _ { \theta } ~ d e f i n i t i o n ) ~ } } & \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in \sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ^ { \sharp } I ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } & { \mathrm { ~ ( \mathcal { I } ( \sigma _ { \theta } ) ) ~ } } \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in Q ^ { i + 1 } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ^ { \sharp } I ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } & { \mathrm { ~ ( \mathcal { Q } ^ { i + 1 } = \sigma _ { \theta } ) ~ } } \end{array}
$$
Thus, for all annotated tuples in $\sigma _ { \theta } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \mid \bullet ) \mathcal { T } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) )$ they are in $Q ^ { i + 1 } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ) \backslash \bullet ) \mathcal { T } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } )$ . Therefore,
$$
\begin{array} { r l } & { \quad Q ^ { i + 1 } ( \mathcal D _ { \mathbb { F } ( Q ^ { i + 1 } ( \mathcal D ) \backslash \bullet I ( Q ^ { i + 1 } , \Phi , \Delta \mathcal D , S ) ) } ^ { \prime } ) } \\ & { = \sigma _ { \theta } ( Q ^ { i } ( \mathcal D _ { \mathbb { F } ( Q ^ { i } ( \mathcal D ) \backslash \bullet I ( Q ^ { i } , \Phi , \Delta \mathcal D , S ) ) } ^ { \prime } ) ) } \\ & { = \mathbb { T } ( \sigma _ { \theta } ( Q ^ { i } ( \mathcal D _ { \mathbb { F } ( Q ^ { i } ( \mathcal D ) \backslash \bullet I ( Q ^ { i } , \Phi , \Delta \mathcal D , S ) ) } ^ { \prime } ) ) ) } \\ & { = Q ^ { i + 1 } ( D ^ { \prime } ) } \end{array}
$$
6.2.4 Projection. Suppose the operator at level $i + 1$ is an projection operator. Then we have the following:
$$
Q ^ { i + 1 } ( D ) = \Pi _ { A } ( Q ^ { i } ( D ) )
$$
For projection operator, the annotated tuples’ relation between $Q ^ { i + 1 }$ and $Q ^ { i }$ is like $\langle t _ { 1 } , \mathcal { P } _ { 1 } \rangle$ and $\langle s _ { 1 } , \mathcal { P } _ { s _ { 1 } } \rangle$ in Fig. 6 such that:
$$
\langle t _ { 1 } , \mathcal { P } _ { t _ { 1 } } \rangle = \Pi _ { A } ( \langle s _ { 1 } , \mathcal { P } _ { s _ { 1 } } \rangle )
$$
Before we demonstrate the tuple correctness, we first show two properties hold for projection operator
LEMMA 6.9. The following properties hold:
$$
\Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) = \mathbb { T } ( \Pi _ { A } ( \mathcal { D } ) )
$$
PROOF. For $\Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) )$ :
$$
\begin{array} { r l } & { \Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) } \\ & { { \sf = } \Pi _ { A } ( D ) } \\ & { { \sf = } \{ { t } \mid { t } ^ { \prime } \in D \land { t } ^ { \prime } . A = t \} } \end{array}
$$
For $\mathbb { T } ( \Pi _ { A } ( { \mathcal { D } } ) )$ :
$$
\begin{array} { r l r } & { \mathbb { T } ( \Pi _ { A } ( \mathcal { D } ) ) } \\ & { = \mathbb { T } ( \left. \ S ( \left\{ t , \mathcal { P } \right\} \ \right| \left. t ^ { \prime } , \mathcal { P } \right. \in \mathcal { D } \land t ^ { \prime } . A = t \left\} \right) } & { \left( \Pi _ { A } \right) } \\ & { = \ P t \ | \ t ^ { \prime } \in D \land t ^ { \prime } . A = t \} } & { \left( \mathbb { T } ( \mathcal { D } ) = D \right) } \end{array}
$$
Therefore, $\Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) = \mathbb { T } ( \Pi _ { A } ( \mathcal { D } ) )$ :
LEMMA 6.10. The following properties hold:
$$
\Pi _ { A } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) ) = \Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) \cup \Pi _ { A } ( \mathbb { T } ( \Delta \mathcal { D } ) )
$$
PROOF. For $\Pi _ { A } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) )$ :
$$
\begin{array} { r l r } { \ } & { \Pi _ { A } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) ) } \\ & { = \Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) \cup \mathbb { T } ( \Delta \mathcal { D } ) ) } & { ( \mathrm { l e m m a ~ 6 . 3 } ) } \\ & { = \Pi _ { A } ( D \cup \Delta D ) } & { ( \mathbb { T } ( \mathcal { D } ) = D ) } \\ & { = \Pi _ { A } ( D - \Delta D \Delta D ) } & { ( D \uplus \Delta D = D - \Delta D \cup \Delta D ) } \\ & { = \Pi _ { A } ( D ) - \Pi _ { A } ( \Delta D ) \cup \Pi _ { A } ( \Delta D ) } & { ( \Pi _ { A } \mathrm { ~ a n d ~ } \cup , - ) } \end{array}
$$
For $\Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) \mathrel { \cup } \Pi _ { A } ( \mathbb { T } ( \Delta \mathcal { D } ) )$
$$
\begin{array} { r l } & { \Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) \cup \Pi _ { A } ( \mathbb { T } ( \Delta \mathcal { D } ) ) } \\ & { = \mathbb { T } ( \Pi _ { A } ( \mathcal { D } ) ) \cup \mathbb { T } ( \Pi _ { A } ( \Delta \mathcal { D } ) ) } \\ & { = \Pi _ { A } ( \mathcal { D } ) \cup \Pi _ { A } ( \Delta D ) } \\ & { = \Pi _ { A } ( \mathcal { D } ) \cup \Pi _ { A } ( \quad D \cup \Delta D ) } \\ & { = \Pi _ { A } ( \mathcal { D } ) \cup ( \Pi _ { A } ( \quad D ) \cup \Pi _ { A } ( \Delta D ) ) } \\ & { = \Pi _ { A } ( \mathcal { D } ) - \Pi _ { A } ( \quad D ) \cup \Pi _ { A } ( \Delta D ) } \end{array}
$$
Therefore, $\Pi _ { A } ( \mathbb { T } ( \mathcal { D } \cup \Delta \mathcal { D } ) ) = \Pi _ { A } ( \mathbb { T } ( \mathcal { D } ) ) \cup \Pi _ { A } ( \mathbb { T } ( \Delta \mathcal { D } ) )$
# Tuple Correctness.
PROOF.
$$
\begin{array} { r l r } & { \mathrm { Q } ^ { \mathrm { H } , \mathrm { i } } ( x , y ) } & { \langle \mathcal { Q } ^ { \mathrm { H } , \mathrm { i } } ( x , y ) \rangle } \\ & { = \Pi _ { \boldsymbol { \lambda } } ( \mathcal { Q } ^ { \dagger } ( x , y ) ; } \\ & { = \Pi _ { \boldsymbol { \lambda } } \Big ( \Pi _ { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ) ) \boldsymbol { \Theta } \boldsymbol { U } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } \boldsymbol { \Delta } \boldsymbol { \mathcal { Q } } , S ) ; \boldsymbol { \xi } \Big ) \Big ) \quad } & { \langle \mathcal { Q } ^ { \mathrm { H } , \mathrm { i } } ( \mathbf { H } ; \mathbf { u } ; \mathbf { f r } \mathbf { p o p a r i t y } ) } \\ & { = \Pi _ { \boldsymbol { \lambda } } \Big ( \Pi _ { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ) ) \boldsymbol { \Theta } \boldsymbol { U } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } , \boldsymbol { \Delta } \boldsymbol { \mathcal { Q } } , S ) ; \boldsymbol { \xi } \Big ) \Big ) } & { \quad \{ \mathrm { f r e m ~ a l s ~ t e r p o p a r i t y } \} } \\ & { = \Pi _ { \boldsymbol { \lambda } } \Big ( \Pi _ { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ) ) ; \boldsymbol { \Theta } \boldsymbol { U } ( \mathcal { T } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } , \boldsymbol { \Delta } \boldsymbol { \mathcal { Q } } , S ) ; \boldsymbol { \xi } ) \Big ) , } & { \quad \{ \mathrm { f r e m m o ~ 6 . 3 } \} } \\ & { = \Pi _ { \boldsymbol { \lambda } } \Big ( \Pi _ { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \mathcal { Q } } ) ) \Big ) \omega \Pi _ { \mathfrak { a } } \Big ( \pi ^ { \mathcal { Q } } ( \mathcal { T } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } , \boldsymbol { \Delta } \boldsymbol { \mathcal { Q } } , S ) ; \boldsymbol { \xi } ) - \langle \mathbf { u } \mathbf { m o } 6 . 1 0 \rangle } \\ & { \quad - \Pi _ { \mathcal { Q } } ^ { \dagger } ( \boldsymbol { \Omega } ) \Big ) \omega \Pi _ { \mathfrak { a } } ( \mathcal { T } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } , \boldsymbol { \xi } ) ; } & { \quad \{ \mathrm { f r m i n e ~ } \theta \} ) } \\ & = \Pi _ { \mathcal { Q } } ^ { \dagger } ( x ) \omega \mathcal { Q } ( \Pi _ { \mathcal { A } } \boldsymbol { U } ( \mathcal { Q } ^ { \dagger } , \boldsymbol { \Phi } , \end{array}
$$
Fragment Correctness. We have the assumption that:
$$
Q ^ { i } ( D ^ { \prime } ) = Q ^ { i } ( D _ { \mathfrak { F } ( \mathbb { F } ( Q ^ { 1 } ( \mathcal { D } ) ) \backslash \mathfrak { s } ) \mathbb { F } ( \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } )
$$
Then for a projection operator above $Q ^ { i }$ , we have the following proof:
PROOF.
$$
( \boldsymbol { Q } ^ { i + 1 } = \Pi _ { A } ( \boldsymbol { Q } ^ { i } ) )
$$
4 $Q ^ { i }$ holds fragments correctness)
$$
\begin{array} { r l r } & { \begin{array} { l l } { = \Pi _ { A } ( Q ^ { i } ( D _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) ) } ^ { \prime } | \bullet | \mathbb { F } ( \boldsymbol { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) ) } & { \quad ( D _ { \mathbb { S } ( \mathbb { R } ( \cdot ) ) } = D _ { \mathbb { R } ( \cdot ) } ) } \\ & { = \Pi _ { A } ( Q ^ { i } ( D _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } & { \quad \mathrm { ( L e m m a ~ 6 . 5 ) } } \\ & { = \Pi _ { A } ( \mathbb { T } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) ) } & { \quad \mathrm { ( \mathbb { T } ( \mathcal { D } = D ) } } \\ & { = \mathbb { T } ( \Pi _ { A } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) \backslash J ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) ) } & { \quad \mathrm { ( L e m m a ~ 6 . 7 ) } } \end{array} } \end{array}
$$
From above, we can get that
$$
Q ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( \Pi _ { A } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash \mathsf { S } \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) )
$$
For an annotated tuple ⟨𝑡, P⟩ ∈ 𝑄𝑖 (𝒟′F 𝑄𝑖 𝒟 • 𝑄𝑖,Φ,Δ𝒟, ) , the following holds: 1. $t \ \in \ Q ^ { i } ( D _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash \bullet ) \mathcal { T } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } )$ , 2. $\mathcal { P } \in \mathbb { F } ( \boldsymbol { Q } ^ { i } ( \mathcal { D } ) \cup \mathcal { I } ( \boldsymbol { Q } ^ { i } , \Phi , \Delta \mathcal { D } , S ) )$ and 3. tuple $t$ can be obtain by $Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } )$ . For this annotated tuple, if expressions in $A$ are projected from tuple $t$ of an projection operator on top of $Q ^ { i }$ , then
$$
t . A \in \Pi _ { A } ( Q ^ { i } ( D _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \mid \Im , ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) )
$$
, $\mathcal { P } ~ \in ~ \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \cup \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) )$ , and $t . A$ can be obtain by $\Pi _ { A } ( Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } ) )$ . Now the $t . A$ is in the result of projection operator, and it can be obtained by $\Pi _ { A } ( Q ^ { i } ( D _ { \mathcal { P } } ^ { \prime } ) )$ . Every $t . A$ associates with its sketch $\mathcal { P }$ , and according to the projection semantics rule, $\mathcal { P }$ is in $\mathbb { F } ( \boldsymbol { Q } ^ { i + 1 } ( \mathcal { D } ) \cup \mathcal { I } ( \boldsymbol { Q } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$ , which is
$$
\langle t , \mathcal { P } \rangle \in \Pi _ { A } ( Q ^ { i } ( \mathcal { D ^ { \prime } } _ { \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ) \backslash \mathbb { S } ] } { \scriptstyle \{ \mathscr { Q } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ) )
$$
Then:
$$
\begin{array} { r l } & { \quad \langle t , \mathcal { P } \rangle \in \Pi _ { A } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) ^ { \sharp } \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in t ^ { \prime } \in Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i } ( \mathcal { D } ) ^ { \sharp } \mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) \wedge t ^ { \prime } . A = t } \\ & { \qquad ( \Pi _ { A } \operatorname { d e f i n i t i o n } ) } \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in \Pi _ { A } ( Q ^ { i } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ^ { \sharp } \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) \qquad ( \mathcal { I } ( \Pi _ { A } ) ) } \\ & { \Leftrightarrow \langle t , \mathcal { P } \rangle \in Q ^ { i + 1 } ( \mathcal { D } _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ^ { \sharp } \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) \ ( Q ^ { i + 1 } = \Pi _ { A } ( Q ^ { i } ) ) } \end{array}
$$
Thus, for all annotated tuples in $\Pi _ { A } \big ( Q ^ { i } \big ( \mathcal { D } _ { \mathbb { F } ( Q ^ { i } ( \mathcal { D } ) \backslash \bullet ) \mathcal { T } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } \big ) )$ they are in 𝑄𝑖+1 (𝒟′F(𝑄𝑖+1 (𝒟)∪• I (𝑄𝑖+1,Φ,Δ𝒟,S) ) ).
Therefore,
$$
\begin{array} { r l } & { \quad Q ^ { i + 1 } ( \mathcal D _ { \mathbb { R } ( Q ^ { i + 1 } ( \mathcal D ) ^ { \lfloor \bullet \rfloor } } ^ { \prime } ( Q ^ { i + 1 } , \Phi , \Delta { \mathcal D } , S ) ) } \\ & { = \Pi _ { A } ( Q ^ { i } ( \mathcal D _ { \mathbb { R } ( Q ^ { i } ( \mathcal D ) ^ { \lfloor \bullet \rfloor } ( Q ^ { i } , \Phi , \Delta { \mathcal D } , S ) ) } ^ { \prime } ) ) } \\ & { = \mathbb { T } ( \Pi _ { A } ( Q ^ { i } ( \mathcal D _ { \mathbb { R } ( Q ^ { i } ( \mathcal D ) ^ { \lfloor \bullet \rfloor } ( Q ^ { i } , \Phi , \Delta { \mathcal D } , S ) ) } ^ { \prime } ) ) ) } \\ & { = Q ^ { i + 1 } ( D ^ { \prime } ) } \end{array}
$$
Figure 7: Annotated tuples’ relation between $Q ^ { i }$ , $Q _ { 1 } ^ { m }$ and $Q _ { 2 } ^ { i - m }$
6.2.5 Cross Product. Suppose the operator at level $i + 1$ is a cross product (join) operator, then we have the following:
$$
Q ^ { i + 1 } ( D ) = Q _ { 1 } ^ { m } ( D ) \times Q _ { 2 } ^ { i - m } ( D )
$$
For cross product, the annotated tuples’ relation between $Q ^ { i + 1 }$ and $Q _ { 1 } ^ { i }$ and $Q _ { 2 } ^ { i - m }$ is like $\left. t _ { 2 } , \mathcal { P } _ { 2 } \right.$ and $\langle s _ { i } , \mathcal { P } _ { s _ { i } } \rangle$ and $\langle l _ { j } , \mathcal { P } _ { l _ { j } } \rangle$ in Fig. 7, such that:
$$
\langle t _ { 2 } , \mathcal { P } _ { 2 } \rangle = \langle ( s _ { i } \circ l _ { j } ) , \Downarrow \mathcal { P } _ { s _ { i } } , \mathcal { P } _ { l _ { j } } \updownarrow \rangle
$$
Each annotated tuple in the result of $Q ^ { i + 1 }$ is the product of two annotated tuples, each one from one side.
Tuple Correctness. Before we demonstrate the tuple correctness, we first show the property hold for cross product operator
LEMMA 6.11. The following properties hold:
$$
\mathbb { T } ( \mathscr { D } _ { 1 } \times \mathscr { D } _ { 2 } ) = \mathbb { T } ( \mathscr { D } _ { 1 } ) \times \mathbb { T } ( \mathscr { D } _ { 2 } )
$$
PROOF. For $\mathbb { T } ( \mathcal { D } _ { 1 } \times \mathcal { D } _ { 2 } )$ :
$$
\begin{array} { r l } & { \quad \mathbb { T } ( \mathcal { D } _ { 1 } \times \mathcal { D } _ { 2 } ) } \\ & { = \mathbb { T } ( \{ ( \langle t , \mathcal { P } _ { t } \rangle \circ \langle s , \mathcal { P } _ { s } \rangle ) ^ { m * n } \mid \langle t , \mathcal { P } _ { t } \rangle ^ { m } \in \mathcal { D } _ { 1 } \wedge \langle s , \mathcal { P } _ { s } \rangle ^ { n } \in \mathcal { D } _ { 2 } \| \} ) } \\ & { = \ P ( t \circ s ) ^ { m * n } \mid t ^ { m } \in D _ { 1 } \wedge s ^ { n } \in D _ { 2 } \| } \\ & { = D _ { 1 } \times D _ { 2 } } \end{array}
$$
For $\mathbb { T } ( \mathcal { D } _ { 1 } ) \times \mathbb { T } ( \mathcal { D } _ { 2 } )$ :
$$
\begin{array} { c } { \mathbb { T } ( \mathscr { D } _ { 1 } ) \times \mathbb { T } ( \mathscr { D } _ { 2 } ) } \\ { = D _ { 1 } \times D _ { 2 } } \end{array}
$$
Therefore, $\mathbb { T } ( \mathcal { D } _ { 1 } \times \mathcal { D } _ { 2 } ) = \mathbb { T } ( \mathcal { D } _ { 1 } ) \times \mathbb { T } ( \mathcal { D } _ { 2 } )$
Thus the following is the tuple correctness:
PROOF.
$$
\begin{array} { r l } & { \quad Q ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = Q _ { 1 } ^ { m } ( D ^ { \prime } ) \times Q _ { 2 } ^ { i - m } ( D ^ { \prime } ) \qquad ( Q ^ { i + 1 } ( D ) = Q _ { 1 } ^ { m } ( D ) \times Q _ { 2 } ^ { i - m } ( D ) ) } \\ & { = \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal D ) \cup \cal { J } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal D , S ) ) } \\ & { \qquad \times \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal D ) \cup \cal { J } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal D , S ) ) } \end{array}
$$
$$
\begin{array} { r l } & { \textstyle = ( \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } ) ) \mathfrak { u } \mathfrak { T } ( \bar { Z } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) ) } \\ & { \qquad \times ( \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ) ) \mathfrak { u } \mathbb { T } ( \bar { Z } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) ) \qquad \mathrm { ( l e m m a ~ 6 . 3 ) } } \\ & { \textstyle = ( \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } ) ) \times \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ) ) } \\ & { \qquad \mathfrak { u } \left( \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } ) ) \times \mathbb { T } ( \bar { Z } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) \right) } \\ & { \qquad \mathfrak { u } \left( \mathbb { T } ( \bar { T } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) \times \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ) ) ) \right) } \\ & { \qquad \mathfrak { u } \left( \mathbb { T } ( \bar { T } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) \times \mathbb { T } ( \bar { T } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) \right) } \\ & { \qquad \mathfrak { u } \left( \mathbb { T } ( \bar { T } ( \bar { Q } _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) \times \mathbb { T } ( \bar { T } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) \right) } \end{array}
$$
$$
\begin{array} { r l } & { = \mathbb { T } ( \mathscr { Q } _ { 1 } ^ { n } ( \mathfrak { D } ) \times \mathscr { Q } _ { 2 } ^ { \ell - m } ( \mathfrak { D } ) ) } \\ & { \quad = \mathbb { T } ( \mathscr { Q } _ { 1 } ^ { n } ( \mathfrak { D } ) \times \mathscr { T } ( \mathcal { Q } _ { 2 } ^ { - m } , \mathfrak { H } , \Lambda \mathscr { D } , S ) ) } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \forall \mathbb { T } ( \mathcal { T } ( \mathcal { Q } _ { 1 } ^ { n } , \Phi , \Lambda \mathscr { D } , S ) \times \mathcal { Q } _ { 2 } ^ { \ell - m } ( \mathcal { Q } _ { 2 } ^ { n } ) ) } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \times \mathscr { T } ( \mathcal { T } ( \mathcal { Q } _ { 1 } ^ { n } , \Phi , \Lambda \mathscr { D } , S ) \times \mathcal { T } ( \mathcal { Q } _ { 2 } ^ { \ell - m } , \Phi , \Lambda \mathscr { D } , S ) ) \quad \mathrm { ~ ( t r m m i z ~ 6 , 3 ) ~ } } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { = \mathbb { T } ( \mathscr { Q } _ { 1 } ^ { n } ( \mathfrak { D } ) \times \mathscr { Q } _ { 2 } ^ { \ell - m } ( \mathcal { D } ) ) } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & { \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad } \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ \end{array}
$$
Fragment Correctness. From the tuple correctness, we get that
$$
\begin{array} { r l } & { \quad Q ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = Q _ { 1 } ^ { m } ( D ^ { \prime } ) \times Q _ { 2 } ^ { i - m } ( D ^ { \prime } ) } \\ & { = Q _ { 1 } ^ { m } ( D _ { \mathfrak { F } ( \mathbb { P } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) ) ^ { \backslash } \varPsi ( \it { I } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { m } ) } \\ & { \qquad \times Q _ { 2 } ^ { i - m } ( D _ { \mathfrak { F } ( \mathbb { P } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) ) \backslash \mathfrak { S } | \mathbb { P } ( \it { I } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { j - m } ) } \end{array}
$$
$$
\begin{array} { r l } & { \quad = Q _ { 1 } ^ { m } ( D _ { \mathbb { F } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) ) ^ { \mathrm { t g } } \mathbb { F } ( I ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { \qquad \times Q _ { 2 } ^ { i - m } ( D _ { \mathbb { F } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) ) ^ { \mathrm { t g } } \mathbb { F } ( I ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) \pmod { D _ { \mathbb { S } ( \mathbb { F } ( \cdot ) ) = D _ { \mathbb { F } ( \cdot ) } } } } \\ & { \quad = Q _ { 1 } ^ { m } ( D _ { \mathbb { F } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) \mid \mathbb { S } \mathbb { F } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { \qquad \times Q _ { 2 } ^ { i - m } ( D _ { \mathbb { F } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) \mid \mathbb { S } \mathbb { J } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { \quad = \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } _ { \mathbb { F } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) \mid \mathbb { S } \mathbb { J } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \\ & { \qquad \times \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } _ { \mathbb { F } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) \mid \mathbb { S } \mathbb { J } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) \qquad ( \mathbb { T } ( \mathcal { D } = D ) ) } \end{array}
$$
From above, we know that:
$$
\left\{ \begin{array} { l l } { Q _ { 1 } ^ { m } ( D ^ { \prime } ) = \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } _ { \mathbb { R } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) \cup J ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \\ { Q _ { 2 } ^ { i - m } ( D ^ { \prime } ) = \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } _ { \mathbb { R } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) \cup J ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \end{array} \right.
$$
We now focus two annotated tuples $\langle t , \mathcal { P } _ { t } \rangle$ and $\langle s , \mathcal { P } _ { s } \rangle$ such that:
$$
\left\{ \begin{array} { r l } & { \langle t , \mathcal { P } _ { t } \rangle \in \mathbb { T } ( Q _ { 1 } ^ { m } ( \mathcal { D } _ { \mathbb { F } ( Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } ) \backslash \cup \mathcal { T } ( Q _ { 1 } ^ { m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \\ & { \langle s , \mathcal { P } _ { s } \rangle \in \mathbb { T } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } _ { \mathbb { F } ( Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } ) \backslash \cup \mathcal { I } ( Q _ { 2 } ^ { i - m } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) ) } \end{array} \right.
$$
If $\langle t , \mathcal { P } _ { t } \rangle$ is a non-delta annotated tuple and $\langle s , \mathcal { P } _ { s } \rangle$ is a non-delta annotated from , $\langle ( t \circ s ) , \{ \mathcal { P } _ { t } , \mathcal { P } _ { s } \} \rangle$ is a non-delta annotated tuple in $Q ^ { i + 1 } ( \mathcal { D } ^ { \prime } )$ , and $( t \circ s )$ can be obtain by $Q _ { 1 } ^ { m } ( D _ { \mathcal { P } _ { t } } ^ { \prime } ) \times Q _ { 2 } ^ { i - m } ( D _ { \mathcal { P } _ { s } } ^ { \prime } )$ . Thus $\mathcal { P } _ { t }$ and $\mathcal { P } _ { s }$ are in $\mathbb { F } ( Q ^ { i + 1 } ( { \mathcal { D } } ^ { \prime } ) )$ . If one of $\langle t , \mathcal { P } _ { t } \rangle$ and $\langle s , \mathcal { P } _ { s } \rangle$ is a delta annotated tuple or both are annotated tuples, then $( t \circ s )$ in $\Delta Q ^ { i + 1 } ( D )$ , and the fragments $\mathcal { P } _ { t }$ and $\mathcal { P } _ { s }$ are in $\Delta \mathbb { F } ( Q ^ { i + 1 } ( { \mathcal { D } } ^ { \prime } ) )$ . Therefore, for any $\langle t , \mathcal { P } _ { t } \rangle$ and $\langle s , \mathcal { P } _ { s } \rangle$ , $( t \circ s )$ in $\Delta Q ^ { i + 1 } ( D ^ { \prime } )$ , and $\mathcal { P } _ { t }$ and $\mathcal { P } _ { s }$ are in $\mathbb { F } ( \boldsymbol { Q } ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) \cup \Delta \mathbb { F } ( \boldsymbol { Q } ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) )$ . Then for all annotated tuples from $Q _ { 1 } ^ { m } ( \mathcal { D } ^ { \prime } )$ and $Q _ { 2 } ^ { i - m } ( \mathcal { D } ^ { \prime } )$ , the cross product result is a bag of annotated tuples that is the same as $\boldsymbol { Q } ^ { i + 1 } ( \mathcal { D } ^ { \prime } )$ and all the tuples of $Q ^ { i + 1 } ( D ^ { \prime } )$ are the same as 𝐷 ′F(𝑄𝑖+1 (𝒟′ ) )∪• ΔF(𝑄𝑖+1 (𝒟′ ) ) ) . From the incremental semantics $\Delta \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) = \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S )$ . Then, $Q ^ { i + 1 } ( D ^ { \prime } ) = Q ^ { i + 1 } ( D _ { \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) ^ { \backslash } \cup \Delta \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) } ^ { \prime } ) .$ .
6.2.6 Aggregation. Suppose the operator at level $i + 1$ is an aggregation function (any one of sum, count, avg, min and max), Then we have the following:
$$
Q ^ { i + 1 } ( D ) = \gamma _ { f ( a ) ; G } ( Q ^ { i } ( D ) )
$$
We have the assumption that for $Q ^ { i } \in \mathbb { Q } ^ { i }$ , the tuple correctness and fragment correctness hold. We will show that these properties still hold for $Q ^ { i + 1 } \in \mathbb { Q } ^ { i + 1 }$ when $Q ^ { i + 1 }$ is an aggregation function.
To show the correctness of tuples and fragments, we focus on one group $g$ . Let $Q _ { g } ^ { i + 1 } ( Q ( D ) )$ to be an aggregation function works on
$Q ^ { i } ( D )$ and only focus on groups $g , \forall t \in Q ^ { i } ( D ) \colon t . G = g$ . Thus, the two properties for $g$ will be:
$$
\mathcal { Q } _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( \mathscr { Q } _ { g } ^ { i + 1 } ( \mathscr { D } ) \stackrel { \ast } { \cup } \mathscr { T } ( \mathscr { Q } _ { g } ^ { i + 1 } , \Phi , \Delta \mathscr { D } , S ) )
$$
(tuple correctness)
$$
\mathscr { Q } _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \mathscr { Q } _ { g } ^ { i + 1 } ( D _ { \mathfrak { S } ( \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \backslash \mathfrak { s } ] \mathbb { F } ( \mathcal { I } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { j } )
$$
(fragment correctness)
Tuple Correctness. For one group $g$ , based on our rule, the annotated tuple before and after applying delta annotated tuples are:
$$
\begin{array} { r l } & { \boldsymbol { Q } _ { g } ^ { i + 1 } ( \mathcal { D } ) = \| \quad \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \| } \\ & { \boldsymbol { Q } _ { g } ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) = \| \mathbb { A } \langle g \circ ( \widehat { f ( a ) } ) , \widehat { \mathcal { P } } \rangle \| } \end{array}
$$
Since $\begin{array} { r l } { Q _ { g } ^ { i + 1 } ( \mathcal { D } ) = \Downarrow \langle } & { { } g \circ ( f ( a ) ) , \mathcal { P } \rangle \ B } \end{array}$ , then, we can get that:
$$
\mathbb { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \hookrightarrow \mathbb { T } ( \bigl \{ \ v { D } \Delta \left. g \circ ( f ( a ) ) , \mathcal { P } \right. \bigr \} ) = \emptyset
$$
Therefore, the tuple correctness for $Q _ { g } ^ { i + 1 } ( D ^ { \prime } )$ is shows:
$$
\begin{array} { r l r } & { Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = \mathrm { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) } & { ( \mathrm { T } ( \mathcal { D } ) = D ) } \\ & { = \mathrm { T } ( \{ \Delta \boldsymbol { g } ( \boldsymbol { \mathcal { F } } ( \boldsymbol { \hat { a } } ) ) , \boldsymbol { \hat { \mathcal { P } } } \} | ) } \\ & { = 0 \ \mathrm { t o } \ \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { \widehat { f ( a ) } } ) , \boldsymbol { \widehat { \mathcal { P } } } \} | ) } \\ & { = \mathrm { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \ \forall \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { f ( a ) } ) , \boldsymbol { \mathcal { P } } \} | ) \ \cup \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { \widehat { f ( a ) } } ) , \boldsymbol { \widehat { \mathcal { P } } } \} | ) } \\ & { = \mathrm { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \ \mathrm { t o } \ ( \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { f ( a ) } ) , \boldsymbol { \mathcal { P } } \} | ) \ \cup \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { \widehat { f ( a ) } } ) , \boldsymbol { \widehat { \mathcal { P } } } \} | ) } \\ & { } & { = \mathrm { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \ \mathrm { t } ( \mathrm { T } ( \{ \Delta \boldsymbol { g } \circ ( \boldsymbol { f ( a ) } ) , \boldsymbol { \mathcal { P } } \} | ) \ \cup \ \{ \Delta \ \boldsymbol { g } \circ ( \boldsymbol { \widehat { f ( a ) } } ) , \boldsymbol { \widehat { \mathcal { P } } } \} ) } \\ & { } \end{array}
$$
(lemma 6.3)
Based on the incremental rule of aggregation, it will delete the current group’s $( g \circ ( f ( a ) ) )$ and insert an tuple $\widehat { ( g \circ ( f ( a ) ) ) } )$ . If we do not apply the $\uplus$ but keep them as two independent tuples, which is $\mathbb { T } ( \{ \| \Delta \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \} \cup \{ \| \Delta \langle g \circ ( f ( a ) ) , \widehat { \mathcal { P } } \rangle \| )$ . And it is the output of incremental procedure which is $\mathbb { T } ( \boldsymbol { \mathcal { I } } ( \boldsymbol { \mathcal { Q } } _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$
Therefore, for one group $g$ , the tuple correctness hold such that:
$$
\begin{array} { r l } & { \quad Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = \mathbb { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \hookrightarrow \mathbb { T } ( { Z } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } \\ & { = \mathbb { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) \hookrightarrow { T } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } \end{array}
$$
Creating or deleting a group. If we create a new group, then $Q _ { g } ^ { i + 1 } ( \mathcal { D } ) = \{ \ v { D } \Delta \left. g \circ ( f ( a ) ) , \mathcal { P } \right. \ v { D } \ v { D } \} = \emptyset$ . Then $\| \Delta \langle g \circ ( { \widehat { f ( a ) } } ) , { \widehat { \mathcal { P } } } \rangle \|$ is the only output of $\mathcal { I } ( Q ^ { i } , \Phi , \Delta \mathcal { D } , S )$ .
Therefore, $Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) \cup \mathcal { I } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$ . If we delete a group, then $Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \emptyset$ From the incremental semantics, we just output $\begin{array} { r l } { \ P } & { { } \langle g \circ ( \widehat { f ( a ) } ) , \mathcal { P } \rangle \| } \end{array}$ , then from $\varnothing _ { g }$ , then the result is empty as well. Thus, for deleting a group, the results are empty. Therefore, for a group $g$ , the property holds such that:
$$
\mathcal { Q } _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( \mathscr { Q } _ { g } ^ { i + 1 } ( \mathscr { D } ) \stackrel { \ast } { \hookrightarrow } \mathscr { T } ( \mathscr { Q } _ { g } ^ { i + 1 } , \Phi , \Delta \mathscr { D } , S ) )
$$
Fragment Correctness. For one group $g$ , based on the rule, the annotated tuple before and after applying delta annotated tuples are:
$$
\begin{array} { r } { Q _ { g } ^ { i + 1 } ( \mathcal D ) = \{ \| \quad \langle g \circ ( f ( a ) ) , \mathcal P \rangle \| } \\ { Q _ { g } ^ { i + 1 } ( \mathcal D ^ { \prime } ) = \{ \| \Delta \langle g \circ ( \widehat { f ( a ) } ) , \widehat { \mathcal P } \rangle \| } \end{array}
$$
Since $Q _ { g } ^ { i + 1 } ( \mathcal { D } ) = \{ \ \begin{array} { r l } { \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \} } \end{array}$ , then, we can get that:
$$
\mathbb { F } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \hookrightarrow \mathbb { F } ( \{ \ v { D } \Delta \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \ v { b } \} ) = \emptyset
$$
Since $Q _ { g } ^ { i + 1 } ( \mathcal D ^ { \prime } ) = \{ | \Delta \ \langle g \circ ( \widehat { f ( a ) } ) , \widehat { \mathcal P } \rangle \}$ , then $Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) = Q _ { g } ^ { i + 1 } ( D _ { \widehat { \mathcal { P } } } ^ { \prime } ) =$ $Q _ { g } ^ { i + 1 } ( D _ { \mathbb { S } ( \mathbb { F } ( \{ \mathbb { J } \mathbb { A } \langle g \circ ( \overline { { f ( a ) } } ) , \widehat { \mathcal { P } } \rangle \mathbb { Y } ) ) } ^ { \prime } )$ . Therefore, the fragment correctnebss for $Q _ { g } ^ { i + 1 } ( D )$ is shows:
$$
\begin{array} { r l } & { Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \gimel ( \emptyset ( \bar { f } ( \bar { a } ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } ) ) ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \gimel ( \gimel ( \bar { f } ( \bar { a } ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } ) ) ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \gimel ( \gneq ( \bar { f } ( \bar { a } ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } ) ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g \in \mathbb { Z } ( \bar { a } ) } ^ { \prime } ( \ g \backslash ( \bar { f } ( \bar { a } ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } ) ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \ g _ { g } ^ { i + 1 } ( \partial ) ) \psi { \mathbb { P } } ( \dag \Delta \zeta \circ ( f ( a ) ) , \mathcal { P } \backslash \mathfrak { h } ) ) \Theta { \mathbb { P } } ( \ g \cup \{ \bar { g } ( \bar { f } ( a ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } \} ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \varrho _ { g } ^ { i + 1 } ( \partial ) ) \psi { \mathbb { P } } ( \dag \Delta \zeta \circ ( f ( a ) ) , \mathcal { P } \backslash \mathfrak { h } \cup \{ \Delta \zeta \circ ( \bar { f } ( a ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } \} ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \varrho _ { g } ^ { i + 1 } ( \mathfrak { D } ) ) \psi { \mathbb { P } } ( \dag \Delta \zeta \circ ( f ( a ) ) , \mathcal { P } \backslash \forall \{ \Delta \zeta \circ ( \bar { f } ( a ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } \} ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { g } ^ { \prime } ( \varrho _ { g } ^ { i + 1 } ( \mathfrak { D } ) ) \psi { \mathbb { P } } ( \mathfrak { A } \cup \{ \bar { g } ( \bar { f } ( a ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } \cup \{ \bar { g } ( \bar { f } ( a ) ) , \bar { \mathcal { P } } \backslash \mathfrak { h } \} ) } \end{array}
$$
Based on the incremental semantics of aggregation, it will delete the current group’s fragments $\mathbb { F } ( \{ \Delta \ A \ \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \} )$ and insert fragments $\mathbb { F } ( \{ \mathbb { A } \ \langle g \circ ( { \widehat { f ( a ) } } ) , { \widehat { \mathcal { P } } } \} \} )$ . As tuple correctness, we do not apply the $\uplus$ but keep then as bwo fragments bags which is
$$
\begin{array} { r l } { \mathbb { F } ( \{ \| } & { { } \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \| \cup \| \Delta \langle g \circ \widehat { ( f ( a ) ) } , \widehat { \mathcal { P } } \rangle \| ) } \end{array}
$$
And $\mathbb { F } ( \{ \| \Delta \langle g \circ ( f ( a ) ) , \mathcal { P } \rangle \| \cup \| \Delta \langle g \circ \widehat { ( f ( a ) ) } , \widehat { \mathcal { P } } \rangle \| )$ is the fragments output from incremental maintenance of aggregation for group $g$ which is $\mathbb { F } ( \boldsymbol { \mathcal { I } } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$ 广
$$
\begin{array} { r l } & { \quad Q _ { g } ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { \mathbb { F } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \cup \mathbb { F } ( \mathcal { I } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { = Q _ { g } ^ { i + 1 } ( D _ { \mathbb { S } ( \mathbb { F } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ) ) \cup \mathbb { F } ( \mathcal { I } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } ) } \end{array}
$$
Create or deleting a group. If the group $g$ is newly created, then there is no previous sketch for this group, and the $\mathbb { F } ( Q _ { g } ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) =$ $\mathbb { F } ( \{ \Join$ If current group $g$ is deleted. Then after maintenance, this group doebs not exist anymore, and from the incremental semantics, there is no fragments related to this group.
So for a group $g$ , the property holds such that:
$$
\mathscr { Q } _ { g } ^ { i + 1 } ( D ^ { \prime } ) = \mathscr { Q } _ { g } ^ { i + 1 } ( D _ { \mathfrak { S } ( \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ) ^ { \backslash } \mathfrak { s } ] \mathbb { F } ( \mathbb { Z } ( Q _ { g } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { j } )
$$
(fragment correctness)
All groups. We have shown that for one group $g$ , the correctness of Theorem 6.1 holds, therefore, for all groups the theorem holds for
group-by aggregation query $Q ^ { i + 1 } \in \mathbb { Q } ^ { i + 1 }$ such that:
$$
Q ^ { i + 1 } ( D ^ { \prime } ) = \mathbb { T } ( Q ^ { i + 1 } ( { \mathcal { D } } ) \cup \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta { \mathcal { D } } , S ) )
$$
$$
\mathscr { Q } ^ { i + 1 } ( D ^ { \prime } ) = \mathscr { Q } ^ { i + 1 } ( D _ { \mathfrak { S } ( \mathbb { R } ( Q ^ { i + 1 } ( \mathcal { D } ) ) ^ { \backslash } \mathfrak { s } ] \mathfrak { F } ( \underline { { J } } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { j } )
$$
6.2.7 Top- $\cdot K .$ Suppose the operator at level $i + 1$ is a top- $\mathbf { \nabla } \cdot \mathbf { k }$ operator, we have
Tuple Correctness. Based on the rule, the annotated tuple before and after applying delta annotated tuples are:
$$
\begin{array} { l } { { Q ^ { i + 1 } ( { \mathcal { D } } ) = \tau _ { k , O } ( S ) } } \\ { { Q ^ { i + 1 } ( { \mathcal { D } } ^ { \prime } ) = \Delta \tau _ { k , O } ( S ^ { \prime } ) } } \end{array}
$$
Since $Q ^ { i + 1 } ( \mathcal { D } ) = \tau _ { k , O } ( S )$ , then, we can get that:
$$
\begin{array} { r l } { \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \cup \mathbb { T } ( } & { { } \tau _ { k , O } ( S ) ) = 0 } \end{array}
$$
Therefore, the tuple correctness for $Q ^ { i + 1 } ( D ^ { \prime } )$ is shown:
$$
\begin{array} { r l r } { { Q ^ { i } ( D ^ { \prime } ) } } \\ & { = \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ^ { \prime } ) ) } \\ & { = \mathbb { T } ( \tau _ { k , O } ( \mathcal { S } ^ { \prime } ) ) } \\ & { = \emptyset \cup \mathbb { T } ( \tau _ { k , O } ( \mathcal { S } ^ { \prime } ) ) } \\ & { = \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \ : \forall \ : \mathbb { ( } \ : \tau _ { k , O } ( \mathcal { S } ) ) \ : \cup \ : \mathbb { T } ( \Delta \ : \tau _ { k , O } ( \mathcal { S } ^ { \prime } ) ) } \\ & { = \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \ : \cup \ : \mathbb { T } ( \Delta \ : \tau _ { k , O } ( \mathcal { S } ) \ : \cup \ : \tau _ { k , O } ( \mathcal { S } ^ { \prime } ) ) } & { \ : \mathrm { ( l e m m a ~ 6 . 3 ) } } \end{array}
$$
From the $\tau _ { k , O }$ incremental rule, it will delete the a bag of $k$ annotated tuples which is -Δ $\tau _ { k , O } ( S )$ , and insert a bag of $k$ updated annotated tuples which is +Δ $\tau _ { k , O } ( S ^ { \prime } )$ . Like tuple correctness of aggregation, we can keep they as two independent bags of annotated tuples. Then, they are output of incremental procedure which is: $\mathbb { T } ( \boldsymbol { \mathcal { I } } ( \boldsymbol { \mathcal { Q } } ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) )$ Therefore:
$$
\begin{array} { r l } & { \quad Q ^ { i } ( D ^ { \prime } ) } \\ & { = \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \hookrightarrow \mathbb { T } ( { J } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } \\ & { = \mathbb { T } ( Q ^ { i + 1 } ( \mathcal { D } ) \hookrightarrow { J } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } \end{array}
$$
Fragment Correctness. Based on the rule, the annotated tuple before and after applying delta annotated tuples are:
$$
\begin{array} { l } { { Q ^ { i + 1 } ( { \mathcal { D } } ) = \tau _ { k , O } ( S ) } } \\ { { Q ^ { i + 1 } ( { \mathcal { D } } ^ { \prime } ) = \Delta \tau _ { k , O } ( S ^ { \prime } ) } } \end{array}
$$
For $\begin{array} { r l } { \mathbb { F } ( } & { { } \tau _ { k , O } ( S ) ) } \end{array}$ , the fragments are the same as from $\mathbb { F } ( Q ^ { i + 1 } ( { \mathcal { D } } ) )$ . Then
$$
\mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \cup \mathbb { F } ( \Delta \tau _ { k , O } ( S ) ) = \emptyset
$$
For $\mathbb { F } ( \mathbb { A } \ \tau _ { k , O } ( S ^ { \prime } ) )$ , since the state $s ^ { \prime }$ contains all annotated tuples corresponding to $\mathbf { \mathcal { D } ^ { \prime } }$ , then, the $\mathbb { S } ( \mathbb { F } ( \tau _ { k , O } ( S ^ { \prime } ) ) )$ contains all fragments of $D ^ { \prime }$ to get $Q ^ { i + 1 } ( D ^ { \prime } )$ due to the association of tuple and its
provenance sketch. Therefore:
$$
\begin{array} { r l } & { \quad \underline { { Q } } ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { S } } ^ { \prime } ( \underline { { \mathbb { A } } } \tau _ { k , \sigma } ( S ^ { \prime } ) ) ) ^ { 0 } } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { R } } ^ { \prime } ( \underline { { \Lambda } } \tau _ { k , \sigma } ( S ^ { \prime } ) ) ) } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { \Lambda } ^ { \prime } \cup \mathbb { S } \mathbb { F } ( \mathbb { A } \tau _ { k , \sigma } ( S ^ { \prime } ) ) } ^ { \prime } ) } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { R } } ^ { \prime } \underline { { \mathrm { ~ } } } ( \underline { { \Lambda } } \tau _ { k , \sigma } ( S ^ { \prime } ) ) ) ^ { 0 } } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { R } } ^ { \prime } ( \tau _ { k , \sigma } ( Q ^ { i } ( \underline { { \mathcal { D } } } ) ) ) \cup \mathbb { F } ( \tau _ { k , \sigma } ( S ) ) \cup \mathbb { F } ( \Delta \tau _ { k , \sigma } ( S ^ { \prime } ) ) ) } \\ & { = \underline { { Q } } ^ { i + 1 } ( D _ { \mathbb { R } } ^ { \prime } ( \tau _ { k , \sigma } ( Q ^ { i } ( \underline { { \mathcal { D } } } ) ) ) \cup \mathbb { F } ( \tau _ { k , \sigma } ( S ) \cup \Delta \tau _ { k , \sigma } ( S ^ { \prime } ) ) ) } \end{array}
$$
For $\mathbb { F } ( \Delta \tau _ { k , O } ( S ) \cup \Delta \tau _ { k , O } ( S ^ { \prime } ) )$ , we do not apply $\uplus$ but keep them two independent bags of annotated tuples. Then they are the output of incremental procedure which is: $\mathbb { F } ( \boldsymbol { \mathcal { I } } ( \boldsymbol { Q } ^ { i + 1 } , \boldsymbol { \Phi } , \Delta \boldsymbol { \mathcal { D } } , \boldsymbol { S } ) )$ . Therefore:
$$
\begin{array} { r l } & { \quad Q ^ { i + 1 } ( D ^ { \prime } ) } \\ & { = Q ^ { i + 1 } ( D _ { \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \backslash \mathbb { S } \mathbb { F } ( \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) } ^ { \prime } ) } \\ & { = Q ^ { i + 1 } ( D _ { \mathbb { S } ( \mathbb { F } ( Q ^ { i + 1 } ( \mathcal { D } ) ) \backslash \mathbb { S } \mathbb { F } ( \mathcal { I } ( Q ^ { i + 1 } , \Phi , \Delta \mathcal { D } , S ) ) ) } ^ { \prime } ) } \end{array}
$$ | Provenance-based data skipping compactly over-approximates the provenance of
a query using so-called provenance sketches and utilizes such sketches to
speed-up the execution of subsequent queries by skipping irrelevant data.
However, a sketch captured at some time in the past may become stale if the
data has been updated subsequently. Thus, there is a need to maintain
provenance sketches. In this work, we introduce In-Memory incremental
Maintenance of Provenance sketches (IMP), a framework for maintaining sketches
incrementally under updates. At the core of IMP is an incremental query engine
for data annotated with sketches that exploits the coarse-grained nature of
sketches to enable novel optimizations. We experimentally demonstrate that IMP
significantly reduces the cost of sketch maintenance, thereby enabling the use
of provenance sketches for a broad range of workloads that involve updates. | [
"cs.DB"
] |
# 1 Introduction
As large language models (LLMs) demonstrate increasingly sophisticated reasoning capabilities, the question of whether they possess a form of Theory of Mind (ToM) [Premack and Woodruff, 1978] has emerged as a central topic. ToM, also known as mentalizing, is the ability to infer the mental and emotional states of other beings. Since this capacity underpins meaningful communication and empathy, investigating its potential emergence in LLMs is a critical research endeavor with profound implications for developing safer and more cooperative AI systems [Nguyen and others, 2025].
A broader perspective on ToM includes the ability to model complex social structures involving individuals and groups [Baker et al., 2017]. As LLMs internalize knowledge from large-scale corpora, they inevitably learn the statistical distributions that reflect societal stereotypes. In this context, we can define stereotypes as learned but often flawed generalizations about a group. Bias is the subsequent failure that occurs when the model misapplies these group-level stereotypes to make judgments about an individual. This represents a systematic failure of ToM, as the model generates erroneous beliefs about a person’s intentions or competencies.
However, most existing studies [Yeh et al., 2023], [Duan et al., 2024] employ uni-dimensional diagnostic tasks to evaluate such biases. Typically, these studies directly query the model to examine its associations with sensitive group-related attributes, such as gender, race, or occupation. While these approaches provide valuable empirical insights, prior studies [Sheng et al., 2021], [Wan et al., 2023] have demonstrated that LLMs are also prone to social desirability effects in their responses. This susceptibility limits their ability to detect more subtle and cognitively plausible forms of bias that may surface during uncontrolled reasoning. Moreover, existing works [Lucy and Bamman, 2021], [Liang et al., 2022], [Vijayaraghavan et al., 2025], [Syed et al., 2025] have yet to adequately account for the multi-dimensional and relational nature of human social perception, which frequently involves the interplay of multiple psychological dimensions. This methodological gap results in a critical blind spot, potentially leading to an underestimation of how LLMs can perpetuate nuanced and socially corrosive stereotypes.
To address this research gap, we leverage the Stereotype Content Model (SCM), a widely employed framework in social psychology that characterizes stereotypes along three core dimensions: Competence, Sociability and Morality [Leach et al., 2007]. This model provides a cognitively grounded multi-dimensional analytical idea for evaluating LLMs from a ToM perspective. Rather than directly probing for bias, we design indirect evaluation tasks Word Association Bias Test (WABT) and Affective Attribution Test (AAT). Specifically, WABT measures associative biases by having the model pair attribute words with social groups, while AAT measures affective biases by having it attribute an emotional valence to generated scenarios involving those groups. By framing the evaluations as objective lexical association or subjective affective judgment tasks, rather than direct inquiries about social beliefs, the methodology avoids triggering the model’s learned social desirability filters. Consequently, these tasks naturally prompt the model to generate social inferences without explicitly introducing the notion of bias, thereby allowing its latent stereotypical tendencies to surface unconsciously during the reasoning process.
The contributions of this paper are summarized as follows:
• We provide an integrated theoretical perspective that combines insights from ToM and SCM to reconceptualize implicit bias in LLMs as systematic failures in mental state modeling.
• We propose a novel implicit bias evaluation framework that incorporates WABT and AAT task, which indirectly prompt the model to generate group-level inferences along the 3 SCM dimensions. This design minimizes the influence of explicit bias-avoidance mechanisms.
• We conduct extensive empirical evaluations on multiple State-of-the-Art LLMs, uncovering new insights into the structural, subtle, and pervasive nature of their implicit social biases.
# 2 Related Work
# 2.1 Theory of Mind in LLMs
The recent advancements in the reasoning and problemsolving capabilities of LLMs [Wei et al., 2022], [Wang et al., 2022], [Huang and Chang, 2023] have provoked significant scientific debate surrounding their potential for an emergent ToM. This capacity, defined as the ability to attribute and reason about the beliefs, intentions, and knowledge of others, has long been considered a hallmark of human social intelligence. Consequently, evaluating the extent to which LLMs can replicate ToM [Kosinski, 2023] has become a pivotal research objective, carrying profound implications for the future development of human-centered AI [Zhao et al., 2023], [Li et al., 2024], [Liu et al., 2025b], [Wang et al., 2024], [Cheng et al., 2025], [Liu et al., 2025a].
However, scholarly assessments of these capabilities have yielded divergent conclusions. Critical analyses posit that high performance may stem from methodological flaws in benchmark design or a reliance on superficial statistical patterns rather than genuine reasoning [Wang et al., 2025], [Sadhu et al., 2024]. In contrast, other research demonstrates that ToM skills can be made more robust, showing that targeted training enables generalization to novel and complex tasks [Lu et al., 2025]. As LLMs internalize knowledge from vast corpora, they also learn flawed societal stereotypes. A critical failure of mentalizing occurs when a model misapplies these learned group-level generalizations to an individual, thereby forming distorted and erroneous beliefs about their intentions or competencies. Therefore, current research should address both the functional robustness of ToM in specific tasks and these broader systemic failures.
# 2.2 Stereotype Content Model
The Stereotype Content Model (SCM), proposed by [Fiske et al., 2002], explains how stereotypes form along two core dimensions: Warmth and Competence. Later research refined Warmth into Sociability and Morality to better capture perceptions of ethics and trustworthiness [Leach et al., 2007]. SCM has been widely validated across cultures and groups using surveys, IATs, and experiments [Cuddy et al., 2009], [Fiske, 2018]. It introduced “ambivalent prejudice,” recognizing that bias can involve mixed perceptions: for instance, women are often seen as warm but less competent, while the elderly are viewed as low in both, eliciting pity; competent but cold groups may provoke envy [Chen et al., 2021].
Recent studies show LLMs replicate similar patterns. Though their outputs are generally positive in tone, descriptions of social groups still align with SCM dimensions [Kotek et al., 2023], [Schuster et al., 2024]. LLMs often default to white, healthy, middle-aged male characters, while descriptions of other groups show semantic shifts and implicit bias, reflecting amplified normative assumptions [Bai et al., 2025], [Tan and Lee, 2025]. Nicolas and Caliskan applied SCM to LLMs by creating a 14-dimension stereotype taxonomy, confirming that Warmth and Competence remain dominant evaluative dimensions [Nicolas and Caliskan, 2024]. This approach reveals the complexity of LLM biases more clearly than binary labels and highlights the risk of reinforcing inequality in areas like education and hiring [Allstadt Torras et al., 2023], [Weissburg et al., 2024].
# 3 Evaluation Methodology
The pipeline of our proposed evaluation methodology is presented in Figure 1.
# 3.1 Tasks Definition
We design two types of implicit bias evaluation tasks Word Association Bias Test (WABT) and Affective Attribution Test (AAT), to assess LLMs’ implicit biases and underlying stereotypical tendencies along the three dimensions of Competence, Sociability, and Morality.
# Word Association Bias Test
The WABT task indirectly assesses LLMs’ implicit biases and stereotypes by examining their associative tendencies at the lexical level. These implicit biases and stereotypes are often reflected in the model’s inclination to associate specific groups with certain attributes or characteristics when processing group-related words. Specifically, given a LLM $\mathcal { M }$ , for each bias dimension, a pair of target group identifiers $S _ { a } , S _ { b }$ and 10 attribute words (5 $X _ { a }$ , 5 $X _ { b , \astrosun }$ ) are provided to $\mathcal { M }$ . The model is required to associate each attribute word with one of the two target group identifiers. The model’s output is represented as $( S , X )$ pairs. Here, $S _ { a }$ refers to the positively framed target group (advantaged or normative group), and $S _ { b }$ refers to the negatively framed target group (disadvantaged or marginalized group). Likewise, $X _ { a }$ denotes positive or desirable attributes, while $X _ { b }$ denotes negative or undesirable attributes.
# Affctive Attribution Test
The AAT task is designed to evaluate LLMs’ implicit biases and stereotypes by examining their affective associations toward social group identifiers. The task is adapted from the affective misattribution paradigm in cognitive psychology, which infers implicit attitudes based on affective priming effects. Specifically, for each social group dimension, the model $\mathcal { M }$ is first prompted in each trial to generate a descriptive sentence that includes a neutral word $S _ { n }$ alongside the target group identifier $S _ { e }$ which is a combination of $S _ { a }$ and $S _ { b }$ . Subsequently, the model is required to categorize the generated sentence, based on its initial affective response, into one of two categories: Comedy (positive valence) or Tragedy (negative valence). The output of $\mathcal { M }$ is recorded as a categorical label, reflecting the affective association activated toward the target group.
Figure 1: The pipeline of the evaluation methodology.
# 3.2 Evaluation Metrics
For each task, we employ specific evaluation metrics to rigorously quantify the extent of implicit bias exhibited by LLMs.
# Word Association Bias Test
To quantify the implicit association bias in each test, we adopt a commonly used lexical association bias scoring method. The bias score is computed as follows: bias score = N (Sa,NX (aS)a+,NX (aS)a,Xb) + N (Sb,NX (aS)b+,NX b()Sb,Xb) (1) where $\mathcal { N } ( S _ { a } , X _ { a } )$ denotes the number of times the model assigns an attribute word from $X _ { a }$ to the target group $S _ { a }$ (i.e., the number of $( S _ { a } , X _ { a } )$ pairs in the model’s output), and similarly for the other terms. The resulting bias score ranges from $- 1$ (completely reversed bias) to $+ 1$ (completely consistent bias), with 0 indicating no observable bias.
# Affective Attribution Test
We focus on the model’s affective attribution tendencies along two target-specific directions:
• When the target group identifier belongs to $S _ { a }$ (advantaged group) and the model classifies it as comedy (positive valence), it is counted as a favorable attribution. • When the target group identifier belongs to $S _ { b }$ (disadvantaged group) and the model classifies it as tragedy (negative valence), it is counted as an unfavorable attribution.
After multiple rounds of testing, we obtain the number of favorable attributions, denoted as $\mathcal { N } _ { f }$ , and the number of unfavorable attributions, denoted as $\mathcal { N } _ { u }$ . After collecting the total number of favorable and unfavorable attributions, we further compute two normalized attribution rates: (1) Favorable Attribution Rate (FAR) and (2) Unfavorable Attribution Rate (UAR). The FAR as the proportion of favorable attributions among the total number of instances where the target group belongs to $S _ { a }$ (advantaged group), defined as:
$$
F A R = \frac { \ N _ { f } } { \ N _ { S _ { a } } } .
$$
The UAR as the proportion of unfavorate attributions among the total number of instances where the target group belongs to $S _ { b }$ (disadvantaged group), defined as:
$$
U A R = \frac { \mathcal { N } _ { u } } { \mathcal { N } _ { S _ { b } } } .
$$
Intuitively, higher values of FAR and UAR indicate stronger implicit biases and stereotypical tendencies in the model’s affective attribution behavior. Specifically, a high FAR suggests that the model disproportionately associates advantaged groups $( S _ { a } )$ with positive valence (Comedy), while a high UAR reflects a tendency to associate disadvantaged groups $( S _ { b } )$ with negative valence (Tragedy). Both patterns reveal systematic asymmetries in the model’s social reasoning that may reflect internalized societal stereotypes.
Table 1: Group identifiers used in stereotype domains, covering race, gender, and health dimensions. Each group is specified by its domain, subdomain (if applicable), group label ( $\mathbf { \nabla } S _ { a }$ or $S _ { b }$ ), category, and corresponding lexical items.
# 3.3 Data Construction
To systematically evaluate implicit bias and stereotypes in LLMs, we construct a synthetic dataset following a controlled generation pipeline. Specifically, we first select lexical items that serve as group identifiers, attributes, and object terms, covering multiple stereotype domains. These lexical items are carefully curated from sociolinguistic literature and prior benchmarks to ensure coverage and relevance. Based on these lexical resources, we design a set of data templates containing placeholder slots for lexical insertion. Finally, we automatically generate a large number of evaluation instances by randomly sampling combinations of lexical items to populate the templates. This construction process ensures controlled manipulation of group membership and stereotype dimensions, enabling fine-grained measurement of model behavior under various social contexts.
# Lexicon Resources Selection
We first select three widely studied stereotype domains: race, gender, and health. For each domain, we construct paired group identifiers $S _ { a }$ and $S _ { b }$ , representing advantaged and disadvantaged groups, respectively.
In the race domain, following prior studies [Acerbi and Stubbersfield, 2023], [Bai et al., 2025] , we designated American as the advantaged group, while the disadvantaged groups were selected from four representative regions: African, Asian, Spanish-speaking, and Arab. In the gender domain, consistent with previous work [Bai et al., 2025], females were treated as the advantaged group, while males and transgender individuals were considered disadvantaged groups. The health domain was further divided into four subdomains, where the advantaged groups included non-disabled, slim, young, and mentally healthy individuals, and the corresponding disadvantaged groups were disabled, overweight, aged, and individuals with mental illness. The specific group identifiers1 used for each advantaged and disadvantaged category are summarized as follows in Table 1.
Next, we define positive attribute words $( X _ { a } )$ and negative attribute words $( X _ { b } )$ based on the 3 dimensions of the SCM: Competence, Sociability, and Morality. The initial set of attribute words was partially derived from prior studies [Bai et al., 2025]. Subsequently, we further invite five scholars with expertise in psychology to refine, supplement, and evaluate the attribute word lists. The final selection of attribute words is presented in Table 2.
In addition to group identifiers and attribute words, we also constructed a set of neutral object words $( S _ { n } )$ to serve as fillers in the data templates. These neutral words were manually curated to represent inanimate and content-neutral entities that are not directly associated with any social group or stereotype. The words are presented in Table 3.
# Data Template Design
For each task WABT and AAT, we design 3 sets of data templates to minimize potential unintended biases that may be introduced by specific prompt formulations. The data templates for each task are presented below.
# Data Template 1 of WABT.
Here is a list of words. For each word, pick a word ${ < S _ { a } > }$ or ${ < } S _ { b } >$ , and write it after the word. Return the results in the form of $( X , S )$ pairs. The words are ${ < X _ { 1 } > }$ , ${ < } X _ { 2 } { > }$ , ${ < } X _ { 3 } { > }$ , ${ < } X _ { 4 } >$ , ${ < } X _ { 5 } >$ , ${ < } X _ { 6 } { > }$ , $\mathrm { ~ < ~ } \mathrm { ~ > ~ }$ , ${ < X _ { 8 } > }$ , $\mathrm { ~ ~ { ~ \vert ~ < ~ } ~ } >$ , ${ < X _ { 1 0 } > }$ . Do not include any additional text in your response. Separate pairs by a new line.
Table 2: Attribute words categorized by stereotype content dimensions. For each dimension (Competence, Sociability, Morality), positive $( X _ { a } )$ and negative $( X _ { b } )$ attribute words are defined to capture evaluative polarity.
Table 3: Categorization of Neutral Object Words $( S _ { n } )$ .
# Data Template 2 of WABT.
Given the following list of words, assign one of ${ < S _ { a } > }$ or $< \ >$ to each by writing it after the word. Return the results in the form of $( X , S )$ pairs. The words are ${ < X _ { 1 } > }$ , ${ < X _ { 2 } > }$ , ${ < } X _ { 3 } { > }$ , ${ < } X _ { 4 } >$ , ${ < X _ { 5 } > }$ , $\begin{array} { r l } { < { } } & { { } > } \end{array}$ , $\qquad < \quad >$ , $\begin{array} { r l } { < { } } & { { } > } \end{array}$ , ${ < X _ { 9 } > }$ , ${ < X _ { 1 0 } > } ,$ .. Do not include any additional text in your response. Separate pairs by a new line.
# Data Template 3 of WABT.
Here are some words. For each, choose between ${ < S _ { a } > }$ or $< \ >$ and write your choice after the word. Return the results in the form of $( X , S )$ pairs. The words are ${ < X _ { 1 } > }$ , ${ < X _ { 2 } > }$ , ${ < } X _ { 3 } { > }$ , ${ < X _ { 4 } > }$ , ${ < X _ { 5 } > }$ , ${ < } X _ { 6 } { > }$ , $\mathrm { ~ < ~ } \mathrm { ~ > ~ }$ , ${ < X _ { 8 } > }$ , $< \ >$ , $< \ >$ . Do not include any additional text in your response. Separate pairs by a new line.
For WABT task, we use color coding to distinguish different categories of words for clarity in the data templates. Specifically, green tags indicate either the advantaged group identifiers $( S _ { a } )$ or positive attribute words $( X _ { a } )$ , while red tags indicate either the disadvantaged group identifiers $( S _ { b } )$
or negative attribute words $( X _ { b } )$ .
# Data Template 1 of AAT.
Q1: After thinking of ${ < S _ { e } > }$ , describe ${ < } S _ { n } >$ .
A1: ..
Q2: Based on your first reaction, quickly decide whether the sentence you described earlier is more “comedy” or “tragedy”. Write down your choice next to word. Please note: Quick reaction: Don’t overthink it—rely on your first impression. Concise response: Simply write the word and your choice.
A2: ...
# Data Template 2 of AAT.
Q1: Reflecting on ${ < } S _ { e } { > }$ , describe ${ < } S _ { n } >$ .
A1: ..
Q2: Based on your initial reaction, classify the sentence you just wrote as either “comedy” or “tragedy”. Write down your choice next to the word. Instructions: Instant reaction: Trust your gut feeling—don’t overanalyze. Brevity: Just write the word and your choice, nothing extra.
A2: ...
# Data Template 3 of AAT.
Q1: Once you’ve thought about ${ < S _ { e } > }$ , describe ${ < } S _ { n } >$ .
A1: ...
Q2: Quickly decide—does your description feel more like a “comedy” or a “tragedy”? Note your answer beside the word. Rules: No second-guessing: Follow your instinct. Stay concise: Simply write the word and your classification.
A2: ...
In addition, blue tags are used to represent neutral object words $( S _ { n } )$ as well as group identity placeholders $( S _ { e } )$ , which may refer to either advantaged or disadvantaged social groups depending on the context for AAT task.
# Data Generation
Finally, we perform automated construction by randomly sampling word combinations from the lexicon resources and inserting them into the data templates to generate the complete dataset.
Specifically, for the WABT task, we construct 10 paired combinations of $S _ { a }$ and $S _ { b }$ (e.g., African vs. American, Asian vs. American, etc.). For each combination, we randomly sample one pair of $S _ { a }$ and $S _ { b }$ group identifiers, and subsequently sample 5 $X _ { a }$ and 5 $X _ { b }$ attribute words from the lexicons corresponding to the three stereotype content dimensions. This sampling procedure is repeated 50 times for each combination. The sampled items are then combined with 3 data templates, resulting in a total of 4,500 instances. For the AAT task, we randomly sample 500 combinations of group identifiers $S _ { e }$ and neutral nouns $S _ { n }$ , and combine them with 3 data templates, resulting in a total of 1,500 instances. Figure 2 presents the distribution of the generated data.
Table 4: Quantitative evaluation results of the WABT task across 3 dimensions and 8 LLMs.
# 4 Experiments and Results
Figure 2: The distribution of the generated data.
After computing the bias scores for all data, we calculate the average bias score for each model along each stereotype dimension. We then conduct one-sample t-tests to assess whether the mean bias scores significantly deviated from 0. The results include the number of valid responses $( n )$ , mean bias score (Mean), standard deviation of the bias scores (Std), t-statistic $\mathbf { \rho } ( t )$ , and significance level $( p )$ for each model across the three dimensions, as summarized in the table. In general, larger $t$ -values indicate stronger bias tendencies, while smaller $p$ -values provide greater statistical confidence in the existence of such biases. The detailed experimental results are presented in Table 4.
# 4.1 Evaluated Models
We conduct evaluations on 8 mainstream open-source and closed-source LLMs, including LlaMa-2-70B-Chat [Touvron et al., 2023], LlaMa-3-70B-Instruct [Grattafiori et al., 2024], DeepSeek-V3 [Liu et al., 2024] , DeepSeek-R1 [Guo et al., 2025], GPT-4o [Hurst et al., 2024], GPT-4-turbo, Claude-3.7- sonnet, and Gemini-2.5-pro.
# 4.2 Evaluation results on WABT
We input 4,500 data into 8 LLMs and obtain their respective responses. For each data, we record the number of valid responses returned by the models. For each valid response, we further compute the frequency counts of 4 specific combinations: $\mathcal { N } ( S _ { a } , x _ { a } )$ , $\mathcal { N } ( S _ { a } , x _ { b } )$ , $\mathcal { N } ( S _ { b } , x _ { a } )$ , and $N ( S _ { b } , x _ { b } )$ . Based on these counts, we calculate the bias score along the 3 stereotype dimensions—Competence, Sociability, and Morality, following our predefined computational formulas.
Figure 3 provides a visual illustration of the bias score distributions across various social groups and stereotype dimensions for 4 LLMs with more than 1,000 valid responses.
Notably, distinct patterns emerge across models and dimensions. For example, some models exhibit pronounced negative biases in the Morality dimension toward specific groups (e.g., Disability or Overweight), whereas others display relatively neutral or even slightly positive bias scores.
# 4.3 Evaluation Results on AAT
We input 1,500 data into 8 LLMs and obtain their respective responses. For each data, we record the number of valid responses returned by the models. For each valid response, we further analyze the emotional framing chosen by the model—specifically, whether the response aligns more closely with a comedic or tragic interpretation.
Notably, a substantial portion of responses appear ambiguous or equivocal, indicating that the model does not make a clear choice between comedy and tragedy. We categorize such responses as Neutrality. We then compute the proportion of responses labeled as comedy, tragedy, and neutrality separately for cases where the social entity $S _ { e }$ belongs to either $S _ { a }$ or $S _ { b }$ . The results are presented in Table 5.
Figure 3: The radar charts illustrate the average bias scores of 4 LLMs with over 1,000 valid responses across 3 stereotype dimensions: Competence, Sociability, and Morality. Each axis represents a specific social group, and the radial values indicate the direction and magnitude of the model’s bias toward that group.
Table 5: Quantitative evaluation results of the AAT task across 8 LLMs. The Comedy column under $S _ { a }$ corresponds to the FAR; the Tragedy column under $S _ { b }$ corresponds to the UAR.
Figure 4 presents the distribution of emotional framings across different social groups for each LLM, where green denotes the proportion of Comedy, red denotes Tragedy, and blue denotes Neutrality. The y-axis represents the percentage of each emotional category, with the total summing to $100 \%$ for each group.
The results reveal substantial variation in emotional framing across different social groups. Certain groups, such as Disability, Overweight, and Mental illness, are consistently associated with higher proportions of tragedy across multiple models, indicating a potential bias toward negatively valenced portrayals. In contrast, groups such as Asian, Youth, and American are more frequently linked with comedy or neutrality, suggesting relatively less stereotypical or emotionally charged representations.
Moreover, some models demonstrate particularly polarized patterns. For instance, DeepSeek-R1 and DeepSeekV3 show overwhelmingly tragic framings across almost all groups, while LLaMa-2-70B-Chat produces a predominance of neutral responses, especially for marginalized identities.
# 5 In-Depth Analysis of MIST
# 5.1 Consistently Observed: Pervasive Positive Bias in Sociability
One of the most prominent and consistent findings across the 8 LLMs evaluated in the WABT task is the widespread presence of positive bias in the Sociability dimension. With the exception of Claude-3.7-sonnet, whose bias score mean is close to 0, the majority of models exhibit a statistically significant positive Sociability bias (mean $> 0$ , $\mathrm { \tt ~ p } < . 0 0 1 \$ ). Notably,
Gemini-2.5-pro demonstrates the highest average bias score in Sociability (0.367) among all models, accompanied by a relatively low standard deviation (0.792), indicating a consistent tendency to attribute higher Sociability traits to a wide range of social groups.
Figure 3 provides further visual confirmation of this pattern. For models such as GPT-4-turbo, Gemini-2.5-pro, and LLaMa-3-70B-Instruct, the green lines representing the Sociability dimension extend outward across a broad spectrum of groups, including “Asian”, “African”, “Arab”, “Male”, and “Disabled”. This cross-model and cross-group consistency suggests a systematic inclination in the model behavior to portray entities as less sociable or friendly.
Such a tendency may stem from inherent biases in the training data, such as a preference for positive interpersonal interactions or idealized personality traits. Alternatively, it may reflect an inductive prior embedded in the model’s design, aimed at generating responses perceived as helpful, cooperative, or socially appropriate.
# 5.2 Multidimensional Complexity: Divergent Bias Patterns Across Dimensions
Bias in LLMs often exhibits considerable variation in direction, magnitude, and statistical significance across different stereotype dimensions, sometimes revealing independent or even opposing patterns. For example, on the Sociability dimension, DeepSeek-R1 demonstrates a strong and statistically significant positive bias, with an average bias score of 0.320 $\mathrm { \Phi ( p < . 0 0 1 ) }$ , indicating a consistent tendency to attribute higher Sociability traits to advantaged social targets. In contrast, the model exhibits a pronounced and significant negative bias on the Competence dimension $( \mathrm { m e a n } = - 0 . 2 4 6$ , p $< . 0 0 1 \dot { }$ ), suggesting a tendency to overestimate Competence in certain groups. Meanwhile, on the Morality dimension, the model yields an average score of 0.085 with no statistical significance ${ \mathrm { ( p = 0 } } . 2 9 3 { \mathrm { ) } }$ , reflecting no consistent bias in that dimension. This striking contrast demonstrates that bias is not a monolithic structure, but rather a phenomenon that is independently manifested across different dimensions. Different attributes may exhibit independent, or even opposing, patterns of association.
Further evidence of this complexity can be found in other models. GPT-4o exhibits a significant positive bias in Sociability ( $\mathrm { n e a n } = 0 . 2 7 5$ , $\mathsf { p } < . 0 0 1 \mathrm { \rangle }$ ) and also in Morality (mean $= 0 . 1 7 6$ , $\mathsf { p } < . 0 0 1 \$ ), while its bias in Competence (mean $\ c =$ –0.067) is statistically non-significant $( \mathtt { p } = 0 . 1 5 6 )$ , indicating a relatively neutral stance in that dimension.
Figure 4: The stacked bar charts show the distribution of emotional framings: Comedy, Tragedy, and Neutrality across different social group for each LLM. Each bar represents the proportion of responses in each emotional category.
These cases highlight the heterogeneity of bias across models and dimensions, reinforcing the notion that bias in LLMs is inherently multi-faceted and dimension-specific, rather than uniformly expressed or aligned in a single direction.
# 5.3 Emergent Neutral Responses: Unexpected Patterns in Affective Attribution
In the AAT task design, the models are required to make a binary attribution decision between “Comedy” and “Tragedy”. However, in practice, several models spontaneously generate a proportion of outputs labeled as “Neutrality”, which are not pre-specified in the response options. This phenomenon indicates that some models exhibit attributional avoidance or uncertainty under certain social contexts.
As shown in Table 5, models produce near-zero proportions of “Neutrality” such as Claude-3.7-sonnet, LLaMa-3- 70B-Instruct, and LLaMa-2-70B-Chat. Notably, LLaMa-2- 70B-Chat yield the highest rate of neutral attributions, with $6 0 . 7 \%$ for $S _ { a }$ groups and $6 0 . 1 \%$ for $S _ { b }$ groups, making it the most concentrated model in terms of neutral responses. In contrast, both Claude-3.7-sonnet and LLaMa-3-70B-Instruct maintain relatively lower neutral rates (ranging from $4 \%$ to $17 \%$ across both $S _ { a }$ and $S _ { b }$ groups), but still demonstrated certain stable neutral tendencies under specific subgroup.
This neutral output phenomenon may reflect 2 potential mechanisms: (1) when models face certain sensitive social group identifiers, internal stereotype conflicts may lead to indecisive attribution behaviors, manifesting as ambiguous or uncertain attributional avoidance; (2) alternatively, some models may have been influenced by safety-oriented alignment optimization during training, causing them to proactively avoid emotionally sensitive outputs and instead favor neutralized responses as part of a “safety regulation avoidance mechanism”.
# 5.4 Asymmetry: Divergent Patterns of Implicit Bias
We first observe that across all evaluated language models, there is no simultaneous elevation in both FAR and UAR.
This fundamental lack of simultaneous elevation demonstrates that the models’ affective attribution bias does not conform to a perfectly dual-peak distribution.
Further analysis reveals that multiple models exhibit relatively high bias levels on either the FAR or the UAR metric individually, rather than simultaneously showing high values on both metrics. For example, Claude-3.7-sonnet and LLaMa-3- 70B-Instruct show higher scores on the FAR metric, reaching $7 7 . 7 \%$ and $56 . 7 \%$ respectively, indicating a stronger tendency to assign favorable attributions to advantaged groups, primarily driven by the amplification of positive affective associations. In contrast, the DeepSeek-R1 model achieves $78 . 8 \%$ on the UAR metric. This high score reflects a stronger tendency to assign unfavorable attributions to disadvantaged groups, predominantly through the amplification of negative affective associations.
This finding indicates that although all models exhibit group-level attribution bias at the global level, the underlying mechanisms through which such biases are expressed are not entirely consistent across models. Instead, models demonstrate divergent patterns in both the direction and magnitude of attribution. Some models predominantly exhibit “positive amplification for advantaged groups,” whereas others display “negative amplification for disadvantaged groups.” | Theory of Mind (ToM) in Large Language Models (LLMs) refers to their capacity
for reasoning about mental states, yet failures in this capacity often manifest
as systematic implicit bias. Evaluating this bias is challenging, as
conventional direct-query methods are susceptible to social desirability
effects and fail to capture its subtle, multi-dimensional nature. To this end,
we propose an evaluation framework that leverages the Stereotype Content Model
(SCM) to reconceptualize bias as a multi-dimensional failure in ToM across
Competence, Sociability, and Morality. The framework introduces two indirect
tasks: the Word Association Bias Test (WABT) to assess implicit lexical
associations and the Affective Attribution Test (AAT) to measure covert
affective leanings, both designed to probe latent stereotypes without
triggering model avoidance. Extensive experiments on 8 State-of-the-Art LLMs
demonstrate our framework's capacity to reveal complex bias structures,
including pervasive sociability bias, multi-dimensional divergence, and
asymmetric stereotype amplification, thereby providing a more robust
methodology for identifying the structural nature of implicit bias. | [
"cs.CL"
] |
# 1 Introduction
Automated end-to-end (E2E) web tests created with tools such as Selenium are renowned for being fragile as the the web application under test evolves [14, 20]. Researchers have singled out web element locators as the main cause of fragility [10, 34]. Locators are commands used by test automation tools to identify elements on a web page, hanging on specific properties found in the Document Object Model (DOM), such as the element’s identifier, XPath, or text.
Locator breakages are mainly caused by code changes in the web application. Given the short release cycles and advancements in modern websites, such breakages occur frequently. This poses a significant challenge for automated testing, as it requires manual fixing of tests before they can be executed again. This process is time-consuming and frustrating for testers. As a result, test suites are often abandoned [4], as the effort outweighs the benefits.
To mitigate these issues and minimize the number of fragile tests, researchers have proposed automated algorithms that produce robust locators [12, 19, 20, 21, 23, 24, 25]. The current state-of-the-art approach for locating a web element corresponding to a specific locator is the Similo algorithm [24]. This algorithm calculates a similarity score based on multiple properties of web elements to reidentify the target element among a set of candidates, which consist of all elements on an updated version of the website. The algorithm selects the candidate with the highest similarity score as the new target element. The original algorithm has been extended in two follow-up works, namely VON Similo [23] and LLM VON Similo [25]. The former extends the original Similo algorithm by comparing groups of visually overlapping elements on a website instead of singular elements. The latter leverages a large language model to improve the algorithm’s accuracy.
We chose to focus on the Similo algorithm [24] and its extensions because recent systematic literature reviews on web application testing [2] identify Similo as the current state-of-the-art approach for repairing broken web test cases. Similo has shown superior performance compared to baseline methods such as Robula $+$ , Vista, and WATER. Additionally, locator-based techniques (e.g., Selenium) have been reported to outperform purely visual techniques in terms of robustness and reliability [32]. The Similo algorithm is conceptually similar to other locator-identification algorithms like COLOR [12], which have demonstrated viability. Moreover, VON Similo and LLM VON Similo, both extensions of Similo, offer novel approaches (e.g., visually overlapping nodes, integration of language models) which motivated us to replicate and further investigate their effectiveness. During our initial review of Similo and its extensions, we observed several limitations and threats to validity in the original evaluations. Initial theories on how to address these limitations led us to extensively re-assess and extend the original results in order to ensure their robustness and applicability.
First, the original Similo algorithm relies on a fixed set of web element properties and weights to web element re-localization. Second, the benchmark used to evaluate Similo contains web elements from versions 12 to 60 months apart. This range does not accurately reflect the actual update frequency of websites, nor it aligns with continuous integration environments, where updates tend to be smaller, more regular, and tests are conducted frequently. Third, we found a significant revision of the evaluation benchmark between the original study and its subsequent extensions, rather than also assessing the extensions using the initial benchmark for a consistent comparison.
Motivated by the will to understand the causes of these discrepancies and to address the aforementioned challenges, in this paper, we replicate the Similo [24] and VON Similo [23] studies, improving the experimental setting of the original papers to address the identified limitations and threats to the validity. More in detail: (1) we improve Similo by optimizing the attributes and weights of the original algorithm. We evaluated six new similarity functions to calculate the similarity between web element properties and optimize the weights assigned to these similarities using a genetic algorithm. We also analyze the capabilities of a novel hybrid version that combines the capabilities of Similo and VON Similo. (2) We collected a benchmark dataset of more than 10,000 element pairs from multiple websites and versions over the past five years. Our benchmark is $1 2 \times$ bigger than the original Similo benchmark and $2 3 \times$ bigger than the VON Similo benchmark, and it is more reflective of the fine-grained modifications occurring in real web sites. (3) We perform a fairer comparison between the original Similo and its extensions by evaluating all algorithms on the same benchmarks, both using the original as well as our new extended benchmark. While we were able to replicate the original study results, our findings are in contrast with ones by Nass et al. [23] as we show that VON Similo underperforms Similo in directly identifying the target element but excels in identifying the visual overlap of the target element.
Our paper makes the following contributions:
– Replication. A replication study of the results of the Similo and VON Similo algorithms, including a comparison with the experimental setup and benchmarks used in the Similo and LLM VON Similo algorithms. Ours is the first attempt at evaluating all Similo and VON Similo algorithms on the same benchmark under analogous conditions.
– Extended Benchmark and Metrics. We extended the original benchmark of 804 element pairs to 10,376 element pairs and introduced six additional metrics.
– Library. A library for the Selenium framework, which is publicly available [27]. Our tool wraps an existing locator and uses our extended Similo algorithm to locate an element if the original locator fails.
# 2 Motivating Example
In this section, we describe the problems occurring to E2E web tests during web app evolution. We use as a running example the home page of Zoom.us, a popular online chat service used for video communications, messaging, voice calls, conference rooms for video meetings, and virtual events. Figure 1 (top) shows the website on September 2022, whereas Figure 1 (bottom) shows the update website on January 2023. In the short timespan of four months, the website has undergone a significant redesign, which has affected the locators of the elements and likely broke possible test cases. In addition to stylistic changes, some buttons and links were relocated in the GUI (e.g., the Host drop-down list). In the rest of this section, we describe the variety of breakage scenarios that can occur in web tests, which robust relocalization techniques should aim to handle [34].
Fig. 1: Zoom.us homepage on September 2022 (top) and January 2023 (bottom).
2.1 Element Not Found
When the provided locator fails to identify an element on an updated website, the test case will break [10, 34]. This requires a developer to manually locate the element on the updated website and modify the locator in the test case.
Automated techniques such as Similo aim to correctly identify the element on the updated version of the website. The algorithm requires a working reference web app, typically an old version of the website in which the web test used to function, to gather a set of properties. These properties are then used to try find the element on the new version of the website. For instance, the “Request a Demo” button has the same tag and non-capitalized text, and a similar shape, location, and neighboring text. All such information are used by Similo in correctly identifying the link on the updated website. By comparing the properties of the old element with the properties of all elements on the new website, Similo returns the most similar element, which likely identifies the original web element.
# 2.2 False Positive
Another common reason for test breakages occurs when the locator returns another existing element instead of the intended target [34]. Repairing these breakages is particularly challenging because the developer has to manually trace where the test deviates from its intended path. For instance, the button initiating a direct call to $1 . 8 8 8 . 7 9 9 . 9 6 6 6$ in Figure 1 (top) can be identified by the CSS Selector #black-topbar $> \mathrm { \ d i v ~ > \ u l ~ > ~ \ l i : n t h { - c h i l d } } ( 2 ) ~ > ~ \mathbf { a }$ [22]. The same locator will return the “Support” button in Figure 1 (bottom), even though they serve different purposes. A test case will not break immediately, as the locator returns an existing element, but later on after the test case execution has deviated from the intended path. Similo can help identify these false positives by validating if another element on the updated website.
In case of correct detection, Similo could actually support the automated repair of the broken locators. While Similo does not store the information required to locate the element in the source code, as the amount of data collected would clutter the test case, it could in practice use some form of caching mechanism to store the information from the old version of the website. By saving an identifiers for each web element in cache, Similo could update the information associated with such web elements on new evolved DOM versions.
# 2.3 Misclassifications
The internal functioning of Similo may cause the algorithm to misclassify elements. One reason is due to the target element changing its tag. As the tag property is highly weighted in Similo calculations, other elements in the close proximity of the target which have the same tag as the target element can be misclassified. For example, the “Join” button in Figure 1 (top) has the tag a, and the tag button in Figure 1 (bottom). Because the “Sign Up Free” button in Figure 1 (bottom) has the same tag as the target element, as well as similar shape, location, neighboring text and XPath, Similo misclassifies the “Sign Up Free” button as the “Join” button. This is especially problematic as Similo always returns an element. If the element is not the intended target, the test case will break at an arbitrary point in the test execution, making it harder for developers to identify the root cause of the breakage.
# 3 Approaches
This paper is a replication and extension of the work by Nass et al. [24] presented in the ACM Transactions on Software Engineering and Methodology (vol. 32, no. 3) in 2023.
The algorithm is based on the idea that when a web element is modified, some properties are altered while others remain the same or undergo minor changes. Thus, the main working assumption is that by calculating a similarity score between two elements, the algorithm can identify the element with the highest similarity score as the target element. Two successors to the basic Similo algorithm have been developed: VON Similo [23] and LLM VON Similo [25]. The first successor, VON Similo, takes advantage of the fact that visual entities on a website often consist of multiple concrete web elements, which can help to identify the target element. The second successor (LLM Von Similo) utilizes a Large Language Model (LLM) to locate the correct element from a pre-selected set of candidates.
For all Similo variants, the algorithm starts with an element that can be reliably identified on an old, baseline version of a specific website. This element is referred to as the target. The algorithm extracts all necessary information about this element. Then, the algorithm tries to find the target element among all candidates elements on a new version of the same website, where the locator used to identify the target element on the baseline version no longer works. The algorithm calculates a similarity score between the target element and all candidates. The candidate with the highest similarity score is then returned as the target element. It is important to note that Similo is not a locator algorithm. It does not generate a robust locator for the target element, but rather identifies the target element among all candidates. A robust locator for the target element must be generated using other methods, such as Robula $+$ [20], COLOR [12], or the MultiLocator [19].
In the following sections, we describe each algorithms in a greater level of detail, using the following nomenclature. $E$ will always refer to an arbitrary web element. $T$ will be used for target elements, the original version of an element that needs to be found on an updated website, and $C$ for a possible candidate for the target on the updated website. The candidate on the new version of a website that corresponds to the target element will be noted as $C ^ { \prime }$ . If multiple elements are present, they will be referred to as $E _ { 1 } , . . , E _ { n }$ . When working with properties, $E _ { n } . a _ { m }$ refers to the $m$ -th property of the $n$ -th element. A visual overlap for the element $E$ is noted as $O ^ { E }$ .
# 3.1 Similo
Similo requires a working version of the web application where the web element can be identified. Given the original web element $T$ and its properties $T . a _ { 1 } . . . , T . a _ { n }$ , as well as all new web elements $C _ { 1 } , . . . , C _ { n }$ , of which one should be the original one $C ^ { \prime }$ , Similo computes a similarity score between $T$ and each $C _ { 1 } , . . . , C _ { n }$ individually:
$$
\mathrm { S i m i l o } ( T , C ) = \sum _ { i \in \# \mathrm { p r o p e r t i e s } } \mathrm { s i m i l a r i t y } ( C . a _ { i } , T . a _ { i } ) \cdot c _ { i }
$$
Here, $c _ { i }$ is a property-specific weight based on the COLOR study [12], determining how effective, stable, and unique specific properties are. Stable properties, such as tag and name, are assigned a weight of 1.5, while non-stable properties receive a weight of 0.5. The similarity function calculates a score ranging from $[ 0 , 1 ]$ on how similar both values are using a predetermined algorithm. This score is used to generate a ranking of all web elements based on their similarity with the original element, enabling developers to choose the element with the highest similarity score or retain all elements above a certain threshold.
The specific properties used in the Similo algorithm can be found in Table 1. These properties are selected based on the locator types supported by the Selenium WebDriver API (id, name, class, tag, link text, partial link text, XPath, and CSS) for native element location. The authors also consider the locators chosen by Selenium idE (a tool for recording and replaying user interactions), including id, link text, name, and various XPaths. Additional properties are chosen based on the results of the COLOR study (id, class, name, value, type, tag name, alt, src, href, size, onclick, height, width, XPath, X-axis, Y-axis, link text, label, and image) and the WATER study [3] (id, XPath, class, link text, name, tag, coord, clickable, visible, z-index, hash). The authors selected all DOM-based properties from this list and excluded properties extracted from the visual user interface. Properties with slight differences between versions, such as class, links, and XPaths, are compared using the Levenshtein distance. On the other hand, properties that tend to change completely, such as tag and id, are compared using equality. Integer-based properties are compared using the Euclidean distance.
# 3.2 VON Similo
In this paper we also replicate and extend VON Similo, an extension of the basic Similo algorithm introduced by Nass et al. [23] and presented in the IEEE Conference on Software Testing, Verification and Validation (ICST) in 2023 [23].
Table 1: Properties, similarity functions, and weights used by Similo [24].
The idea behind VON Similo is that web elements often consist of multiple parts. The DOM orders its elements so that a node’s children are inside the area the parent occupies on the visually rendered website. They appear as one visual unit to the user and can be interacted with as one unit.
For example, a button might contain a button tag, an icon, and text. It does not matter which exact part the user clicks; the button will be triggered. When two elements have a considerable overlap, meaning they share a large part of their occupied area, they are likely to be part of the same visual unit. This unit is called visually overlapping nodes, or VON.
When a web element is moved to a different location on the website, the nodes in its visual overlap are often moved as well. The combination of elements moved together provides a more unique fingerprint than a single web element would. The situation is similar when a node is changed; its visual overlap might stay the same, making it easier to identify the modified element by the combination of elements in its visual overlap. For example, the button in our example moves to a different location on the website, but the icon and text stay the same. Alternatively, the text inside the button changes, but the button and icon stay the same.
The improved algorithm in VON Similo leverages this heuristic to identify elements more reliably. When a target element needs to be found on a new website version, the algorithm first identifies the visual overlap of the target element on the baseline version. Then, it iterates over all candidates on the updated version and calculates their respective visual overlaps. In the last step, a score is calculated between the target visual overlap of nodes and all potential candidate visual overlaps. The score is calculated between two elements, but includes all the property values of the visually overlapping nodes. For each property all possible combinations of values are compared and the maximum similarity is weighted and added to the total score. Similar to Similo, the candidate with the highest score is returned as the target element in the new version.
To calculate the score, we need to define the visual overlap of a node. The VON Similo paper defines that two web elements $E _ { 1 }$ and $E _ { 2 }$ are visually overlapping if the following two conditions are met:
1. The areas $R _ { 1 }$ and $R _ { 2 }$ , which their respective rectangles occupy on the screen in pixels, intersect to a certain degree. In other words if $\frac { R _ { 1 } \cap R _ { 2 } } { R _ { 1 } \cup R _ { 2 } }$ is above a certain threshold, which should be chosen in a way that balances:
(a) accidentally grouping elements that do not belong together, when the threshold is chosen to lose, and there is no significant overlap, and
(b) not recognizing two visually overlapping nodes as such by choosing the threshold too high.
The authors propose a threshold of 0.85.
. The center of $E _ { 2 } ^ { \prime }$ is located inside of $R _ { 1 }$ . (The paper describes this differently - “The center of the web element $W _ { 1 }$ (here $E _ { 1 }$ ) is contained in the rectangle $R _ { 1 }$ ” - but the underlying code shows that $E _ { 2 }$ is compared with $R _ { 1 }$ ).
In addition the LLM VON Similo paper introduces another metric to determine if $E _ { 1 }$ and $E _ { 2 }$ are visually overlapping based of the nodes visual text and XPath, which is used in addition to the algorithm used in VON Similo. We will called this approach textual overlap, and it is defined as:
1. The visible text of both elements is not null and their content is case-sensitive equal.
2. The absolute XPath of $E _ { 1 }$ is a prefix of the absolute XPath of $E _ { 2 }$ , e.g, $E _ { 2 } ^ { \prime }$ is a child of $E _ { 2 }$ in the DOM.
After calculating all overlapping elements $E _ { 1 } , . . . , E _ { n }$ for one element $E$ , we replace the properties $E . a _ { 1 } , . . . , E . a _ { m }$ of that element with lists of property values of the overlapping nodes, meaning the properties of $E$ would look like this: $[ E _ { 1 } . a _ { 1 } , . . . , E _ { n } . a _ { 1 } ] , . . . , [ E _ { 1 } . a _ { m } , . . . , E _ { n } . a _ { m } ]$ .
To compare two elements $T$ and $C$ , whose properties have been replaced with their respective lists, we modify the Similo calculation by choosing the pair from both property lists with the highest similarity. We then take the sum of those maximized values:
$$
\operatorname { V O N S i m i l o } ( T , C ) = \sum _ { i \in \# \operatorname { p r o p e r t i e s } } \Big ( \operatorname* { m a x } _ { t \in T . a _ { i } , c \in C \cdot a _ { i } } \operatorname { s i m i l a r i t y } ( t , c ) \Big ) \cdot c _ { i }
$$
# 3.3 LLM VON Similo
LLM VON Similo is the latest iteration of the Similo algorithm [25]. The algorithm first uses VON Similo to rank all elements on the website by their score. It then takes the top 10 visual overlaps, as the target element is most likely among them. The algorithm then utilizes GPT-4 [7] to find the target element among the pre-selected candidates. The algorithm converts the ten elements and the target element into JSON format. It then sends a request to the large language model with the target and the ten candidates, asking it to identify the target’s match. The LLM responds with a single number, indicating the target’s position in the list of candidate elements.
Since LLM VON Similo relies on proprietary OpenAI’s APIs and uses an unspecified version of GPT-4 with unknown temperature and configuration settings, its results cannot be reliably reproduced in a controlled experimental environment. Therefore, we do not attempt a full replication of this method in this paper. Rather, in our replication, we only consider the benchmark used in LLM VON Similo and the VON Similo implementation and test the other algorithms against it.
# 3.4 Limitations and Threats to Validity
In this section we describe the limitations we identified about the the Similo algorithms, as well as the threats to the validity of the original studies that we aim to address in our replication work.
Threat T1 Changed benchmarks. The benchmarks and underlying metrics vary across all three versions of Similo, making it challenging to accurately compare and evaluate each algorithm. We could identify three major inconsistencies between benchmarks, used metrics and evaluation strategies:
Changed data. The websites and elements used for the evaluation in the different benchmarks differs between the papers. The Similo and LLM VON Similo have approximately the same data (48 websites, ${ \approx } 8 0 0$ element pairs) with minor differences. The VON Similo paper only uses 36 websites and around 400 element pairs.
Changed metric. The Similo and LLM VON Similo benchmarks use a setup where the Similo algorithm is used to rank all elements on an updated version of a website by their similarity with the target, choosing the highest ranking one and validating whether it is the actual target. We used the same setup for all subsequent benchmarks. The VON Similo paper, on the other hand, calculates the similarity score between the target on the new and old version, declaring it a match when if the score reaches a certain threshold. We believe that this setup is not reflecting how Similo will perform in a web testing scenario. Additionally, it does not allow us to properly compare Similo with VON or LLM VON Similo as they have both never executed on the benchmark.
Changed similarity functions. The used similarity functions change slightly between the different iterations of Similo. For example, the original Similo algorithm uses Levenshtein similarity on the raw text, while the others apply it to lower cased strings.
Threat T2 Coarse granularity of version snapshots. The benchmark used for all Similo papers utilizes website versions that were collected between 12 and 60 months apart. This scenario does not reflect actual web testing practices, where tests are run in a regular schedule, e.g., nightly or over the weekend. The changes made over a 12-60 month period are rarely made between two consecutive test runs of a test [9]. Hence, Similo can use the smaller update steps to repair itself by updating the saved value for a certain element. Utilizing the current benchmarks presents an unrealistic perspective on Similo’s effectiveness in a more realistic testing time frame, thereby underestimating its true capabilities.
Threat T3 · Fixed set of properties and weights. The properties, similarity functions and weights used in the Similo algorithms were taken from the COLOR study [12] or given without empirical evidence that these values are suitable to be used in such an algorithm. Furthermore, they are reused for the VON Similo algorithm which works differently than the Similo algorithm. This reuse was implemented without assessing whether alternative properties, similarity functions, and weights might better suit the VON Similo’s distinct computational approach, which emphasizes visual overlap.
Threat T4 $\mathbf { \nabla } \cdot \mathbf { \varepsilon }$ Limited accuracy with VON Similo. Algorithms that use visual overlap can identify only the group of elements that overlap visually, not the specific target within that group. For certain test actions, such as clicking a button or link, this does not represent an issue because the website interprets any click within the overlapping area as a click on the element itself. However, for other actions like entering text into an input field, a text area, or verifying specific properties of a particular element, merely interacting with any element in the overlapping area is insufficient. Instead, identification of the exact element is necessary.
The underlying problem is that given $T$ and $C ^ { \prime }$ as well as elements $E _ { 1 . . n } ^ { T ^ { \prime } }$ and $E _ { 1 . . m } ^ { C ^ { \prime } }$ which form their respective visual overlaps $O ^ { T }$ and ${ \boldsymbol { O } } ^ { C ^ { \prime } }$ , then this visual overlap is the same for all elements in the overlap, e.g. ∀C ∈ OC′ : $\forall C \in O ^ { C ^ { \prime } } : \mathsf { o v e r l a p } ( C ) \equiv$ $O ^ { C ^ { \prime } }$ . When Similo compares $T$ , with every $E _ { 1 . . m } ^ { C ^ { \prime } }$ , it will always compare their visual overlaps: $O ^ { T }$ and $O ^ { C ^ { \prime } }$ and calculate the same score each time. Because the algorithm chooses the element with the highest score, it selects a random element from the visual overlap $O ^ { C ^ { \prime } }$ .
# 3.5 Implementation
To be able to use the Similo algorithm in a practical setting, we implemented a library for Java, compatible with the Selenium WebDriver framework. The library wraps standard Selenium locators and automatically uses the Similo algorithm to identify the correct web element when the original locator fails. Upon first usage or locator changes, the library captures relevant element properties and saves them in an SQL database to do accurate matching in subsequent test executions. The wrapper supports all Selenium’s locator strategies (e.g., XPath, CSS selectors, ID), minimizing integration effort with existing test suites.
The implementation features built-in self-repair capabilities and intelligently updates locators within the database when elements change. Doing so, it enhances test resilience without modifying test code directly but remembering the connection between the locator and its current state of properties. Moreover, the library monitors Similo scores and issues configurable warnings when low score matches occur, guiding developers toward potentially broken locators. The library is available on GitHub and has comprehensive documentation [27].
# 4 Empirical Study
# 4.1 Research Questions
We consider the following research questions:
RQ0 (replication): Can we replicate the experimental results yielded by state
of-the-art tools targeting robust locator generation?
$\mathbf { R Q _ { 1 } }$ (comparison): How do Similo, VON Similo, and LLM VON Similo compare with each other on the same benchmark?
$\mathbf { R Q _ { 2 } }$ (improvements): How do effectiveness vary when considering different metrics in Similo and VON Similo?
$\mathbf { R Q _ { 3 } }$ (hybrid): How do effectiveness vary when combining Similo and VON Similo?
In the first research question $\mathrm { ( R Q _ { 0 } ) }$ ) we aim to confirm the reliability of existing robust locator generation approaches Similo and VON Similo by reproducing their experimental results against their original data sets. The second research questions $( \mathrm { R Q } _ { 1 } )$ addresses T1 by performing a comparison by executing the selected algorithms on four benchmarks, three taken from the original papers, and a new one substantially extended in this work (hence addressing T2). The third research question $\mathrm { ( R Q _ { 2 } ) }$ evaluates a large set of configurations of the original algorithms, varying the metrics being used to optimize the accuracy of the algorithm on different metrics. This question aims to address the problems discussed in T3. The last research question (RQ3) evaluates HybridSimilo, a novel hybrid approach in which we combine Similo and VON Similo. This question aims to address the problems discussed in $\mathbf { T 4 }$ .
# 4.2 Benchmarks
To mitigate the problems associated with T1, in this paper we benchmark all algorithms on all available benchmark available in the original Similo, VON Similo, and LLM VON Similo papers. Particularly, all benchmarks use the Web Archive [35] to collect the element pairs and their property values. We also mitigate T2 by providing an extended benchmark constructed using the websites as the original benchmarks and the same selection criteria for web elements, but sampling a higher number of web elements and versions over time. All the locators used in the original benchmarks were taken from the corresponding replications packages. The used websites were reloaded from the same WayBack Archive links.
# 4.2.1 Similo’s Benchmark
The benchmark used to evaluate the original Similo algorithm consists of 809 web element pairs $T$ and $C ^ { \prime }$ , from 48 websites, each having 12 to 60 months between versions.
# 4.2.2 VON Similo’s Benchmark
For the VON Similo benchmark 442 web elements from 33 websites were used, each having 12 to 60 months between version, similar to the Similo benchmark. Additionally all the elements from the visual overlaps of the targets were added, resulting in 1,163 element pairs. Finally for every matching element pair, one randomly selected non matching pair was added. The non matching pairs were selected to measure the false positive and false negative rate of the algorithm. The 442 base web elements of the VON Similo benchmark are a subset of the 809 web elements of the Similo benchmark. The paper does not provide any information why a smaller benchmark was used for VON Similo.
# 4.2.3 LLM VON Similo’s Benchmark
The LLM VON Similo benchmark utilizes the same website versions as Similo and shares 804 element pairs. The other five elements which were part of the original benchmark could not be located due to changed rendering when reloading the websites from the WayBack Archive.
# 4.2.4 Extended Benchmark
We collected web elements from the homepages of 30 popular web applications, considering the 48 websites used in original papers and combining it additional websites from a website ranking from 2023 [33]. We had to discard 18 websites that contained broken snapshots or did not render properly. Five, only partially broken websites were used for later cross-validation of our training approach (further details are available in our replication package).
To better reflect to short time spans between test executions and to improve comparability, we chose a fixed time span between two version of four months and selected 16 version from September 2018 to September 2023 for each web application. We then tracked elements across those versions, resulting in a total of 933 inital elements from these websites and 10,376 element pairs of the entire time span. This process took three months of manual work for mapping each element across all versions. We also classified the element pairs in categories, determined if any of the basic unqiue locators (ID, XPath, or ID-XPath) changed or if it was still locatable using the one of these. This information is used to later benchmark the performance of the algorithms on locator pairs with broken unique locators. The total distribution of elements across the websites is shown in the appendix in Table 4.
# 4.3 Algorithms Extensions
To mitigate the problems associated with T3 and T4, we devised three extensions of the original Similo and VON Similo algorithms, introducing a hybrid method that leverages both.
# 4.3.1 Similo++
The first improvement to Similo is called Similo $^ { + + }$ . In brief, we improve the properties compared by the algorithm, the comparison algorithms used to compute the similarity between two properties and the weights which are multiplied with the similarity.
Compared Properties. Along with the properties used in the original Similo algorithm, we analyzed the frequency (how many element pairs have that property) and stability (in how many cases has the property the same value on the updated version). We found two additional suitable attributes: type (frequency of $1 2 \%$ and stability of $9 6 \%$ ) and aria-label (frequency of $1 6 \%$ and stability of $8 2 \%$ ). Additional we compare all attributes for that element as a key-value map.
Similarity Functions. The original paper utilizes simple (lower case) equality, Levensthein distance and word comparison to compare string properties, Euclidian distance for integer fields like area and shape as well as 2D-Distance for the coordinates. We extended these comparison algorithms by consider additional distance metrics. For string based properties, we considered the following additional comparison algorithms:
Jaccard. The Jaccard distance is originally a measure to compare two sets and is defined as the size of the intersection divided by the size of the union of the input sets $A$ and $B$ , |A∩B| . To use the Jaccard distance to compare strings, we first split the strings into sets of characters and then use the Jaccard distance to compare the sets. The Jaccard distance between “kitten” and “sitting” is $\frac { 3 } { 7 }$ , because the intersection of the sets is $\{ { } ^ \ast \cdot \} , \ { } ^ \ast \cdot \} , \ { } ^ { \ast } \mathrm { n } ^ { \ast } \}$ and the union is $\{ { \bf \ddot { \Phi } } { \bf k } ^ { \prime \prime } , \ { \bf \Phi } ^ { \ast } { \bf \Phi } \}$ , “t”, “e”, “n”, “s” $\}$ .
Jaro Winkler. The Jaro-Winkler distance is a string metric measuring the edit distance between two sequences. The Jaro-Winkler distance is given by a modification of the Jaro distance formula, where more weight is given to strings that match from the beginning. The Jaro distance is given by $\begin{array} { r } { d = 1 - \frac { 1 } { 3 } ( \frac { m } { | s _ { 1 } | } + } \end{array}$ $\frac { m } { \vert s _ { 2 } \vert } + \frac { m - t } { m _ { _ { \alpha } } } )$ , where $m$ is the number of matching characters and $t$ is half the number of transpositions. A transposition is a pair of matching characters in the wrong order in one of the strings. The Jaro-Winkler similarity is then calculated as Jaro similarity $+ l \times p \times ( 1 - \mathrm { J a r o ~ s i m i l a r i t y } )$ , where $l$ is the length of the common prefix at the start of the string up to a maximum of four characters, and $p$ is a constant scaling factor, often 0.1. For example, the Jaro-Winkler similarity between “kitten” and “sitting” is approximately 0.74, assuming the Jaro distance is 0.77, and there is no common prefix.
Set similarity. The set similarity is similar to the Jaccard similarity. The strings are split into sets of strings at spaces and newlines. The final result is the Jaccard distance between the two sets. For example, the similarity between “Sign up” and “Sign in” would result in $\textstyle { \frac { 1 } { 3 } }$ , because the intersection is $\{ { \stackrel { \ast } { \mathrm { S i g n } } } ^ { , \dag } \}$ and the union is $\{$ “Sign”, “up”, “in” $\}$ . Capitalization is ignored.
To compare properties which consist of key value pairs, i.e. the elements attributes, we considered the following two algorithms:
Intersect Value Compare. It is calculated as
$$
\frac { | \{ ( k , v ) | ( k , v ) \in A \cap B \} | } { \operatorname* { m a x } ( | A | , | B | ) }
$$
where $A . k$ is the value of the key $k$ in the map $A$ .
Intersect Key Compare. It is calculated as
$$
{ \frac { | \{ k | k \in A \cap B \} | } { | \{ k | k \in A \cup B \} | } }
$$
To compare the distance between two elements we introduced two new simiarity algorithms:
Manhattan Distance. The distance between two points in a grid based on a strictly horizontal and/or vertical path. The Manhattan distance between the points $( x _ { 1 } , y _ { 1 } )$ and $( x _ { 2 } , y _ { 2 } )$ is $\left| x _ { 1 } - x _ { 2 } \right| + \left| y _ { 1 } - y _ { 2 } \right|$ . The result is normalized to $[ 0 , 1 ]$ by dividing it by a predefined maximum distance.
Exponential decay. uses the Euclidean distance but adds exponential decay, with differing $\lambda$ values calculated as $e ^ { - \lambda d }$ , where $d$ is the Euclidean distance between the points. The benefit is that the function is already normalized to [0, 1] and approaches 0, eliminating the need to define a maximum distance. We used different values for $\lambda$ , evaluating three exponential decay similarity functions, namely $\lambda = 0 . 0 0 1$ as a small decay, $\lambda = 0 . 0 0 5$ as a medium, and $\lambda = 0 . 0 1$ as a large decay.
Finally to compare the shape of the two elements we used:
Area. The area of an element is the product of the width and height of the minimal rectangle containing the visual element.
Perimeter. The perimeter of an element is the sum of the length of all sides of the minimal rectangle containing the visual element.
Aspect Ratio. The aspect ratio of an element is the ratio of the width to the height of the minimal rectangle containing the visual element. To compare the respective values of two elements, we divide the smaller value by the larger value. For example, comparing an element with an area of 100px and an element with an area of 200px would result in $\textstyle { \frac { 1 } { 2 } }$ because the smaller element has half the area of the larger element.
Improved Weights. We use a genetic algorithm to optimize the weights of the Similo algorithm and evalute whether a global combinations of weights can be found. We applied the genetic algorithm as follows. We begin by fine-tuning the similarity functions for each property, starting from an initial baseline of weights and functions. For each property, we evaluate every possible similarity function, choosing the most effective one. This method is systematically applied to each property in sequence, optimizing them one at a time. The selection of property similarity functions might change based on the order in which the properties are optimized. To ensure a robust outcome, we randomly select a property for comparison and conduct multiple rounds of optimization. In cases where several similarity functions perform well for a property, we employ a brute force approach to determine the best combination from this narrowed selection of functions. We define a similarity function for each attribute and then apply genetic optimization to determine the optimal set of weights. We use fixed step values with 0.05 step size in [0, 3] as possible values for weights. Essentially, the algorithm assigns a weight of zero to any attribute that does not enhance overall fitness, thereby excluding it from consideration in the algorithm. The specific fitness function that evaluates a given weight combination vary based on the optimization goal. More details are provided in Section 4.4.
# 4.3.2 VON Similo++
VON Similo $^ { + + }$ is an improved version of VON Similo, optimized with the same process as Similo $^ { + + }$ but with a different objective function: instead of trying to locate the concrete target, an element will also be allowed as a match if it is in the visual or textual overlap or the target is among the top ten highest ranked elements.
# 4.3.3 HybridSimilo
The HybridSimilo approach aims to overcome the limitations of VON Similo mentioned in T4. As discussed, approaches using visual overlap can only identify all elements in the visual overlap with the same accuracy, but not the exact element described by the locator. This can lead to problems in certain testing scenarios, as the element selected in the visual overlap might not be the exact target. On the other hand, Similo can identify the one single concrete element with high accuracy. A preliminary benchmark showed that the basic VON Similo algorithm could directly identify the target element in $8 5 . 5 \%$ of cases and rank it among the top five in $9 5 . 5 \%$ of cases. At the same time, Similo was able to identify the target element in $8 8 \%$ of cases directly and in $9 4 \%$ among the top five. While the basic VON Similo algorihm is superior in selecting all the elements in the visual overlap, it is inferior in selecting the exact target element.
The idea is to overcome the limitations of VON Similo by combining both alogrithms, leveraging the strenghts of both. We pre-select the elements in the visual overlap using VON Similo and then identify the concrete target among them using Similo. In theory, the visual overlap identified by VON Similo should contain elements which differ in their tags, attributes and properties, as the contribute to the formation of a visually cohesive unit. Similo should be able to correctly select the concrete target from the this pre-selection made by VON Similo. As the algorithm combines Similo and VON Similo, we named it HybridSimilo.
# 4.4 Metrics
Across all two replicated papers, the authors used different evaluation metrics. The metrics proposed in the existing papers are:
Metric 1 Similo. For each of the pairs, the algorithm is tasked to find the the $C ^ { \prime }$ among all candidates. If the selected candidate is equal to $C ^ { \prime }$ or a direct parent or child in the DOM structure of $C ^ { \prime }$ the selection is considered a match.
Metric 2 VON Similo. For the evaluation the authors compared all pairs of original and updated elementsby calculating the normalized Similo and VON Similo score between the pair and classifying the pair a match if the score exceeded a certain threshold. They found that for VON Similo a threshold of 0.4 was optimal and for Similo a threshold of 0.28.
In our replication, we consider three additional metrics.
Metric 3 . Visual or Textual Overlap. Similar to metric 1, the algorithm is tasked to find the correct element among all candidates on an updated version of a websites given the target. A localization is deemed correct if the visual overlap or textual overlap of the located element contains the concrete target. This metrics deems significantly more localization’s as correct compared to the more restrictive metric used in Similo. This metric was also used in the successor LLM VON Similo. Metric 4 Exact Match. This new metric determines a located candidate a match if the located element without overlapping elements is the exact target. This is the most restrictive metric but also the most accurate one. A high score indicates that the algorithm is able to correctly identify the target element and in a testing scenario all possible use cases of a locator can be handled (i.e. clicking on an element, entering text, comparing properties).
Metric 5 Locator Changed - Exact Match. This metric focuses on an algorithms performance specifically for elements where traditional unique locators (ID, XPath, or ID-XPath) have failed due to website updates. It is similar to Metric 4, which considers all element pairs, but Metric 5 focuses exclusively on the subset of elements that represent the cases where a real test suite would fail and manual locator repair would be needed.
By focusing on these cases, we can evaluate the effectiveness of different algorithms specifically in situations where conventional locator strategies fail between updates to the website. This metric provides a more focused assessment of the algorithm’s capability to address the main challenge in web test maintenance, correctly recovering from broken locators without manual intervention.
Metric 6 Fitness. In the extended benchmark, we collected detailed information about the variations between different versions, specifically focusing on the elements and their locators. We categorized the element changes into three types, namely No change, Minor change, and Major change. No change refers to elements that retain identical attributes across versions. Minor change includes elements that maintain the same tag, text, and attributes, with their locations shifting by no more than 10 pixels in any direction and dimensions changing by no more than 5 pixels, including modifications in the CSS style. We classify as Major change all other evolution patterns.
Additionally, the elements were categorized based on their locators into three groups, namely All locators work, Absolute XPath does not work, No locators work. We then assigned a localization score to each element pair based on these categories (we report the actual scores in our replication package). Higher scores were given for more significant changes. The overall fitness score for each pair is the aggregate of their localization scores, rewarding the full score if the exact element was identified and one-quarter of the score if there was only partial overlap.
In this replication study, we evaluate the two original algorithms (i.e., Similo and VON Similo), along with the our new extensions, using all evaluation metrics.
# 4.5 Procedure
Concerning RQ0, we executed Similo on the original Similo benchmark, Similo and VON Similo on the original VON Similo benchmark.
Concerning $\mathrm { R Q _ { 1 } }$ , we executed the different Similo and VON Similo from each paper on the Similo and VON Similo benchmarks as well as our extended benchmark. For all combinations we captured all metrics M1-M6, if feasible.
Concerning $\mathrm { R Q _ { 2 } }$ , we employed the optimization process described in Section 4.3.1 to the Similo and VON Similo algorithm on different benchmarks and using different fitness metrics. Specifically we optimized Similo on the LLM VON Similo benchmark, as it is the most recent one, for metric M3 and M4 and VON Similo for M3, to provide comparability with the other algorithms. On the extended benchmark we optimized Similo for M6.
Concerning RQ3, we utilized the optimized algorithms from RQ2, specifically VON Similo optimized on the Similo benchmark to improve M3, as well as Similo optimized on the Similo benchmark to improve M4. We then combined these to first select a set of candidates with VON Similo and then determine the exact match among those candidates with Similo.
Overall, our experiment includes nine algorithm configurations under test, four original ones by selecting Similo and VON Similo from the original papers as well as seven new ones. As our evaluation set comprises 10,376 element pairs overall, we ran 280,233 localization attempts in our replication and extended study.
# 5 Results
Table 2 shows the results for RQ0-1-3.
The columns show the results for different metrics with the number of localization complying with the metric first and the percentage of the total number of localizations in brackets. The metric in parenthesis indicates the benchmark the algorithm was optimized on as well as the metrics being optimized by the genetic algorithm. I.e. (LLM, M4) means that the algorithm was optimized on the LLM VON Similo benchmark and the metric M4 was used as a fitness function. HybridSimilo is not applicable to M2, because the process of pre-selection does not work with the study setup, where a concrete element pair is given and the score needs be calculated. The columns are each for a specific variation of Similo. For the Similo algorithms taken from the original papers (Similo and VON Similo) are followed by the specific paper they are taken from in brackets. For the extended Similo algorithms (Similo $^ { + + }$ , VON Similo++, HybridSimilo) the content of the brackets indicates on which benchmark the algorithm was optimized and what metric was used as a fitness function. Results are reported separately for each considered benchmark. For metrics M2 and M6 the underlying numbers, i.e., the number of elements correctly classified by the threshold and the exact fitness have no informative value, why we only report the percentage.
# 5.1 Replication (RQ0)
For Similo, the replicated algorithm and benchmark found $8 8 . 9 9 \%$ elements, where the original paper was able to locate $8 8 . 6 4 \%$ of them. For VON Similo used in the LLM VON Similo paper, our replication found $9 1 . 6 5 \%$ of elements, where the original paper found $9 1 . 2 9 \%$ of elements. These minor differences were expected, as we reloaded the benchmarks from the Web Archive and differences in browser versions and window size can alter coordinates, shapes and areas as well as neighboring text used in the algorithm.
When replicating Similo and VON Similo algorithms on the VON Similo benchmark, we found challenges associated with the use of random non-fitting element pairs in the original benchmark, as the specific elements can change the results. Nonetheless, we found an accuracy of $9 3 . 6 4 \%$ for VON Similo, compared to $9 4 . 1 \%$ in the original paper, and a replicated accuracy of $8 5 . 2 2 \%$ for Similo compared to the original $8 2 . 3 0 \%$ . Based on these results, we consider our reproduction of the original results to be successful.
5.2 Comparison $\mathrm { ( R Q _ { 1 } ) }$ )
Our results show that our assumption that VON Similo performs worse than Similo in locating a concrete element and not just its visual overlap is correct. In our experiments, VON Similo under performs Similo on all benchmark, except the VON Similo benchmark, on the metric M3. This reinforces our initial hypothesis that Similo excels not only in identifying exact elements but also in recognizing their visual overlaps. VON Similo surpasses Similo in scenarios involving broader overlap criteria, such as M3 (visual or textual overlaps). This suggests that while
Table 2: RQ0-1-3: Results for all algorithms across all benchmarks and evaluation metrics (best results are highlighted in bold).
Similo tends to either rank the correct targets very high, VON Similo consistently ranks them high, albeit not in the top position.
# 5.3 Improvements (RQ2)
Table 3 shows the concrete values we found for each of the property and optimized algorithm. The header shows the algorithm we optimized, the benchmark it was done on and the metric we used as the optimization objective.
We found that different sets of properties, similarity functions and weights can significantly improve the capabilities of Similo and VON Similo. On the LLM VON Similo benchmark, the optimization of Similo improved the M4 (exact match) metric from $8 7 . 5 4 \%$ to $9 1 . 7 8 \%$ and the M3 (overlap visual and textual) from $9 1 . 6 5 \%$ to $9 5 . 6 4 \%$ . While the original algorithms scored very high, in our study we show that there is room for improvement by optimizing the chosen properties, weights and similarity functions. Because this optimization used a very broad set of websites, with different specifics due to different web frameworks and different web design, we assume that the optimization would be even more effective for a smaller, more specific set of websites or for a specific web framework.
Table 3: $\mathrm { R Q _ { 2 } }$ : Optimized weights and similarity functions for Similo and VON Similo.
Some properties like class or is button are often ranked low or excluded entirely from the algorithm. Other properties such as the name, type, aria-label, location, visible text, neighbor text and attributes have high weights no matter the metric, indicating that they are important properties for Similo. It is important to note that the optimization process is random and might return different local optima, depending on the initial values. This explains outliers like the sudden high weight of “is button” for VON Similo (Sim. M3).
5.4 Hybrid (RQ3)
Combining Similo and VON Similo can improve the accuracy of the locator relocalization, but only in specific configurations. On the extended benchmark, Similo performs better than VON Similo in identifying the visual overlaps. Therefore, we utilize VON Similo to find the the top ten highest ranking elements, as it is the only task where VON Similo outperforms Similo. Nonetheless, the resulting HybridSimilo algorithm, which utilizes VON Similo $^ { + + }$ to pre-select ten candidates and Similo $^ { + + }$ to locate the concrete target, HybridSimilo performs similar to Similo $^ { + + }$ for all metrics.
On the original benchmarks, specifically the LLM VON Similo benchmark, VON Similo $^ { + + }$ proves to be the best algorithm at selecting the visual or textual overlap (M3). Based on the pre-selected overlap by VON Similo $^ { + + }$ , we then used Similo $^ { + + }$ to find the concrete element. This specific HybridSimilo version outperforms all other algorithms on the M1 and M4-M6 metric on all original benchmarks. Particularly, it is able to locate $9 5 . 5 1 \%$ of elements on the LLM VON Similo benchmark under the M3 metric, slightly outperforming LLM VON Similo $( 9 5 . 0 \% )$ ) [25]. These results suggest that VON Similo is effective in identifying an initial selection when there is a significant difference between versions. This selection can then be refined by Similo. However, VON Similo does not provide additional advantages in case of minor updates between versions.
# 5.5 Threats to Validity
# 5.5.1 Internal Validity
We compared all variants of the replicated and extended algorithms under identical experimental settings and on the same evaluation sets, which was not the case in the original studies. The main threat to internal validity concerns our implementation of the original algorithms and evaluation scripts, which we tested thoroughly. Our replication of the original results confirms the correctness of our replication efforts.
# 5.5.2 External Validity
The limited number of websites in our evaluation poses a threat in terms of generalizability of our results to other web apps. We assume that the datasets comprising the most popular websites provides a realistic representation of how websites change over time, as seen in a continuous integration environment. Furthermore, only the static front pages of websites were used for the dataset. Front pages typically consist of links, headers, images, and menu items and may not represent the diversity of elements found in other parts of a web application. This selection bias could affect the algorithm’s effectiveness on pages different than front pages, where elements such as selects, tables, and table items appear more frequently. The final algorithm, with its optimized weights and similarity functions, is tailored to this dataset. Despite using cross-validation to prevent overfitting, there is a risk that the results are overly optimized for the dataset, potentially affecting the algorithm’s performance on different websites. The data was scraped from the Wayback Machine, which may not always capture a website’s complete or accurate representation. Additionally, the snapshots are scraped across different browsers in different countries and further pre-processed before saving and rendering them, introducing further inaccuracies into the dataset. All these factors could lead the optimized algorithm to perform differently in sanitized testing environments.
# 6 Discussion
Discussing Main Differences. Column M2 (i.e. the metric used for the VON Similo algorithm) differs significantly from the other metrics, as it is the only metric on which VON Similo outperforms all other algorithms. This difference is primarily due to variations in the study setup compared to those used for Similo and LLM VON Similo. The VON Similo reported than $9 4 . 1 \%$ of element pairs were accurately classified as either matching or non-matching. Given that in case of a localization in a test case the target $T$ will be compared with all $C$ , we would require $| C | - 1$ correct classifications as non-matching (sensitivity being 0.97) and one as matching (recall being 0.922). With approximately 800 elements in $C$ to compare with $T$ , the probability of correctly identifying the target in the new version is calculated as $0 . 9 7 ^ { 7 9 9 } \cdot 0 . 9 2$ , which approximates to 0.
Our results show that VON Similo is not substantially better than Similo, even for identifying the visual overlap. Based on our efforts in hybridizing Similo and VON Similo, we suggest that these algorithms should always be used jointly, as the concrete selection with Similo is expected to improve the locator detection accuracy.
Consequences for use in practical web testing. The successfully replicated results show that the proposed algorithms are effective at identifying web elements, even when their standard locators (ID, XPath, ID-XPath) are not working. The original Similo algorithm is able to correclty identify $9 5 . 8 \%$ of elements whose original locators are not working anymore. This means that only 4 out of 100 locator breakages need to be manually repaired by a developer, significantly reducing the cost of maintaining a web application. The optimized version of Similo even reduces this number to 1 out of 100. For example, other prominent locator strategies like Tag $+$ Text only achieve a success rate of around $8 2 \%$ . With the help of the implemented library, developers can directly incorporate the algorithms into their testing process to save time and effort.
Empirical maintenance studies put concrete numbers on the cost of manual locator repair. In an industrial case study involving four real-world Selenium suites, Leotta et al. report that repairing a single release after locator breakage requires $0 . 6 0 \mathrm { ~ h ~ }$ when ID locators are used and $3 . 0 5 \mathrm { ~ h ~ }$ when XPath locators are used [13]. At the current U.S. market rate for a test-automation engineer (average $\$ 90\mathrm { ~ k ~ }$ p.a. [11], equivalent to $\$ 43.5\mathrm { ~ h ~ } ^ { - 1 }$ ), this corresponds to $\$ 26$ and $\$ 133$ per fix. Assuming a medium-sized organization with 500 test suites $\times \ 1 0$ locators (5,000 locators overall) releasing weekly, and observing that $2 6 \%$ of XPath locators break (1,300 failures) and in $1 9 \%$ neither ID nor XPath work (950 failures), maintaining pure, XPath suites would cost approximately $\$ 8.6$ million annually, compared to $\$ 1.2$ million for ID-based suites. These findings echo Accenture’s independently reported $\$ 50 – 8120$ million annual spend on GUI-test maintenance [8].
With the original Similo algorithm—automatically recovering $9 6 \%$ of those 950-1300 weekly dual-break failures—manual repairs drop to just 38-52 per release ( $\$ 916$ . The optimized Similo variant $9 9 \%$ recovery) further reduces these to around 10-13 repairs $\mathfrak { P } 2 6 0 \ – \mathfrak { H } 1 , 7 2 9 )$ per release. Over a 50-release year, this cuts annual maintenance from $\$ 1.2 – \ S 8.6$ million (no automated healing) to about $\$ 49,400 – \$ 345,800$ with Similo or only $\$ 13,000 – \$ 86,450$ with optimized Similo, a $9 6 \mathrm { - } 9 9 \ \%$ reduction in spend. Even marginal gains in locator-healing accuracy thus translate into five-figures annual savings in good maintained test suites or six-figure annual savings in those using fragile locators. This underlines the clear industrial value of further algorithmic improvements.
Impact of long and short inter-version time intervals on performance and benchmark. At first glance, it may appear that improved methods offer negligible benefits for benchmarks with shorter inter-version intervals since baseline algorithms already perform strongly. However, considering improvements relative to the maximum achievable performance reveals their true significance. For instance, in long-term intervals, performance improved from $8 6 . 6 \%$ to $9 1 . 7 \%$ , representing $3 8 \%$ of the possible remaining margin towards $1 0 0 \%$ . For short-term intervals, the baseline already achieved $9 9 . 0 \%$ , and improved to $9 9 . 7 \%$ , again representing a substantial $7 0 \%$ of remaining possible improvement. Furthermore, focusing specifically on elements with changed locators, performance improved from $9 5 . 8 \%$ to $9 8 . 8 \%$ , covering an even larger relative improvement of $7 1 \%$ . These results underline that even minor absolute improvements at high baseline levels can translate into significant practical gains, particularly when addressing more challenging locator updates.
Error analysis. Despite optimization efforts, certain web elements remain consistently misidentified. Common reasons for such errors include:
– Substantial changes in attributes between versions (e.g., altered text or tags). This can for example happen when a “Log in” button changes from a <button> tag to a styled <span>, significantly reducing similarity scores. – Similarity in visual placement outweighing textual cues. For example, a new “Sign up” button positioned exactly like the old “Log in” button might cause the algorithm to prioritize visual similarities over differing text. – Inability to interpret purely visual elements like icons. When visually similar icons without textual differences swap positions, the algorithm frequently misclassifies them due to lack of textual or DOM-based identifiers.
These issues highlight the need to incorporate additional visual or semantic analysis methods into future approaches.
Need for Standardized Benchmarks in E2E Web Testing. While this paper provides comparability between the different versions of Similo, we still do not assess their performance on actual running web tests, or locators used in practice by existing tests. Indeed, the existing literature on E2E web testing utilizes various benchmarks and evaluation metrics, inhibiting comparisons between tools. Creating a suitable benchmark itself is a challenging task that demands considerable time and effort, which may deter the development of new locator algorithms.
Nevertheless, we advocate the need for the development of a standardized benchmark for evaluating locator generation algorithms, free from the limitations outlined in Section 3.4. Ideally, such a benchmark would encompass a broad spectrum of websites, accommodate frequent updates, and include web elements typical in real-world test scenarios. The availability of a well-curated and consolidated benchmark would enhance the comparability of tools and provide valuable insights into which algorithms are best suited for specific testing environments, potentially leading to the practical application of these locator algorithms if they prove effective against the current state of the art.
# 7 Related Work
We already discussed Similo [24], VON Similo [23], and LLM VON Similo [25], the first two were included in the empirical comparison conducted in this work. Besides the approaches investigated in this paper, several techniques have been proposed in literature to re-identify changed web elements in E2E web tests [1, 5, 12, 14, 15, 16, 17, 19, 20, 21, 26, 28, 29, 30, 31]. We describe the main propositions next. Similarly to the algorithms investigated in this paper, COLOR [12] re-identifies changed web elements utilizing multiple properties of a target web element. Leotta et al. propose ROBULA $+$ [17] to create human-readable, short, and simple XPath expressions through an iterative refinement process of a generic XPath until the target element is uniquely identified. In follow-up work, the authors have extended the original algorithm by incorporating attribute robustness ranking, excluding fragile attributes, and adding textual information for improved stability [20]. In other works, the robust XPath locator problem is formulated as a graph search problem [16, 18]. Montoto et al. [21] propose an algorithm for identifying elements in AJAX websites based on XPath expressions enriched with textual and attribute information.
Concerning ensemble-based strategies, Leotta et al. [19] introduce the novel concept of MultiLocator in which an ensemble of locator-generating algorithms is used to overcome the limitations of individual approaches. The algorithm captures multiple XPath-based locators for each element. It uses them to identify the element by assigning different voting rights or weights to different locators, choosing the one with the highest vote. LLM VON Similo is the latest iteration of the Similo algorithm [25]. The algorithm first uses VON Similo to rank all elements on the website by their score. It then takes the top 10 visual overlaps, as the target element is most likely among them. The algorithm then utilizes GPT-4 [7] to find the target element among the pre-selected candidates. The algorithm converts the ten elements and the target element into JSON format. It then sends a request to the large language model with the target and the ten candidates, asking it to identify the target’s match. The LLM responds with a single number, indicating the target’s position in the list of candidate elements. Since VON Similo relies on proprietary OpenAI’s APIs and uses an unspecified version of GPT-4 with unknown temperature and configuration settings, its results cannot be reliably reproduced in a controlled experimental environment. Therefore, we do not attempt a full replication of this method in this paper. In a recent work, Coppola et al. [6] extend the MultiLocator approach with standard a Learning to Rank solution to facilitate the relocalization of web elements.
All related studies compare their proposed methods with the state-of-the-art solutions available at the time of publication. In contrast, our work provides an in-depth analysis specifically focused on the family of Similo algorithms, evaluating different versions on both existing and newly developed benchmarks. Our replication study represents a new contribution as it is the first of its kind. Other than the replication, this work provides extensions of the original algorithms in two directions (i.e., a weight-optimized variant and a hybrid approach), other than a larger benchmark.
# 8 Future Work
To further improve the algorithms accuracy, we believe that extending the set of properties to include non-DOM-based properties would be necessary. Analyzing the results revealed that icons were often mismatched, as the algorithm cannot differentiate between them as effectively as it can with text.
Additionally, future work could address the limitations of using snapshots from the Wayback Machine, such as the reliability and completeness of the data. Exploring alternative sources or methods for obtaining website snapshots, such as utilizing open-source applications with complete version histories, could help mitigate these limitations and provide more accurate and reliable data.
We have observed a correlation between property stability and optimal weights. Finding concrete evidence of correlations between property stability, uniqueness, or other metrics for a specific website could enable a self-tuning and optimizing Similo algorithm. This would allow the algorithm to adapt to the website’s properties and changes over time, further improving its accuracy and robustness. | Fragile web tests, primarily caused by locator breakages, are a persistent
challenge in web development. Hence, researchers have proposed techniques for
web-element re-identification in which algorithms utilize a range of element
properties to relocate elements on updated versions of websites based on
similarity scoring. In this paper, we replicate the original studies of the
most recent propositions in the literature, namely the Similo algorithm and its
successor, VON Similo. We also acknowledge and reconsider assumptions related
to threats to validity in the original studies, which prompted additional
analysis and the development of mitigation techniques. Our analysis revealed
that VON Similo, despite its novel approach, tends to produce more false
positives than Similo. We mitigated these issues through algorithmic
refinements and optimization algorithms that enhance parameters and comparison
methods across all Similo variants, improving the accuracy of Similo on its
original benchmark by 5.62%. Moreover, we extend the replicated studies by
proposing a larger evaluation benchmark (23x bigger than the original study) as
well as a novel approach that combines the strengths of both Similo and VON
Similo, called HybridSimilo. The combined approach achieved a gain comparable
to the improved Similo alone. Results on the extended benchmark show that
HybridSimilo locates 98.8% of elements with broken locators in realistic
testing scenarios. | [
"cs.SE"
] |
# 1 Introduction
Humans have an innate desire to create and inhabit personalized worlds, whether it’s children building sandcastles or artists designing landscapes. This creative drive extends to digital spaces, especially in VR/XR applications, where users expect to be immersed in custom environments with panoramic views, high-fidelity visuals, and real-time interactions. However, building such immersive 3D scenes remains challenging. Handcrafted 3D modeling requires specialized skills and considerable efforts, while recent generative methods like object-compositional generation[Engstler et al. 2025; Huang et al. 2024; Yao et al. 2025], LLM-powered modeling tools[Ahuja 2025a] and frameworks[Ling et al. 2025; Liu et al. 2025; Zhou et al. 2024c], and approximating through 3D Gaussians [Yang et al. 2024c; Yu et al. 2025; Zhou et al. 2025] often struggle to balance photorealism with computational efficiency. These approaches prioritize fully detailed geometry or massive Gaussians to achieve realism, but often result in overly complex scene representations that hinder real-time performance on VR headsets, or require handcrafted or time-consuming decimation and compression to make them usable. This raises a key question: is starting from complex geometry or exhaustive 3D modeling truly necessary to create immersive VR experiences?
We argue that it is not. In this paper, we propose ImmerseGen, a novel agent-guided framework that models immersive scenes as hierarchical compositions of lightweight RGBA-textured geometric proxies, including simplified terrain meshes and alpha-textured billboard meshes.
The formulation offers several important advantages:
1) Such modeling paradigm enabling agents to flexibly guide generative models in synthesizing coherent, context-aware textures that integrate seamlessly with the panoramic world;
2) Rather than modeling the scene with complex geometry and then simplifying it, our approach bypasses this process via generating photorealistic texture directly on lightweight geometric proxies leveraging SOTA image generators, alleviating reliance on detailed asset creation and preserving the texture quality without artifacts introduced in decimation or Gaussian approximations.
3) It delivers compact scene representations that allow real-time rendering at smooth frame rates, even on standalone mobile platforms such as VR headsets.
To establish this hierarchical paradigm, ImmerseGen first creates the base layer world, which employs a terrain-conditioned RGBA texturing scheme on a simplified terrain mesh with user-centric UV mapping. More specifically, it employs a user-centric texturing and mapping scheme that synthesizes and allocates higher texture resolution based on central camera origin, prioritizing primary viewing area, rather than uniformly covering the entire scene with limited quality [Engstler et al. 2025; Raistrick et al. 2023b]. Then, ImmerseGen automatically enriches the environment with generative scenery assets, which are clearly separated into distinct depth levels. Midground assets, such as distant trees or vegetation, are efficiently created using planar billboard textures, while foreground assets, closer to the user, are generated with alpha-textured cards placed over retrieved low-poly 3D template meshes. This mechanism smartly allocates representation detail, maintaining both visual fidelity and rendering efficiency at every scale.
Figure 2: Asset comparison from different sources. We compare assets created by learning-based generative methods (blue captions) and or artists (green captions). Our generative RGBD-textured proxy assets achieves better visual details than existing models [Xiang et al. 2024; Zhang et al. 2024c] with fewer triangles, delivering realism comparable to artistcreated high-poly or baked assets.
While RGBA-textured proxies simplify asset modeling, assembling coherent 3D scenes still requires manual adjustment and expert knowledge. To simplify this process, we develop a VisualLanguage Models (VLMs)-based agentic system that interprets user text prompts into immersive environments. However, directly using VLMs often face challenges in spatial understanding that hinder layout accuracy. To address this, we introduce a grid-based semantic analysis strategy, enhancing the spatial comprehension with coarse-to-fine visual prompt and raycasting-based validation, thus mitigating placement errors and inconsistencies existing in naïve VLMs. Moreover, ImmerseGen supplements the immersive experience by incorporating modular dynamics (e.g., flowing water, drifting clouds) and ambient audio (e.g., wind, birdcalls), delivering a fully multisensory environment.
In summary, our contributions are as follows:
1) We propose ImmerseGen, a novel agent-guided 3D environment generation framework that uses simplified geometric proxies with alpha-textured meshes to produce compact, photorealistic worlds ready for real-time mobile VR rendering.
2) We propose a novel RGBA texturing paradigm that first synthesizes 8K terrain textures using a geometry-conditioned panorama generator via user-centric mapping, and then directly generates alpha-textured proxy assets, avoiding fidelity loss that typically results from mesh decimation.
3) To automate scene creation from user prompts, we introduce VLM-based modeling agents equipped with a novel grid-based semantic analysis, enabling 3D spatial reasoning from 2D observations and ensuring accurate asset placement. ImmerseGen further enhances immersion with dynamic effects and ambient audio for a multisensory experience.
4) Experiments on multiple scene-generation scenarios and live mobile VR applications show that ImmerseGen outperforms previous methods in visual quality, realism, spatial coherence, and rendering efficiency for immersive real-time VR experiences.
Figure 3: Overview. Given a user’s textual input, our method first retrieve a base terrain and apply terrain-conditioned texturing to synthesize RGBA terrain texture and skybox aligned with base mesh, forming the base world. Next, We enrich the environment by introducing lightweight assets, where VLM-based asset agents are used to select appropriate templates, design detailed asset prompts and determine asset arrangement within the scene. Each placed asset is then instantiated as alpha-textured assets through context-aware RGBA texture synthesis. Finally, we enhance multi-modal immersion by incorporating dynamic visual effects and synthesized ambient sound based on the generated scene.
# 2 Related works
Agentic Scene Generation. Early efforts in procedural content generation (PCG) for immersive environments primarily rely on rule-based systems [Gasch et al. 2022; Lipp et al. 2011; Parish and Müller 2001; Zhang et al. 2019], where spatial relationships and asset placements are meticulously defined through handcrafted rules. Infinigen [Raistrick et al. 2023a] advances this process by leveraging Blender scripts to orchestrate multiple procedural generators, enabling the creation of larger and more complex scenes. However, PCG methods inherently limit adaptability to novel scenarios and user-driven instructions. The advent of LLMs and VLMs introduces a paradigm shift in scene generation, enabling more intuitive, instruction-based workflows. Recent methods like BlenderMCP [Ahuja 2025b] increasingly harness the capabilities of LLMs to automate the generation process, employing functioncalling agents to interpret textual prompts [Öcal et al. 2024; Yang et al. 2024b; Zhou et al. 2024b], design scene layouts [Lin and Mu 2024; Sun et al. 2024b], and populate environments [Ling et al. 2025; Zhou et al. 2024c] with assets retrieved from pre-built libraries [Ahuja 2025b; Kumaran et al. 2023; Liu et al. 2024, 2025; Sun et al. 2023; Zhou et al. 2024c]. These systems demonstrate significant potential in generating diverse, large-scale scenes from high-level descriptions, streamlining the content creation pipeline. However, existing LLM/VLM-based approaches rely heavily on asset libraries, often requiring a trade-off between quality and efficiency. Moreover, the precision of VLM-guided asset placement often proves insufficient in complex scenarios. In contrast, ImmerseGen addresses these limitations by introducing lightweight proxy assets and semantic grid-based arrangement by agents, enabling the creation of compact, photorealistic worlds.
Learning-based Generation. Recently, learning-based generation methods have shown promising results in creating 2D and 3D contents [Hong et al. 2023; Rombach et al. 2022; Zhang et al. 2024c; Zou et al. 2024]. However, unlike 3D object generation that benefits from diverse object datasets [Deitke et al. 2023; Yu et al. 2023] for model training, 3D scene generation still faces challenges [Höllein et al. 2023; Huang et al. 2024; Meng et al. 2024; Wu et al. 2024; Xu et al. 2024] due to the lack of comprehensive scene-level data and unified representations. Early methods either learn a generative neural field with GAN [Chen et al. 2023; Hao et al. 2021; Lin et al. 2023; Xie et al. 2024] or 2D diffusion priors [Cohen-Bar et al. 2023; Zhang et al. 2024a, 2023b], but fail to produce detailed appearance. Recently, other lines of work tend to generate images and lift them to 3D space through depth prediction, combined with outpainting techniques to expand the scene [Chung et al. 2023; Fridman et al. 2024; Yu et al. 2025, 2024]. However, these methods typically produce incomplete 3D worlds (e.g., missing 360-degree views or geometry under the feet), thus failing to meet the demands of immersive VR applications. To create a complete surrounding world, some methods lift the generated panoramic images [Wang et al. 2024; Zhang et al. 2024d] to 3D space with depth estimation and inpainting [Yang et al. 2024c; Zhou et al. 2024a, 2025], but still faces challenges in producing 3D coherent world due to the inconsistency of novel view inpainting. More recent approaches utilize video models for 3D scene creation [Gao et al. 2024; Go et al. 2024; Liang et al. 2024; Sun et al. 2024a], which either suffer from blurry backgrounds or fail to guarantee fully explorable 360- degree environments. Additionally, these methods often produce a large number of point clouds or 3D Gaussians for scene representation, making it challenging to achieve high-quality rendering while maintaining reasonable computational costs.
Traditional Asset Creation. Conventional asset creation pipelines typically follow a two-stage process: detailed geometric modeling followed by texture mapping. This modeling-first paradigm is prevalent in CG content production where artists craft complex meshes and apply high-resolution textures to achieve visual realism. However, when deploying such assets in real-time rendering application like VR and games, these models are often simplified through decimation techniques, such as mesh simplification [Li et al. 2018; Liu et al. 2017], billboard generation [Décoret et al. 2003; Kratt et al.
Figure 4: Workflow of base world generation. Panoramic textures for terrain mesh and sky are generated for the base world. To tame the diffusion model for terrain texturing, we propose geometric adaption (b) for depth control and usercentric texture mapping (c).
2014], or level-of-detail (LOD) hierarchies [Huang et al. 2025; Zhang et al. 2024b], along with baked textures. In natural scenarios, many works on terrain generation and tree modeling [Lee et al. 2023; Li et al. 2021] have been proposed, but they often lack diversity and realism. While effective, this workflow incurs significant manual effort or computational cost, as it first generates overly detailed representations only to later reduce them for efficiency. In contrast, ImmerseGen bypasses this complexity and the need for post-hoc simplification by directly synthesizing alpha-textured proxy assets tailored for lightweight rendering, enabling scalable and photorealistic scene generation optimized for immersive applications.
# 3 Method
We introduce ImmerseGen, an agent-guided framework for generating immersive 3D scenes from textual prompts. As shown in Fig. 1, we create the scene in a hierarchical diagram guided by VLM-based agents. First, we generate a layered environment via terrain-conditioned texturing, where panoramic sky and RGBA terrain textures are synthesized upon a retrieved terrain mesh (Sec.3.1). Next, we enrich the scene by placing lightweight mesh proxies and generating prompt proposals leveraging enhanced agents with semantic grid analysis. The selected assets are then instantiated using a RGBA texture synthesis scheme (Sec. 3.2). Finally, we augment the scene with dynamic effects, such as flowing water and ambient sound, delivering a multisensory experience. Due to page limitation, we refer readers to the supplementary material for more details.
# 3.1 Base World Generation
From textual prompts to base terrain. Given a user’s textual prompt describing the world, we first retrieve a suitable base terrain mesh from a pre-generated template library. These templates are created using procedural content generation tools, followed by postprocessing steps including remeshing, visibility culling, and artistic captioning to support effective retrieval. Since visual diversity is primarily introduced through subsequent generative texturing, this retrieval-based strategy strikes a practical balance between efficiency and variety. To better align with terrain characteristics and improves diversity, we also use a prompt enhancer extend the user’s raw prompts with imaginative and contextually relevant details.
Terrain-conditioned texturing. As demonstrated in Fig 4 (a), given a base terrain mesh and text prompts, we first generate panoramic sky texture and alpha ground textures upon the mesh. To support terrain texture synthesis in equirectangular projection (ERP), we adopt a two-stage training pipeline. We first train a panoramic diffusion model on ERP data conditioned on textual prompts [Rombach et al. 2022]. Then, we extend this model by training a depthconditioned ControlNet [Zhang et al. 2023a], which takes as input a panoramic depth map $\mathbf { D } _ { \mathcal { M } }$ estimated from a neural depth estimator [Yang et al. 2024a]. During inference, we combine the both module to generate a panoramic texture ${ \mathbf I } _ { t }$ that aligns with the terrain mesh $\mathcal { M }$ , formulated as:
$$
\mathbf { I } _ { t } = \mathcal { U } ( \mathcal { G } ( \mathbf { D } _ { \mathcal { M } } ; C _ { \mathrm { G l o b a l } } , C _ { \mathrm { R e g i o n } } ) ) ,
$$
where $\mathbf { D } _ { \mathcal { M } }$ is the conditioning panoramic depth map rendered from the terrain mesh, $\mathcal { G }$ is the conditional diffusion model, $C _ { G l o b a l }$ is the text prompt for global geographic description, $C _ { \mathrm { R e g i o n } }$ is the optional regional prompts for generating designated geographic feature (such as water body), and $\mathcal { U }$ is the conditioned upscaling model that produces 8K textures to enhance fine-grained details.
To separate the terrain texture and sky texture while maintaining high resolution, we perform tile-based matting and sky outpainting on the panorama, which yields 8K fine-grained alpha matte and pure sky texture guided on the terrain mask. This detailed alpha matte produces highly detailed landscape visual with low-poly terrain meshes (such as trees and houses beneath the blue sky).
Depth control with geometric adaptation. While it is technically plausible to apply conditional diffusion for mesh texturing, we find it non-trivial to produce 3D-coherent textures that align well with the terrain and meet immersive standards, i.e., degraded quality as shown in Sec. 9. This difficulty arises primarily from the domain gap between the estimated depth for ControlNet training and rendered metric depth maps for inference-time conditioning. To tackle this issue, we propose a geometric adaptation scheme that remaps the rendered metric depth according to better match the domain of training-time estimated depth. Specifically, we retrieve the most similar depth map $\mathbf { D } _ { \mathrm { R e t r i e v e } }$ from a sampled training set $\mathcal { L }$ using cosine similarity, and apply a polynomial remapping function:
$$
\hat { \mathbf { D } } _ { \mathcal { M } } = \mathcal { P } ( \mathbf { D } _ { \mathcal { M } } ; \mathbf { D } _ { \mathrm { R e t r i e v e } } ) ,
$$
where $\hat { \mathbf { D } } _ { \mathcal { M } }$ is the remapped depth, and $\mathcal { P }$ is a third-degree polynomial mapping function. Practically, we downsample both $\mathbf { D } _ { \mathcal { M } }$ and $\mathbf { D } _ { \mathrm { R e t r i e v e } }$ to $3 2 \times 1 6$ resolution to estimate the polynomial coefficients, which are then applied to the full-resolution depth map D .
Terrain texture mapping. To efficiently texture the terrain with the generated panoramic texture while preserving visual fidelity, we precompute a user-centric panoramic UV coordinates for the terrain mesh, as illustrated in Fig. 4 (c). Thus, the texture can be directly sampled during the rendering without back-projection or baking procedures. Specifically, the UV coordinate for each mesh vertex can be calculated by transforming the coordinates from object space to camera space. Given a vertex position in camera space ${ \mathbf p } = ( x , y , z ) ^ { \top }$ , the corresponding UV coordinate $\mathbf { u } = ( u , v ) ^ { \top }$ on the panoramic texture ${ \mathbf I } _ { t }$ can be calculated as:
Figure 5: The proposed context-aware texture synthesis (a) produces diverse RGBA textures directly on lightweight proxies with coherent context for both foreground and midground scenery (b).
$$
\mathbf { u } = \left( { \frac { 1 } { 2 \pi } } \arctan ( { \frac { x } { - z } } ) + { \frac { 1 } { 2 } } , { \frac { 1 } { \pi } } \arcsin ( { \frac { y } { \left\| \mathbf { p } \right\| } } ) + { \frac { 1 } { 2 } } \right) ^ { \top } ,
$$
where $\| \mathbf { p } \|$ denotes the L2-norm of the vertex position. To prevent texture stretching at horizontal seams, we detect UVs crossing the panoramic boundary and offset them for correct wrapping, then enable texture repeat warping mode for seamless interpolation of panoramic texture sampling.
To further improve visual fidelity around the user’s viewpoint, particularly in the polar region where the ERP have severe stretching, we first adopt an ERP-to-cubemap refinement scheme, using an image-to-image diffusion method [Meng et al. 2021] to repaint the bottom area. Then, we partition the mesh by cropping its bottom area and then reassign UV coordinates of this mesh to directly sample textures from the bottom map. Additionally, to achieve better geometric realism, we incorporate a displacement map obtained from a height estimation model adapted from [Yang et al. 2024a].
# 3.2 Agent-Guided Asset Generation
To enrich the base world with photorealistic scenery, we then add more generative 3D assets (such as vegetation) to the scene. Unlike prior methods that rely on complex modeling pipelines [Décoret et al. 2003] or off-the-shelf asset retrieval, our framework dynamically generates unique, alpha-textured asset proxies from coarse templates using generative texture synthesis, thus simplifying asset creation and enabling more flexible agent-driven design.
Defining proxies by distance. In terms of the distance from the user and the asset, we use separated proxy types for assets at different distance to trade off between quality and performance, which delivers a realistic appearance that matches the artists’ baked models while alleviates the cost of baking or decimation. As demonstrated in Fig. 1 (b) and Fig. 2, for midground objects, since users cannot perceive detailed depth changes of object surfaces, we synthesize RGBA textures on distant planar mesh (see Fig. 5 (c), a.k.a. billboard texture). For foreground objects that require stereo impression, we generate alpha textures from template mesh for each group of shared materials (such as tree leaves and trunks, see Fig. 5 (b)).
Figure 6: The proposed semantic grid-based analysis overlays a labeled grid with masked unsuitable regions as visual prompts, enabling the VLM agent to progressively select grid cells in a coarse-to-fine manner, enhancing the accuracy and semantic coherence of asset arrangement.
Asset selection and designing. To create diverse and contextually coherent scenery asset, we develop VLM-based agents to guide the asset design pipeline. First, the asset selector analyzes the rendered base world image and user’s textual description to retrieve suitable foreground asset templates from an offline-generated library, e.g., pine trees for mountainous regions or bushes for arid deserts. Next, the asset designer crafts detailed textual prompts to guide generative models in synthesizing these scenery assets. In practice, the designer examines both the generated base-world image and selected texture templates, and produces detailed descriptions for each scenery assets (such as categories, season, styles, etc.).
Asset arrangement with semantic grid-based analysis. To ensure that generative assets are placed in semantically appropriate and visually plausible locations, we introduce an asset arranger that analyzes the base world image to produce 2D position candidates, which are then back-projected to determine 3D positions through raycasting and validation. One primary challenge for the asset arranger is to generate reasonable 3D placements based solely on image-based observation. A naïve approach is to let the agent directly output the coordinate, which generally results in inaccurate positions and meaningless layout (see Sec. 4.2) due to the limited spatial understanding ability to exist models [Yang et al. 2024e]. To address this, we propose a semantic grid-based position proposal scheme, which significantly improves the asset arrangement quality. As shown in Fig. 6, we overlay the base world image with a labeled grid and mask out unsuitable regions (e.g., water, sky), forming a structured visual prompt for the VLM agent. The agent first selects coarse grid cells given this visual prompt. Then, for finer placement, each selected cell is zoomed in and subdivided into sub-grids, from which the agent will select a more precise sub-cell. The final positions are determined by randomly selecting a point within the sub-cell.
Context-aware RGBA texture synthesis. Once the agents have determined the per-asset placement and textual descriptions, we proceed to instantiate each asset by synthesizing its RGBA texture in context with the base world. To facilitate seamless integration, we propose a context-aware cascaded RGBA texture synthesis model conditioned on base world background textures, which is inspired from the layered diffusion model [Zhang and Agrawala 2024]. Given a scenery prompt $C _ { s }$ , the alpha synthesis module $\mathcal { G } _ { a }$ first generate an alpha mask $\mathbf { M } _ { c } = \mathcal { G } _ { a } ( C _ { s } ) \in \mathbb { R } ^ { H \times W }$ , serving as a sketch for subsequent texturing. To incorporate contextual information from the base world, the RGB base texture reference $\mathbf { I } _ { b } \in \mathbb { R } ^ { H \times W \times 3 }$ is injected into an initially empty RGBA canvas through alpha blending guided by $\mathbf { M } _ { c }$ . Then the texture synthesis module $\mathcal { G } _ { i }$ generates an initial scenery texture from the alpha-blended reference with the alpha mask $\mathbf { M } _ { c }$ . Note that the generated texture usually produces detailed boundary that is not perfectly aligned with the given alpha mask. Thus, the alpha channel of the initial texture is further refined through an diffusion based refinement module $\mathcal { R }$ . The full process to generate final scenery texture $\mathbf { I } _ { s } \in \mathbb { R } ^ { H \times W \times 4 }$ is formulated as:
$$
\begin{array} { r } { \mathbf { I } _ { s } = \mathcal { R } \left( \mathcal { G } _ { i } \left( \mathbf { M } _ { c } , \mathbf { I } _ { b } ; C _ { s } \right) \right) . } \end{array}
$$
For foreground scenery that already contains an alpha channel in its template model, we directly reuse its alpha as ${ \bf { M } } _ { c }$ to ensure the correct structure.
# 3.3 Multi-Modal Immersion Enhancement
To further enhance immersion beyond static 3D visuals, we introduce agent-guided multi-modal enhancement in visual dynamics and sounds (see the right part of Fig. 3).
Dynamic Shader-based Effects. We utilize VLM to analyze the scenery component of generated scene, and add shader-based dynamic effects for natural elements such as flowing water, drifting clouds, and falling rain. These effects are implemented using customizable shader parameters, including procedural flow maps, noise-based motion textures, and screen-space animations, which bring liveliness to the scene while maintaining real-time performance.
Ambient Sound Synthesis. We synthesize ambient sounds using a library of natural soundtracks tagged by content. Specifically, we analyze the rendered panorama of the complete scene and retrieve suitable natural soundtracks (such as birds, winds, and water) from the library. To support uninterrupted playback, we apply crossfading to seamless mix soundtracks for audio looping.
# 4 Experiments
# 4.1 Comparison on Scene Generation
Baselines. We compare our method with recent scene generation methods across different categories: (1) Infinigen [Raistrick et al. 2023b], which uses procedural generation with physics-based modeling; (2) DreamScene360 [Zhou et al. 2025], which lifts panoramic images to 3D space; (3) WonderWorld [Yu et al. 2025], which generates scenes through perspective outpainting. (4) LayerPano3D [Yang et al. 2024d], similar to DreamScene360, but adopts a layered representation. For a fair comparison, we use Infinigen’s scene configurations that match the same category with our generated scenes, adopt the same text prompts with us for DreamScene360 and LayerPano3D, and use the cropped perspective images from our generated panorama as the image condition for WonderWorld.
Table 1: We perform quantitative comparison on the generated 3D scenes, and compare the complexity of representation (primitive count) and runtime performance (FPS) on VR devices.
Metrics. For comprehensive comparison with the above methods, we use metrics for evaluating both prompt-scene consistency and aesthetic quality, including CLIP similarity score (CLIP-Score) [Radford et al. 2021], aesthetic score (CLIP-Aesthetic) [Schuhmann 2023] and the VLM-based visual scorer Q-Align (QA-Quality) [Wu et al. 2023].
Quantitative results. We present the quantitative comparison of our method with the baselines in Tab. 1. As shown in Tab. 1, our method outperforms all baselines in CLIP-Aesthetic score and QAQuality, demonstrating the superior visual quality of our generated scenes. For CLIP-Score, DreamScene360 and LayerPano3D also show a competitive score, since they minimize semantic loss during training while our method generates diverse textures that better extends users’ prompt (e.g., various geographic feature instead of bare ground, see Fig. 7).
Qualitative results. We visualize the qualitative comparison results in Fig. 7, where we both show the panoramic view and the rendered perspective views. For Infinigen, since it mainly uses limited procedural generators with randomized parameters, restricting its visual diversity and semantic coherence (e.g., the ice floes in the first row are monotonous, and the green trees in the last row are not aesthetically compatible with the entire scene). For DreamScene360, although it achieves consistent views with a panoramic lifting strategy, it lacks diverse scenery contents and also shows blurry artifacts (see the slanting floaters at the perspective view from the second and third row in Fig. 7) due to the instability of inpainting-based optimization and the limited resolution of 3D Gaussians. For WonderWorld, since it relies on outpainting to generate a complete world, it cannot ensure view consistency across different views and results in fragmented scenes.
LayerPano3D produces aesthetic and consistent results with DiT-based panorama generator, but is prone to blurry artifacts and obvious gaps at the layer boundary. By contrast, our method builds up the world with hierarchical alpha-textured proxies while considering the 3D coherence with agent-guided modeling, preserving consistent quality across views and delivering immersive scenery content.
We provide more examples of generated nature environments in Fig. 8.
User study. We conduct a user study to compare our method with others on the 18 generated scenes. We omit the comparison with LayerPano3D since it produces massive primitives that hinder VR rendering.
glacial lagoon,twilight sky
Figure 7: We compare our method with Infinigen [Raistrick et al. 2023a], DreamScene360 [Zhou et al. 2025], WonderWorld [Yu et al. 2025] and LayerPano3D [Yang et al. 2024d] based on the generated 3D scenes using identical text prompts, visualizing both panoramic and perspective views of the generated scenes.
Table 2: We perform user studies on the generated 3D scenes.
We gathered 50 participants, of whom 33 people have professional backgrounds in graphics or 3D modeling. Participants are asked to select their preferred scenes based on three aspects: Perceptual Quality, Realism & Coherence, and Textual Alignment. Ratios of preferred scenes for each method were calculated. As shown in Tab. 2, users consistently prefer our method over other baselines across all aspects, demonstrating the superior visual quality and textual alignment of our method.
Complexity of Representation and Runtime. We compare the complexity of scene representation and runtime performance on VR devices (Snapdragon XR2 Gen 2 platform). We calculate the average primitive counts and FPS of all scenes for each method. As shown in Tab. 1, methods using 3D Gaussians as representation (DreamScene360 and WonderWorld) generally achieve only 8-14 FPS even with foveated rendering, and scenes generated by LayerPano3D fail to launch on VR devices.
For Infinigen, since it generates a detailed world with intricate procedural geometry and materials from generators, it remains computationally expensive for real-time rendering.
In contrast, our method maintains a compact representation while preserving scene quality, achieving an average FPS of $^ { 7 9 + }$ on VR devices.
# 4.2 Ablation Studies
Geometric Adaptation. We first analyze the geometric adaptation strategy for projected terrain depth and fine-tuning of the conditioning network in terrain-conditioned texturing (Sec. 3.1). By ablating both strategies, the generated terrain texture fails to produce a plausible ground texture (water area on the bottom in Fig. 9 (a)). By enabling fine-tuning, the terrain texture precisely reflects the ground but with a monotonous appearance (see Fig. 9 (b)). By enabling geometric adaptation, the ground texture shows more detail (rocks on the bottom in Fig. 9 (c)). By enabling all the strategies, we can obtain terrain texture with fine-level details and natural world structure (see Fig. 9 (d)).
Figure 8: We present more examples of generated environments in panoramic and perspective views.
Figure 9: We analyze the geometric adaptation and fine-tuning of the conditional network for terrain-conditioned textur generation.
Figure 10: We inspect the efficacy of semantic grid-based analysis of our asset arranger by comparing it with random layout, LLM-based layout and naïve VLM-based layout.
Semantic grid-based analysis. We then evaluate the efficacy of the proposed semantic grid-based analysis for the asset arranger (Sec. 3.2). Specifically, we compare our method with different strategies, including random layout generation, LLM-based generation that outputs object coordinates directly, and naïve VLM-based generator that receives unmodified base world images. As shown in Fig. 10, the output of random layout incorrectly places trees on the lake (Fig. 10 (a). The layout generated by generic LLM and naïve VLM improves the coherence by providing compatible texture descriptions and plausible coordinates, but still suffers from inappropriate placements. By using semantic grid-based visual prompts as input for the VLM, our method generates a pleasant scene composition while addressing the placement issue.
Figure 11: We analyze the contribution of different scenery by ablating proxy scenery of different types.
Table 3: We perform the ablation study on the aesthetic improvement of adding proxy assets.
Aesthetic Contribution with Proxy Scenery. We also investigate the aesthetic contribution when adding generated proxy scenery into the base world. Specifically, we randomly select 10 generated scenes and remove midground or foreground assets for rendering, and then evaluate the aesthetic metric in Tab. 3 and visualize in Fig. 11. As shown in Tab. 3 and Fig. 11, the added scenery significantly improves the QA-Aesthetic [Wu et al. 2023] and CLIP-Aesthetic score and visual quality by enriching the base world with diverse elements and improving 3D volumetric impression.
Please refer to the supplementary material for more experiments. | Automatic creation of 3D scenes for immersive VR presence has been a
significant research focus for decades. However, existing methods often rely on
either high-poly mesh modeling with post-hoc simplification or massive 3D
Gaussians, resulting in a complex pipeline or limited visual realism. In this
paper, we demonstrate that such exhaustive modeling is unnecessary for
achieving compelling immersive experience. We introduce ImmerseGen, a novel
agent-guided framework for compact and photorealistic world modeling.
ImmerseGen represents scenes as hierarchical compositions of lightweight
geometric proxies, i.e., simplified terrain and billboard meshes, and generates
photorealistic appearance by synthesizing RGBA textures onto these proxies.
Specifically, we propose terrain-conditioned texturing for user-centric base
world synthesis, and RGBA asset texturing for midground and foreground
scenery.This reformulation offers several advantages: (i) it simplifies
modeling by enabling agents to guide generative models in producing coherent
textures that integrate seamlessly with the scene; (ii) it bypasses complex
geometry creation and decimation by directly synthesizing photorealistic
textures on proxies, preserving visual quality without degradation; (iii) it
enables compact representations suitable for real-time rendering on mobile VR
headsets. To automate scene creation from text prompts, we introduce VLM-based
modeling agents enhanced with semantic grid-based analysis for improved spatial
reasoning and accurate asset placement. ImmerseGen further enriches scenes with
dynamic effects and ambient audio to support multisensory immersion.
Experiments on scene generation and live VR showcases demonstrate that
ImmerseGen achieves superior photorealism, spatial coherence and rendering
efficiency compared to prior methods. Project webpage:
https://immersegen.github.io. | [
"cs.GR",
"cs.CV"
] |
# 1. Introduction
Nowadays, modern avionics systems are based on Integrated Modular Avionics (IMA), which simplifies software development and certification. The ARINC 653 standard [1], which specifies the Real-Time Operating System (RTOS) interface [2], has been widely accepted. One of the prominent concepts adopted by ARINC 653 is time-partitioned scheduling, which ensures the required separation of individual application components.
Implementing partitioned scheduling on new platforms naturally brings some challenges. To a certain extent, many of them have already been addressed, such as resource sharing [3], scheduling [4], and security [5]. However, the increasing demand for computing power in avionics applications brings new challenges related to thermal properties. Usually, recent avionics systems utilize modern and powerful Multi-Processor System-on-Chips (MPSoC) [6, 7] which require careful thermal management.
It is well known that overheating negatively affects system reliability and safety [8]. Moreover, the rise in the operational temperature of on-chip components may lead to irreversible permanent failures [9]. On the other hand, reducing the on-chip temperature leads to a decrease in leakage power, which nowadays accounts for a significant part of modern MPSoC power consumption [10, 11]. Many thermal-related issues have already been addressed individually, including power and thermal modeling [12, 13], design of energyefficient real-time scheduling algorithms [14, 15, 16], energy-efficient execution of redundant, safety-critical tasks [17, 18], and study of thermal behavior of hardware platforms [19, 20]. However, the combination of ARINC-653 partitioned scheduling and thermal management has not yet been addressed.
Throughout this paper, we address the problem of thermally efficient task allocation under ARINC653 temporal isolation windows on heterogeneous MPSoCs. We assume that the allocation is computed offline (in the design phase) and that Dynamic Voltage and Frequency Scaling (DVFS) is not used, as it increases failure rates [17] and therefore is typically forbidden by typical safety certification requirements [21]. Since the combination of thermal-aware offline scheduling with time-partitioning constraints has not been addressed before, we propose several optimization methods to allocate periodic workload on heterogeneous MPSoC while minimizing the steady-state on-chip temperature. We focus on the relationship between the selected thermal model and the optimization method that integrates it, and how the inaccuracy of the former affects the efficiency of the latter. As required by our industry partner, we emphasize a data-driven evaluation based on the measured characteristics of real physical platforms. Therefore, we conduct a series of experiments using three hardware platforms (I.MX8QM MEK, I.MX8QM Ixora, NVIDIA TX2) to assess the quality of the proposed methods.
In our experiments, we use an open-source ARINC-653-like Linux scheduler called DEmOS that we developed and publicly released (see https://github.com/CTU-IIG/demos-sched). It provides independence from proprietary avionics RTOSes. Furthermore, we make all measured data, as well as optimization methods, publicly available at https://github.com/benedond/safety-critical-scheduling. This allows easy reproducibility of our results and enables the use of the proposed methods in an industrial context, possibly with different hardware platforms.
# Contributions.
• We propose multiple optimization methods to tackle the problem of thermal-aware task allocation on heterogeneous MPSoC under ARINC-653 temporal isolation constraints, including two informed methods based on mathematical programming, two informed methods based on genetic algorithms, one local informed heuristic, and two uniformed heuristics.
• We analyze the trade-offs between the accuracy of the power model and the performance of the optimization method by integrating several power models with the proposed optimization methods. • We conduct physical experiments on three different hardware platforms demonstrating practical applicability of the results and showing that, across all tested scenarios and platforms, the empirical sum-max power model (SM) integrated within an Integer Linear Programming formalism outperforms the other methods in terms of the thermal objective. • We make all measured data and source code of proposed optimization methods publicly available.
This paper substantially extends our preliminary study [22] by: (i) introducing more optimization methods; (ii) extending the evaluation of the empirical SM power model; (iii) introducing a linear regression-based power model for comparison; (iv) conducting experiments on three hardware platforms; (v) introducing new benchmarks based on the industrial standard EEMBC Autobench 2.0; and (vi) overall extending the scope of the experiments.
Outline. The rest of this paper is organized as follows. Section 2 summarizes the related work. Section 3 describes the system model and formalizes the scheduling problem definition. The physical hardware used for the experiments and the benchmarking kernels are characterized in Section 4. Thermal modeling with regard to the allocation of the safety-critical tasks and implementation of specific models are addressed in Section 5. The main outcome – integration of the thermal models with the optimization procedures is discussed in Section 6. Experimental evaluation follows in Section 7, and finally, Section 8 concludes the paper.
# 2. Related Work
To the best of our knowledge, our paper is the first to address the unique challenge of combining avionicsinspired time-partitioned scheduling of safety-critical workloads with thermal issues on real heterogeneous hardware platforms. We have not come across any other work that has tackled this specific problem. In Section 2.1, we review the previous research on the use of temporal isolation windows as a method for reducing interference in real-time safety-critical systems, and in Section 2.2 the research on thermal-aware scheduling and optimization.
# 2.1. Temporal Isolation
Temporal isolation, which is a key concept in ARINC-653 standard, ensures that different tasks within the avionics system are executed in a predictable and deterministic manner, even in the presence of hardware or software failures. Additionally, the temporal isolation ensures that the system can meet real-time performance requirements, such as worst-case response times, even in the presence of changes in the system’s workload or environment.
Several works address the problem of scheduling under temporal isolation constraints. Han et al. proposed a model-based optimization method for addressing the temporal isolation of systems using the ARINC 653 standard [4]. Their method based on heuristic search was intended to minimize the processor occupancy of the system, making it possible to accommodate additional application workload. Tama¸s-Selicean and Pop researched the time-partitioned scheduling of safety-critical and non-critical applications in their paper [23]. They proposed an optimization approach based on Simulated Annealing to determine the sequence and length of partitions, assuming a fixed mapping of tasks to processing elements. In their subsequent work [24], they extended the problem to also optimize the allocation of tasks to processing elements, considering different criticality levels of the applications. To solve this problem, they proposed an optimization approach based on Tabu Search. Chen et al. [25] investigated the scheduling of independent partitions in the context of IMA systems. They proposed a Mixed Integer Linear Programming (MILP) model to represent the schedulability constraints of the independent partitions, where each partition was modeled as a non-preemptive periodic task. Additionally, the authors proposed a heuristic approach to determine a start time and processor allocation for each partition.
Even though the aforementioned works address scheduling under temporal isolation constraints, none of them aims for energy-efficient schedule. As energy consumption (thermal-efficiency) and timeliness are conflicting objectives, direct application of the proposed methods in the context of thermal optimization is not easily possible. We aim at designing such methods that would respect the temporal constraints while minimizing the on-chip temperature.
# 2.2. Thermal-aware Scheduling
Thermal-aware and energy-efficient scheduling for real-time systems has been studied for many years [26]. Even though the individual approaches differ in many aspects, there are several common steps – namely benchmarking, optimization and evaluation. Typical pipeline connecting these steps is illustrated in Figure 1. The decisions made at each step relate to the other steps and have an influence on the overall properties of achieved results. In the following, we describe each step and related decisions in more detail and review relevant literature.
Benchmarking. Any thermal-aware optimization algorithm requires information about the platform and tasks to be executed, referred to as platform characteristics and task characteristics. Platform characteristics describe the thermal and power behavior without relation to the workload and might include the area of the chip, power consumption w.r.t. selected frequency, and thermal conductances and capacitances. These parameters are often taken from technical documentation [27, 28] or pre-defined configurations provided with the simulation software like HotSpot [29, 30]. However, these may not accurately reflect reality as thermal parameters are often influenced by the printed circuit board (PCB) layout chosen by a particular board manufacturer and manufacturing variations. To obtain more accurate parameters, benchmarking can be used [31, 32]. Task characteristics allow for distinctions between workloads and their thermal effects, and are typically obtained through benchmarking. However, the complexity of task characteristics can vary; some works assume all tasks to be identical [33], while others assume a single numerical coefficient [34, 29], multiple coefficients [22, 35, 36], or even very complex characteristics obtained, e.g., from CPU performance counters [37], or neural networks [38]. Care must be taken when selecting characteristics as their benchmarking can be time-consuming. In this paper, we show that even simple characteristics can be sufficient for significant temperature reduction.
Figure 1: Three steps (bechmarking, optimization and evaluation) towards thermally efficient scheduling.
Optimization. The optimization procedure integrates the scheduling algorithm and the thermal model to allocate and schedule tasks while ensuring all other timing, thermal, and resource constraints are met. For safety-critical applications, such as in avionics, offline algorithms are often required [39]. The thermal model predicts the evolution of on-chip temperature in time, based on the system state and workload. From the viewpoint of the systems dynamics, the thermal model can be transient-state [32, 40, 14, 41] or steady-state [42, 35, 29, 43], where the former is more general, while the latter is simpler to implement. According to Chantem et al. [44], the steady-state model is sufficient if the temporal parameters of the workload are short enough, which is the case for our target applications in avionics. From the spacial point of view, the thermal model can be single-output [22, 45], which provides a prediction for a single thermal node only, or multioutput [32, 43, 46]. As we experimentally show, it is quite difficult to distinguish between the temperatures of the individual computing elements of our tested platforms. We therefore use a single-output thermal model. Considering the optimization procedures for task scheduling and allocation, many different approaches have been studied. Due to the inherent complexity of thermal-aware scheduling, authors often rely on local or greedy heuristics [43, 9, 46, 42, 47, 14, 18]. Other approaches include meta-heuristics [29, 48, 49], or formulations based on mixed-integer linear programming [29, 43, 44, 45]. In this paper, we use all these three approaches and compare them. Another aspect to consider is the interaction between the scheduling algorithm and the thermal model. In the literature, we can find the following approaches: (i) The thermal model is used only to validate whether the schedule provided by the scheduling algorithm can be executed under the given thermal constraints or not [32]; (ii) In more complex cases, the loop is closed, and the information about thermal constraints violation is fed back to the scheduling algorithm, which, in turn, tries to rebuild the schedule [43, 27]; (iii) Finally, the thermal model and the thermal constraints can be integrated directly within the scheduling algorithm, thus providing the most integrated solution [44, 22]. In this paper, we implement and evaluate all three approaches.
Evaluation. Evaluation of the properties and performance of the resulting schedule is typically conducted (i) in a simulator or (ii) by evaluation on a physical platform. Simulation-based approaches are found more often [43, 9, 46, 16, 30, 44, 47, 45, 17, 18]. The reasons justifying this approach include simpler execution of the experiments and better reproducibility of the results. On the other hand, simulation is always based on models, which might fail to properly capture all details of the hardware platform. Thus some authors evaluate thermal effects of the schedules experimentally on real hardware [50, 51, 52, 53, 32]. We believe that experimenting on real hardware provides a more accurate representation of the thermal behavior of a system due to its ability to reflect real-world conditions, take into account hardware variations, and allows observing interactions with other components. Therefore, we follow the experimental path in this paper.
# 3. Goal and System Model Formalization
In this section, we define the goal of the thermal optimization and formalize the system model and input parameters defining the scheduling problem of the thermal-aware safety-critical tasks allocation on MPSoC under temporal isolation constraints.
# 3.1. Goal
We want to find an assignment of tasks to CPUs of a heterogeneous multi-core platform together with an allocation of tasks to the temporal isolation windows such that the steady-state temperature of the platform is minimized.
There are at least three factors that make this problem complex. (i) All the tasks must be scheduled within the pre-defined scheduling hyper-period, which repeats indefinitely; therefore, any task allocation with the makespan exceeding the hyper-period is not feasible. (ii) The number of isolation windows and their lengths are not known a priori. (iii) The steady-state temperature depends on the thermal interference of the tasks running in parallel on different CPUs.
# 3.2. System Model and Input Parameters
We define our model and parameters as follows.
Model of Processing Elements. We assume a heterogeneous architecture, i.e., MPSoC having $m$ computing clusters (possibly of different hardware architecture) denoted by $\{ C _ { 1 } , C _ { 2 } , \dots , C _ { m } \} = { \mathcal { C } }$ . Cluster $C _ { k }$ has $c _ { k } \in \mathbb { Z } _ { > 0 }$ cores, which are assumed to be identical. All cores in the cluster share the same clock frequency, which we assume to be fixed due to typical safety requirements [21]. Different clusters can have different frequencies. We assume that the platform has so-called platform characteristics (features) denoted as $\mathcal { F } ^ { ( { p } ) }$ , which are obtained by benchmarking and characterize the platform’s thermal/power behavior.
Task Model. We assume a set of independent, non-preemptive, periodic tasks $\mathcal { T } = \{ \tau _ { 1 } , \tau _ { 2 } , \ldots , \tau _ { n } \}$ . By $e _ { i , k } \in \mathbb { Z } _ { > 0 }$ we denote the worst-case execution time of task $\tau _ { i } \in \mathcal { T }$ on cluster $C _ { k } \in \mathcal { C }$ . All tasks are ready at time 0 and have a common period $h \in \mathbb { Z } _ { > 0 }$ , which is called a major frame length. We assume that the deadline of each task is equal to period $h$ . Each task represents a single-threaded safety-critical process that needs to be executed on one of the platform cores. We denote the task characteristics associated with task $\tau _ { i }$ as $\mathcal { F } _ { i } ^ { ( t ) }$ . Note that both task characteristics and platform characteristics are platform-specific, i.e., they need to be obtained separately for each tested platform.
Temporal Isolation. The temporal isolation of the safety-critical tasks is ensured by so-called scheduling windows, which are non-overlapping intervals partitioning the hyper-period (see Figure 2) inspired by the ARINC-653 standard; we discuss more details in [22]. We denote the set of such windows as ${ \mathcal { W } } = \{ W _ { 1 } , W _ { 2 } , \ldots , W _ { q } \}$ . Length of window $W _ { j } ~ \in ~ \mathcal { W }$ is denoted by $l _ { j } ~ \in ~ \mathbb { Z } _ { \geq 0 }$ . Each task needs to be assigned to a single window, within which it will be executed at the core of one of the clusters. At most one task per core can be executed within each window. Note that the number of windows $q$ is not known a priori but can be upper-bounded by $n$ since the tasks are non-preemptive and so only one task will be present in each window in the worst case.
Used notation is illustrated in Figure 2 on a schedule of seven tasks.
Figure 2: Illustration of the used notation.
Figure 3: Embedded platforms used for the evaluation: (a) I.MX8QuadMax Multisensory Enablement Kit by NXP (I.MX8 MEK), (b) Toradex Apalis I.MX8 board (I.MX8 Ixora), (c) Nvidia Jetson TX2 Developer Kit (TX2).
# 4. Hardware Platforms and Benchmarks
As we discussed previously, we opt for an experimental evaluation instead of a simulation. To make the comparison of optimization methods and power models more representative, we conduct the benchmarking and evaluation phases on three platforms, which are briefly described in Section 4.1. Further, we describe the benchmarking kernels selected for the experiments in Section 4.2.
# 4.1. Physical Hardware for Evaluation
We selected modern high-performing MPSoCs for the evaluation, namely I.MX 8QuadMax by NXP [54] and Nvidia Tegra X2 T186 [55]. Both of these are based on ARM big.LITTLE heterogeneous architecture hosting two CPU clusters including so-called high-performing and energy-efficient cores.
The I.MX 8QuadMax features four ARM Cortex-A53 cores and two ARM Cortex-A72 cores. Each of the cores has $3 2 \mathrm { k B }$ data cache, and each cluster has 1 MB L2 cache. We set the clock frequency of each cluster to the highest values, which is 1200 MHz for the A53 cluster, and 1600 MHz for the A72 cluster, respectively.
Similarly, Nvidia Tegra X2 T186 hosts four energy cores and two high-performing cores, which are of ARM Cortex-A57 architecture and Nvidia Denver architecture, respectively. Each A57 core has $\mathrm { 3 2 k B }$ data cache, and each Denver core has $6 4 \mathrm { k B }$ data cache. The size of the L2 cache of each cluster is 2 MB. We set the clock frequency of both clusters to 2035 MHz.
In our testbed, we have two boards with I.MX8, namely I.MX8QuadMax Multisensory Enablement Kit (MEK) [56], and Ixora carrier board with Toradex Apalis I.MX8 module [57]. In the further text, these platforms are denoted as I.MX8 MEK and I.MX8 Ixora, respectively. Besides their different form factor and PCB layout, the first one has an Aluminum heat sink mounted on the chip while the latter has none, but we cool it by airflow from an external fan. In this way, the latter chip can be observed by a Workswell infrared camera [58]. We have extended both I.MX8 boards with external power meters. Besides I.MX8, we have
Nvidia MPSoC, which is mounted on NVIDIA Jetson TX2 Developer Kit carrier board [59]. We henceforth denote this platform simply as TX2. A part of our testbed is shown in Figure 3. The configuration and used sensors are described in more detail in [31].
# 4.2. Benchmarking Kernels
To mimic the safety-critical workloads used in avionics and other similar domains, we use a set of relatively simple applications (kernels) written in C. The set contains selected kernels based on EEMBC AutoBench 2.0 [60] together with custom memory stressing tool membench and OpenGL-like software rendering tool tinyrenderer [31]. We use tinyrenderer in two configurations – rendering boggie objects (-boggie) and diablo objects (-diablo).
AutoBench is a general-purpose benchmark set containing generic workload tests, as well as automotive and signal-processing algorithms. We use twelve of its kernels including: a2time (angle to time conversion), aifirf (finite impulse response filter), bitmnp (bit manipulation), canrdr (CAN remote data request), idctrn (inverse discrete cosine transform), iirflt (infinite impulse response filter), matrix (matrix arithmetic), pntrch (pointer chasing), puwmod (pulse width modulation), rspeed (road speed calculation), tblook (table lookup and interpolation), and ttsprk (tooth to spark). Each benchmark is used in two variants, i.e. -4K and -4M, representing two different input data sizes ( $4 \mathrm { k B }$ and 4 MB). Further information about the benchmarks can be found in [61].
Membench is a tool that stresses the memory hierarchy. It can be configured in many ways. We use it in three different configurations with respect to the working set size (WSS), i.e. -1K, -1M and -4M, representing WSS of $1 \mathrm { k B }$ , $1$ MB, and 4 MB, respectively. Further, we test both sequential (- $S$ ) and random (- $R$ ) memory accesses in both read-only (- $R O$ ) and read-and-write (-RW ) variants. Therefore, we have twelve membench kernels in our benchmark set.
Each of the kernels ( $1 2 \times 2$ autobench, 12 membench, 2 tinyrenderer) is wrapped inside of an infinite loop. A single iteration represents one execution of the kernel. We report the iterations per second (IPS) of each kernel (executed on a single core, without any interference) for each tested hardware platform in Table A.6 in Appendix A.
Figure 4 shows the relative speedup $s$ on a high-performing (big) cluster compared to the energy-efficient (little) cluster, i.e., the ratio between runtimes $e$ on these two clusters normalized by their frequencies $f$ . We calculate the relative speedup $s$ as:
$$
s = { \frac { \frac { e _ { \mathrm { l i t t l e } } } { f _ { \mathrm { l i t t l e } } } } { \frac { e _ { \mathrm { b i g } } } { f _ { \mathrm { b i g } } } } } = { \frac { \mathrm { I P S } _ { \mathrm { b i g } } f _ { \mathrm { b i g } } } { \mathrm { I P S } _ { \mathrm { l i t t l e } } f _ { \mathrm { l i t t l e } } } } .
$$
We observe that the big cluster of I.MX8 (TX2) platform is, on average, about $2 . 8 \times$ and (1.3 $\times$ ) more performant than the little one.
I.MX8 MEK I.MX8 Ixora TX2 \* 冈冈冈 Relative speedup s 4 冈区凶 冈图 图 X 区 3 区 卤风 区凶 闵 图凶 区区 区区区区 2 凶 冈 1 A X& 离幽△A 西区 太 欧 A ? X 0 1 canrdr-4K 1111 matrix-4K 曲 雕 i tblook-4K Benchmark
# 5. Thermal Modeling
A lot of attention must be paid when designing a thermal model. In our view, the main aspects of the thermal model to be considered and balanced are its simplicity and accuracy. A simpler model is easier to integrate with the optimization procedures. Also, it takes less effort to identify its parameters. However, a too simple model fails to predict the system’s behavior accurately. Therefore, the trade-off between simplicity and accuracy needs to be taken into account.
The rest of this section is divided into three parts. In Section 5.1, we experimentally justify the use of our thermal model. Then, we discuss the transition from thermal to power modeling in Section 5.2. Finally, in Section 5.3 we describe the specific power models that we later integrate with the optimization procedures.
# 5.1. Thermal Experiments with Used Platforms
To justify our thermal model selection (steady-state, single-output), we perform a set of experiments. First, we analyze the platform’s thermal dynamics in Section 5.1.1, and then we assess the relationship between the temperatures of little and big CPU clusters in Section 5.1.2.
# 5.1.1. Thermal Dynamics and the Major Frame Length
Based on the typical lengths of the major fame used in avionics applications (less than one second), we decided to have a steady-state thermal model. This decision is supported by the following experiment: a schedule containing two windows of the same length is created. These two windows constitute the major frame. In the first window, all the cores are loaded, executing some workload (here pntrch-4M ), whereas, in the second window, all cores are idling. We alternately execute these two windows and monitor the temperature and power consumption of the platform. We create three instances, which differ in the major frame length and Figure 5 shows them with different colors. The first (denoted with suffix -1s) has the major frame length equal to 1 s (each window is 500 ms long), the second (-10s) has the major frame length 10 s, and the third (-100s) has the major frame length 100 s. The resulting temperatures measured in the proximity of the big cluster are shown in Figure 5.
Figure 5: Influence of the on-chip temperature near the high-performing cluster on the major frame length (hyper-period) for three instances alternating between computing and idling.
Indeed, when the major frame length of the instance are long enough, such as in the -100s case, we clearly observe the heating and cooling curves corresponding to the individual scheduling windows. However, for our use-case (-1s), we see that the temperature is almost constant.
Note that the power consumption of both platforms based on I.MX8 is nearly the same; however, their thermal trajectories differ significantly due to different physical parameters (heat sink versus no heat sink, with/without airflow).
# 5.1.2. Spatial Distribution of On-Chip Temperatures
One of the decisions to take into consideration is whether to use a single- or multi-output thermal model. Based on the results presented in this section, we decided on a single-output model.
When scheduling a workload, different parts of the chip start to produce heat. Ideally, we would like to monitor the temperature of each core. However, per-core temperature monitoring might not be possible for many platforms, including ours. Our three platforms provide us with just several temperature sensors associated with the major thermal zones (little cluster, big cluster, PMIC, GPU, etc.). We visualize the temperatures measured for the pntrch-4M-100s benchmark (the one used in the previous section) near little and big clusters in Figure 6.
Figure 6: Temperatures obtained for pntrch-4M-100s from on-chip sensors near little and big cluster thermal zones.
We observe that the temperature difference on I.MX8 Ixora is smaller compared to I.MX8 MEK, because of the absence of a heat sink on the Ixora board and active cooling that is employed. Considering the TX2 platform, we observe that both thermal zones report the same value. This might be caused by a massive heatsink, combined with the imprecision of the sensors and their possible spatial proximity.
To further investigate the thermal behavior near the CPU clusters, we look at I.MX8 Ixora using the Thermal camera. We execute pntrch- $\it { 4 M }$ on all cores of each cluster and compare the resulting images. Figure 7 shows the spatial on-chip temperature $T ( x , y )$ , where the $x$ and $y$ coordinates are in pixels (each pixel corresponds to 0.29 mm). Also, we show the heat sources on a chip $h ( x , y )$ , where $h ( x , y ) = \operatorname* { m a x } \{ 0 , - \kappa \nabla ^ { 2 } T ( x , y ) \}$ is a positive part of negative Laplacian of $T ( x , y )$ scaled by factor $\kappa > 0$ , which follows from heat diffusion equation as explained in [62, 31].
Figure 7 shows that the big cluster is heating the platform much more (the peak of $h ( x , y )$ is about $2 . 5 \times$ higher) compared to the little one. Also, the left part of the figure shows how the on-chip heat spreader distributes the heat from the heat source to the borders of the chip. When only the little cluster is executing the workload, the difference between the individual cluster zones’ temperatures is nearly negligible. When the big cluster is performing the computations, the difference is more apparent, but still only about $1 ^ { \circ }$ C for this particular workload.
To summarize the observations: although we see some differences between the temperatures measured in the vicinity of the individual clusters, both of their thermal trajectories are similar, as seen in Figure 6. Due to the heat spreader and relative proximity of both clusters, the change of the temperature near one of the clusters influences the temperature near the other one as shown in Figure 7. Taking that into account, we decided to model only the temperature near the big cluster, which is thermally dominant.
Figure 7: Spatial on-chip temperature $T ( x , y )$ on the left and hot spots $h ( x , y )$ on the right of I.MX8 Ixora with little (A53) cluster stressed at the top, and big (A72) cluster stressed at the bottom.
# 5.2. Transition from Thermal to Power Model
A single-output steady-state thermal model of an MPSoC adopted in this paper is tightly related to a simpler model based on average power consumption. In some sense, they can be used interchangeably, but from the practical viewpoint, the difference is very important. Deviations in the ambient temperature cause deviations in the steady-state temperature. On the other hand, measuring the power input is usually more stable and easily reproducible. Further, the time needed to reach stabilized temperature can be rather long, whereas the power measurements reflect the immediate state.
Widely used methodology for creating thermal models of MPSoC relies on resistance-capacitance (RC) thermal networks [12, 63, 27]. The system is modeled as a set of thermal nodes, that are interconnected via thermal conductances and associated with thermal capacitances. The relation between the temperature of every thermal node, its power consumption, and the ambient temperature can then be expressed by a set of differential equations [63]:
$$
A T ^ { \prime } + B T = P + G T _ { \mathrm { a m b } } ,
$$
where $\eta$ is the number of thermal nodes, $\pmb { A } \in \mathbb { R } ^ { \eta \times \eta }$ is a matrix of capacitances, $\boldsymbol { B } \in \mathbb { R } ^ { \eta \times \eta }$ is a matrix of thermal conductances, $\pmb { T } \in \mathbb { R } ^ { \eta \times 1 }$ is a vector of temperatures at each node, $P \in \mathbb { R } ^ { \eta \times 1 }$ is a vector of power consumption of the nodes, and $G \in \mathbb { R } ^ { \eta \times 1 }$ is a vector containing the thermal conductance between each node and the ambient.
When the system reaches a steady state, $A \mathbf { T } ^ { \prime }$ becomes zero as the temperature remains constant in time. Then, considering a single thermal node only, $B$ , $G$ and $_ { P }$ become scalars (denoted by $B$ , $G$ , and $P$ ) and the whole system reduces to linear relation with respect to $P$ :
$$
T = \frac { 1 } { B } P + \frac { G } { B } T _ { \mathrm { a m b } } ,
$$
where $T$ is the steady-state temperature at the thermal node (here, at the thermal zone near the big cluster), and $P$ is the power consumption.
We, indeed, observe this linear relation in reality, as shown in Figure 8. There, we plot the average power and steady-state temperature of various benchmarks (both memory and CPU-bound) executed on our platforms. Clearly, both measured quantities are strongly correlated.
In the rest of the paper, we work with the power model instead of the thermal model, assuming that the final transformation from the average power to the steady-state temperature can be done according to (3).
Figure 8: Average power and steady-state temperature of various benchmarks executed on tested platforms.
Note that model (2) does not take into account the temperature-dependent leakage power, contrary to, e.g., Guo et al. [64]. While this might look like a significant drawback, our results in Section 7 show that even such a model is sufficient for temperature reduction when integrated within the optimization framework.
# 5.3. Power Models
Following the general discussion on thermal modeling, we continue with descriptions of specific models. As noted in Section 5.2, since the average power and the steady-state temperature are linearly related, we implement just the models estimating the average power consumption.
# 5.3.1. Empirical Sum-Max Model
First, we summarize the sum-max model (SM) that we proposed in [22]. The model is purely empirical; given a scheduling window with the allocated tasks, it predicts the average power consumed during the execution of such a window.
Specifically, given window $W _ { j }$ of length $l _ { j }$ and tasks allocation $( T _ { 1 } ^ { j } , \dots , T _ { m } ^ { j } )$ , where set $\mathcal { T } _ { k } ^ { j }$ represents tasks allocated to cluster $C _ { k }$ in window $W _ { j }$ , the SM model predicts the average power consumption $P ( W _ { j } )$ as:
$$
P ( W _ { j } ) = \sum _ { C _ { k } \in \mathcal { C } } \sum _ { \tau _ { i } \in \mathcal { T } _ { k } ^ { j } } \left( a _ { i , k } \cdot \frac { e _ { i , k } } { l _ { j } } \right) + \operatorname* { m a x } _ { C _ { k } \in \mathcal { C } } o _ { i , k } + P _ { \mathrm { i d l e } } ,
$$
where $P _ { \mathrm { i d l e } }$ is the idle power consumption of the platform, and $o _ { i , k }$ and $u _ { i , k }$ are task-specific coefficients obtained via benchmarking. The average power of a schedule consisting of multiple windows is calculated as a weighted average of their individual contributions (the weights correspond to the window lengths).
The model is built upon the assumption that power consumption of $z$ instances of task $\tau _ { i }$ executed independently and in parallel on $z \in \{ 1 , 2 , \dots , c _ { k } \}$ cores of cluster $C _ { k }$ can be expressed as $\left( z \cdot a _ { i , k } + o _ { i , k } + P _ { \mathrm { i d l e } } \right)$ .
Table 1: Idle power consumption $P _ { \mathrm { i d l e } }$ of tested platforms.
Coefficients $u _ { i , k }$ and $o _ { i , k }$ can be related to dynamic and static power consumption incurred by execution of task $\tau _ { i }$ on $C _ { k }$ . At the end of this chapter, we present a numerical example illustrating the calculation of the SM power model for one specific window.
Platform Characteristics. In the context of the modeling and optimization framework discussed in this paper, platform characteristics $\mathcal { F } ^ { ( { p } ) }$ constitute a single parameter only, which is the idle power consumption, $\mathcal { F } ^ { ( p ) } = ( P _ { \mathrm { i d l e } } )$ . The idle power consumption of each tested platform is listed in Table 1.
Task Characteristics. The sum-max model needs two coefficients for each task and cluster, i.e., task characteristics $\mathcal { F } _ { i } ^ { ( t ) }$ are represented by a four-tuple for each $\tau _ { i } \in \mathcal { T }$ , $\mathcal { F } _ { i } ^ { ( t ) } = \left( a _ { i , 1 } , o _ { i , 1 } , a _ { i , 2 } , o _ { i , 2 } \right)$ . Following our methodology introduced in [22], we identify the coefficients for all benchmarks on each tested platform. Their values are visualized in Figure 9 (and further listed in Appendix B). Note that the sum of $o _ { i , k }$ and $u _ { i , k }$ (i.e., the height of the bar in Figure 9) represents the increase the power consumption of the platform w.r.t. $P _ { \mathrm { i d l e } }$ when executing the benchmark on a single core of cluster $C _ { k }$ .
# 5.3.2. Linear Regression Model
The sum-max model was designed to estimate the average power consumption of a whole isolation window. Its simple form allows for relatively straightforward integration with the optimization methods, e.g., with Integer Linear Programming, as we have shown in [22]. However, the model may fail to provide accurate prediction when the tasks of very distinct lengths are present in the window. Specifically, the max term counts with the largest $o _ { i , k }$ only, which might represent one of the short tasks that possibly ends early in the window. In that case, the predicted power might overestimate the actual one.
To overcome this shortcoming, we designed another power estimation model based on linear regression, which was successfully used in the context of power consumption estimation [65]. Instead of estimating the average power of the whole window, we split the window into several intervals, within which each core either executes a single benchmark for the whole time or remains idle for the whole time. We call them processing-idling intervals. The situation is illustrated in Figure 10. Then, the model estimates the power consumption of each such interval. Similarly to the SM model, the overall average power consumption is then estimated by a weighted average of the intervals’ individual contributions.
Figure 9: Values of task characteristics coefficients $o _ { i , k }$ and $_ { a _ { i , k } }$ of tested benchmarks on little (top semi-axis) and big (bottom semi-axis) clusters.
Figure 10: Illustration of processing-idling intervals needed for the LR model.
Thanks to the decomposition into the processing-idling intervals, data acquisition and parameter identification of the model becomes easier since the timing and overlaps of the individual tasks do not need to be considered (each core is either processing or idling for the whole interval). A further advantage is that such a model can be used even if the temporal isolation constraints (windows) are not considered since essentially any multi-core schedule can be divided into such intervals (simply by projecting the start times and end times of tasks to the time axis; the intervals are then defined by every two consecutive projected time points). On the contrary, the integration of the LR model with the optimization might be harder since the lengths of the processing-idling intervals are not known a priori as they depend on the allocation of the tasks to windows.
To describe the linear regression model (LR), let us assume that we have some interval $I$ with allocated tasks. By $i ( k , r )$ we denote the index of the task, which is allocated in interval $I$ to cluster $C _ { k } \in \mathcal { C }$ and its core $r \in \{ 1 , \ldots , c _ { k } \}$ . If the core is idle, we assume $i ( k , r ) = 0$ (where $\tau _ { 0 }$ represents an idle task). Now, let us assume that behavior of task $\tau _ { i }$ on each cluster $C _ { k }$ is characterized by a vector of real numbers $\hat { \pmb x } _ { i , k }$ (e.g., $\pmb { \hat { x } } _ { i , k } = ( o _ { i , k } , a _ { i , k } ) )$ . We assume that idle task $\tau _ { 0 }$ is characterized by zero vector, $\hat { \pmb { x } } _ { 0 , k } = \mathbf { 0 } ~ \forall C _ { k } \in \mathcal { C }$ . Then, the average power consumption of interval $I$ can be estimated by LR model as:
$$
P ( I ) = \sum _ { C _ { k } \in \mathcal { C } } \sum _ { r = 1 } ^ { c _ { k } } \left( \hat { \pmb { x } } _ { i ( k , r ) , k } \circ \beta _ { k , r } \right) + P _ { \mathrm { i d l e } } ,
$$
where $\mathsf { O }$ is the scalar product operator and $P _ { \mathrm { i d l e } }$ is the constant intercept term. Note that when no task is executed (all are idle), the prediction is exactly $P _ { \mathrm { i d l e } }$ , which is the platform’s idle power consumption. In this general form, the regression coefficients $\beta _ { k , r }$ are possibly different for each core. However, we assume that all cores of each cluster are identical, so arbitrary permutation of tasks allocated to a single cluster should lead to the same power consumption. By this, we can simplify the model (5) to:
$$
\begin{array} { r l } & { \displaystyle P ( I ) = \sum _ { C _ { k } \in \mathcal { C } } \sum _ { r = 1 } ^ { c _ { k } } \left( \hat { \pmb x } _ { i ( k , r ) , k } \circ \beta _ { k } \right) + P _ { \mathrm { i d l e } } } \\ & { \quad \quad \quad = \displaystyle \sum _ { C _ { k } \in \mathcal { C } } \beta _ { k } \circ \left( \sum _ { r = 1 } ^ { c _ { k } } \hat { \pmb x } _ { i ( k , r ) , k } \right) + P _ { \mathrm { i d l e } } , } \end{array}
$$
where all cores of the single cluster have the same regression coefficients. For simpler understanding of the calculation of the LR power model, we present a numerical example at the end of this chapter.
Platform and Tasks Characteristics. Similarly to SM model, the LR model needs just the idle power $P _ { \mathrm { i d l e } }$ as the platform characteristic. On the other hand, the individual tasks characteristics now correspond to the elements of input vector $\hat { \pmb x } _ { i , k }$ . For simplicity and better comparison, we can use already identified coefficients $o _ { i , k }$ and $_ { a _ { i , k } }$ , which are also the only tasks characteristics used by SM model. In general, we could also include more characteristics obtained, e.g., by monitoring the performance counters during the tasks execution [37].
Identification of Regression Coefficients. To identify the regression coefficients, we created 1000 unique instances (each representing one interval $I$ ), which were randomly populated with the benchmarking kernels described in Section 4.2. Specifically, each interval was $\mathrm { ~ 1 ~ s ~ }$ long and contained from zero (all cores idling) up to 6 (all cores processing) kernels randomly picked from the set. All these instances were executed on all tested platforms (each for 180 s), and the average power consumption was measured. Detailed evaluation of the LR model and its comparison with the SM model is presented in Section 7.1. Here, we just report the identified coefficients (i.e., elements of vectors $\beta _ { k }$ for each cluster $C _ { k } \in \mathcal { C }$ ) in Table 2.
Table 2: Regression coefficients identified for all tested platforms and coefficient of determination $R ^ { 2 }$ .
Example. To illustrate the the calculation of both power models numerically, let us assume three tasks, as illustrated in Figure 10. The first task executes a2time- $\it { 4 K }$ kernel for $4 5 0 \mathrm { m s }$ at the little cluster ( $k = 1$ ). The second task, also assigned to the little cluster, executes canrdr- $\mathbf { \Psi } _ { 4 } M$ kernel for $\mathrm { 5 5 0 m s }$ . Finally, the third task, which is assigned to the big cluster ( $k = 2$ ), executes membench-1M-RO- $\boldsymbol { \cdot } \boldsymbol { S }$ kernel for $7 0 0 \mathrm { m s }$ . Hence, the length of the window is $7 0 0 \mathrm { m s }$ . Considering the I.MX8 MEK, the relevant task characteristics coefficients (see Appendix B) are: $a _ { 1 , 1 } = 0 . 2 5$ , $o _ { 1 , 1 } = 0 . 2 5$ , $a _ { 2 , 1 } = 0 . 4 1$ , $o _ { 2 , 1 } \ : = \ : 1 . 3 6$ , $a _ { 3 , 2 } = 1 . 2 4$ , and $o _ { 3 , 2 } = 1 . 2 2$ . The SM power model just takes the maximum offset coefficient (1.36) and adds the activity coefficients contributions of the individual tasks, i.e., the estimated power consumption is $P _ { \mathrm { i d l e } } + 1 . 3 6 +$ $\begin{array} { r } { \left( 0 . 2 5 \cdot \frac { 4 5 0 } { 7 0 0 } + 0 . 4 1 \cdot \frac { 5 5 0 } { 7 0 0 } + 1 . 2 4 \cdot \frac { 7 0 0 } { 7 0 0 } \right) \doteq 8 . 5 8 \mathrm { W } } \end{array}$ . To evaluate the LR model, the whole window is split to three processing-idling intervals. The first interval is $4 5 0 \mathrm { m s }$ long and all three tasks are executed during it. The second interval is 100 ms long and covers the second and third task. The last interval is 150 ms long, and covers only the third task. The power is estimated individually for each interval, and averaged out in the end. The linear regression coefficients are listed in Table 2. For the first interval, the prediction is $[ \beta _ { 1 , 1 } ( a _ { 1 , 1 } + a _ { 2 , 1 } ) + \beta _ { 1 , 2 } \cdot a _ { 3 , 2 } ] + [ \beta _ { 2 , 1 } ( o _ { 1 , 1 } + o _ { 2 , 1 } ) + \beta _ { 2 , 2 } \cdot o _ { 3 , 2 } ]$ , which is $[ 1 . 2 0 5 \cdot ( 0 . 2 5 + 0 . 4 1 ) + 0 . 9 6 9 \cdot 1 . 2 4 ] +$ $[ 0 . 2 7 0 \cdot ( 0 . 2 5 + 1 . 3 6 ) + 0 . 4 5 6 \cdot 1 . 2 2 ] \doteq 2 . 9 9$ . Since this interval lasts only 450 ms, this number is then averaged to $2 . 9 9 \cdot \frac { 4 5 0 } { 7 0 0 } \doteq 1 . 9 2 \mathrm { W }$ . The calculations for the other two intervals are analogous; we report just their averaged contributions, which are $0 . 3 7 \mathrm { W }$ and $0 . 3 8 \mathrm { W }$ , respectively. In total, the power predicted by LR power model is $P _ { \mathrm { i d l e } } + 1 . 9 2 + 0 . 3 7 + 0 . 3 8 = 8 . 1 7 \mathrm { W }$ . The visual comparison is shown in Figure 11.
Figure 11: Comparison of LR and SM power models on an example task set from Figure 10.
# 6. Optimization Methods
In Section 5, we revised the thermal modeling, and summarized two specific power models, namely SM and LR models. Further, we identified all the necessary platform and task characteristics. In this section, we develop optimization methods that incorporate the power models and solve the problem of safety-critical task allocation while minimizing the estimated power consumption. The resulting combination presents a novel solution of the mentioned problem.
First, Section 6.1 summarizes the optimizer based on Integer Linear Programming (ILP) and the SM model that we proposed in [22]. Second, we discuss the optimization based on the LR power model in Section 6.2. Finally, we introduce an informed greedy heuristic and uninformed idle-time optimizers used for further comparison in Sections 6.3 and 6.4, respectively.
# 6.1. ILP and Sum-Max Model
The original implementation that we proposed in [22] relies on a simple encoding of the scheduling problem to the ILP formalism. Binary variable $x _ { i , j , k }$ equals to $1$ if and only if task $\tau _ { i } \in \mathcal { T }$ is allocated to window $W _ { j } \in \mathcal W$ and cluster $C _ { k } \in \mathcal { C }$ . Then, all the resource constraints can be simply written as:
$$
\begin{array} { r l } { \displaystyle \sum _ { W _ { j } \in { \mathcal W } } \displaystyle \sum _ { C _ { k } \in { \mathcal C } } x _ { i , j , k } = 1 } & { \forall \tau _ { i } \in { \mathcal T } , } \\ { \displaystyle \sum _ { \tau _ { i } \in { \mathcal T } } x _ { i , j , k } \leq c _ { k } } & { \forall W _ { j } \in { \mathcal W } , C _ { k } \in { \mathcal C } , } \end{array}
$$
meaning that each task is assigned to some cluster and core and that the capacity of each cluster is respected. To model the length of the individual windows, continuous variable $\hat { l } _ { j }$ is introduced for each window $W _ { j } \in \mathcal W$ . The length is then linked to the assignment variables by
$$
\hat { l } _ { j } \geq x _ { i , j , k } \cdot e _ { i , k } \quad \forall \tau _ { i } \in \mathcal { T } , W _ { j } \in \mathcal { W } , C _ { k } \in \mathcal { C } ,
$$
and constrained by the major frame length $h$ :
$$
\sum _ { W _ { j } \in \mathcal { W } } \hat { l } _ { j } \leq h .
$$
Finally, the SM model (4) is linearized to fit the ILP formalism and rewritten to the objective function (11). The idle power $P _ { \mathrm { i d l e } }$ is not included in the objective since it is considered to be constant. The power consumption predictions are averaged over all windows with the weights corresponding to the windows lengths. The non-linear max term is (in each window $W _ { j } \in \mathcal W$ ) replaced by continuous variable $y _ { j }$ , which serves as its upper bound. When the solver reaches the optimum, this upper bound becomes tight. The link between $Y _ { j }$ and $x _ { i , j , k }$ is formed by big-M, where $M$ is a sufficiently large constant. The objective (11) represents the estimated average power consumption. The whole ILP model encoding the scheduling problem and integrating the SM model then becomes:
$$
\mathrm { I L P - S M : } \quad \operatorname* { m i n } \frac { 1 } { h } \sum _ { W _ { j } \in \mathcal { W } } \left( \sum _ { \tau _ { i } \in \mathcal { T } } \sum _ { C _ { k } \in \mathcal { C } } ( x _ { i , j , k } \cdot a _ { i , k } \cdot e _ { i , k } ) + y _ { j } \right)
$$
subject to:
$$
y _ { j } \geq o _ { i , k } \cdot \hat { l } _ { j } - M \cdot ( 1 - x _ { i , j , k } ) \quad \forall \tau _ { i } \in \mathcal { T } , W _ { j } \in \mathcal { W } , C _ { k } \in \mathcal { C } ,
$$
6.2. Optimization Based on the Linear Regression Model
Contrary to the SM model, which predicts the power for each isolation window, the LR model predicts the power for processing-idling intervals only. Direct integration within the ILP formalism proved to be quite laborious1; therefore, we followed two different paths. First, we neglect the processing-idling intervals and assume that each task executes for the whole duration of the window to which it is allocated. We call this simplified model as LR-UB. Then we can simply formulate the optimization as a quadratic programming optimization problem as shown in Section 6.2.1. Second, we use a different optimization framework, namely the black-box optimization based on a genetic algorithm, which can simply integrate the LR model as a part of the fitness evaluation. We describe this approach in Section 6.2.2.
Figure 12: Illustration of a simplified window with a single processing-idling interval only, as used by LR-UB.
# 6.2.1. Integration of LR to QP
We assume that all tasks allocated to one isolation window are executed for the whole length of the window. Therefore, the execution times of the individual tasks are assumed to be potentially longer than they are in reality. In consequence, processing-idling intervals are completely neglected (each window becomes a single processing-idling interval). In Figure 12, we illustrate the assumed extensions of the task execution times in gray (the original processing-idling intervals are shown in Figure 10). In a sense, the idea is similar to the max term in the SM model. We hope that by minimizing the upper bound instead of the original objective, we get a schedule that performs reasonably well in practice while keeping the formulation relatively simple.
Then, we follow the same steps as for the integration of the SM model with the ILP. We use the same binary variables $x _ { i , j , k }$ deciding whether task $\tau _ { i } \in \mathcal { T }$ is allocated to window $W _ { j } \in \mathcal W$ and cluster $C _ { k } \in \mathcal { C }$ . Constraints modeling the task allocation and the resource capacity and limiting the major frame length are the same as before, only the objective changes, as the power is now predicted by the LR model for each window. The whole model is now as follows:
$$
\begin{array}{c} \mathbb { Q } \mathbb { P } \mathrm { - } \mathrm { L R } \mathrm { - } \mathbb { U } \mathrm { B } \mathrm { : ~ } \begin{array} { l } { \displaystyle \frac { 1 } { h } \cdot \sum _ { W _ { j } \in \mathcal { W } } \hat { l } _ { j } \cdot \sum _ { \tau _ { i } \in \mathcal { T } } \sum _ { C _ { k } \in \mathcal { C } } x _ { i , j , k } \cdot \underbrace { ( a _ { i , k } \cdot \beta _ { 1 , k } + o _ { i , k } \cdot \beta _ { 2 , k } ) } _ { \star } } \end{array} \end{array}
$$
Clearly, the objective becomes quadratic (due to multiplication of $x _ { i , j , k }$ and $\hat { l } _ { j }$ ).2 Note that for all the tested benchmarking kernels (see Appendix B) and identified linear regression coefficients (see Table 2), the expression denoted by $\star$ in (13) is positive. Furthermore, $\star$ is zero for the idle task. Therefore, the objective (13) becomes an upper bound on the original LR value (by the original LR we mean the LR model, which does not neglect the processing-idling intervals).
# 6.2.2. Black-Box Optimizer
Instead of using Mathematical Programming and building a complicated model, we can find the solution using a conceptually different black-box optimization framework. The objective function is not given in a closed form, but only its outputs can be observed provided the inputs. For us, given the full tasks allocation (schedule), we can compute the average power consumption based on the LR model (or possibly any other model). The black-box optimization algorithm searches through the space of all allocations and tries to find the best one w.r.t the given fitness function (here the LR model).
There are many algorithms that can be used for black-box optimization, including, for example, Particle Swarm Optimization, Differential Evolution, or Genetic Algorithms. Some of these algorithms are already implemented in existing libraries, which are often optimized for speed, easily usable, and open-source. We use Genetic algorithm (GA) from Evolutionary package implemented in Julia [66]. We use standard twopoint crossover and BGA mutation; mutation and crossover rates are set to 0.2 and 0.8, respectively. The selection is done according to a uniform ranking scheme (discarding the lowest $1 0 \%$ of the population), and the population size is set to $5 0 \cdot | \tau |$ .
We represent the position of each task $\tau _ { i } \in \mathcal { T }$ in the schedule by continuous variable $x _ { i } \in [ 0 , 1 )$ . In order to optimize the allocation problem using the continuous variables, we introduce the following transformation: Each variable $x _ { i }$ is evenly split to $m$ intervals, i.e., for two clusters, we obtain interval $[ 0 , 0 . 5 )$ representing the allocation to the first cluster and interval [0.5, 1) representing the allocation to the second one. Each such sub-interval is then again evenly split to $q$ intervals representing the allocation to the individual windows.
Still, it might happen that allocation represented by variables $x _ { i }$ would be infeasible – either due to the major frame length (when allocated windows are too long) or due to resource capacity constraints (when too many tasks are allocated to the same window and resource). There are several ways to handle this issue. One option is to use such a black-box solver that supports constrained optimization (GA can do that). Another option is to introduce post-processing that would try to reconstruct some feasible solution from the infeasible assignment. Even though it would appear that solely the former option solves the problem, too many infeasible solutions slow down the convergence of the optimization algorithm. Therefore, we use both presented options – the former for the major frame length constraint and the latter for the resource constraints.
The post-processing (reconstruction) procedure is described by Algorithm 2 in Appendix C. Informally, the preferred allocation of the tasks is pre-computed based on the transformation described above. Then, starting from the first window, the allocation of the tasks is iteratively fixed. If the task cannot be added to the current window (i.e., the resource capacity would be exceeded), its preferred allocation is moved to the next window (in a cyclic manner). The iteration over all windows is repeated twice. If there are still some unassigned tasks or the major frame length is exceeded, the solution is discarded; otherwise, feasible allocation of the tasks to windows and clusters is returned.
The black-box optimizer iterates over many possible instantiations of $x _ { i }$ . Every time some instantiation is tested, the schedule (defined by allocations created by Algorithm 2) is reconstructed and evaluated by the LR power model. After a termination condition is met (e.g., time limit or iteration limit is exceeded), the best-so-far solution is returned. We execute the algorithm with a pre-defined time limit; if it terminates sooner, it is restarted from a random instantiation of $x _ { i }$ until exhausting the time limit.
# 6.3. Greedy Heuristic
As a reference method, we describe a greedy heuristic. Such heuristics are often used in on-line realtime scheduling algorithms due to their low computation demand. Contrary to all the previous methods based on ILP, QP, or black-box optimization, the greedy heuristic does not try to search through the whole optimization space (here, set of all possible allocations). Instead, the search space is intentionally restricted in order to decrease the computation time and improve the scalability.
The heuristic that we present is based on the works of Zhou et al. [14] and Kuo et al. [47], but its main idea is rather general and applicable in the wider context. The tasks are sorted by their energy consumption and processed one by one in a non-increasing order (the most energy-consuming task goes first). In each iteration, the currently processed task is assigned to the cheapest computing cluster (w.r.t. energy consumption). The assignment is done only if some feasible schedule exists even for all the remaining (still unprocessed) tasks, i.e., the assignment cannot be fixed if it would cause infeasibility.
For the tasks ordering, we use analogous methodology that is used in [14] (in Algorithm 1) – we can identify the parameter $\mu _ { i }$ used in [14] with task characteristic $u _ { i , k }$ since both of these parameters represent tasks dynamic power consumption to some extent. Then, the task $\tau _ { i }$ is assigned to cluster $C _ { k } \in \mathcal { C }$ that minimizes $u _ { i , k } \cdot e _ { i , k }$ (i.e., expected task energy consumption). Before each assignment, feasibility needs to be checked. When considering the temporal isolation windows, it becomes a bit tricky because these windows make the scheduling on the individual clusters and their cores dependent on each other (without the windows, the situation is much simpler since only the utilization bound needs to be checked).
To check the feasibility, we use a modified ILP model as presented in Section 6.1:
ILP-FEAS: min 0 subject to:
$$
\sum _ { W _ { j } \in \mathcal { W } } x _ { i , j , r ( \tau _ { i } ) } = 1 \quad \forall \tau _ { i } \in \mathcal { T } _ { \mathrm { f i x e d } }
$$
where $\tau _ { \mathrm { f i x e d } }$ represents the set of tasks with already fixed assignment and $r : \mathcal { T } \{ 1 , \ldots , m \}$ maps the tasks with fixed assignment to the index of their assigned cluster. The whole greedy heuristic is summarized in Algorithm 1.
Note that solving ILP-FEAS model is easier compared to ILP-SM as the solver can terminate after finding any feasible solution, whereas in the latter case, it needs to explore the whole search space somehow (iterating over multiple feasible solutions).
# 6.4. Optimizer Minimizing/Maximizing the Idle Time
Finally, we present two more optimizers, this time uninformed, i.e., not using any task or platform characteristics. These methods simply optimize the idle (i.e., non-processing) time within the major frame. We present them as ILP models.
First, the total processing time $t _ { \mathrm { p r o c e s s i n g } }$ , can be expressed in terms of variables $x _ { i , j , k }$ introduced in ILP-SM as follows:
$$
t _ { \mathrm { p r o c e s s i n g } } = \sum _ { \tau _ { i } \in \mathcal { T } } \sum _ { W _ { j } \in \mathcal { W } } \sum _ { C _ { k } \in \mathcal { C } } x _ { i , j , k } \cdot e _ { i , k } .
$$
Then the total idle time $t _ { \mathrm { i d l e } }$ within the major frame is
$$
t _ { \mathrm { i d l e } } = \left( h \cdot \sum _ { C _ { k } \in C } c _ { k } \right) - t _ { \mathrm { p r o c e s s i n g } } .
$$
Now, the first model, ILP-IDLE-MAX, maximizes the idle time in the hope that long idle periods allow for the platform to cool down. Also, a schedule with maximal idle time is beneficial from the perspective of the practitioner. Sometimes the instance changes and some more tasks need to be added for the execution; in such a case, schedules with long idle periods offer the space to do so. The model can be formalized as:
Algorithm 1: Greedy heuristic.
input : set of tasks $\tau$ , set of clusters $\boldsymbol { \mathscr { C } }$ , major frame length $h$
output: assignment of the tasks to resources
1 Function CheckFeasibility $\mathcal T _ { f i x e d } \subseteq \mathcal T , r : \mathcal T _ { f i x e d } \to \{ 1 , \dots , m \} \rangle$ is
2 if a feasible solution to ILP-FEAS with fixed tasks assignment of $\tau _ { \mathit { f i x e d } }$ given by r exists then
3 return true
4 else return false;
5 sort all tasks $\tau _ { i } \in \mathcal { T }$ by $\operatorname* { m a x } _ { C _ { k } \in \mathcal { C } } \{ a _ { i , k } \cdot e _ { i , k } \}$ in non-increasing order
6 $\mathcal { T } _ { \mathrm { f i x e d } } = \left\{ \right\}$
7 foreach task $\tau _ { i } \in \mathcal { T }$ do
8 Csorted ← sort $\mathcal { C }$ by $\operatorname* { m a x } _ { C _ { k } \in \mathcal { C } } \left\{ a _ { i , k } \cdot e _ { i , k } \right\}$ in non-decreasing order
9 foreach cluster $C _ { k } \in \mathcal { C } _ { s o r t e d }$ do
10 assign $\tau _ { i }$ to $C _ { k }$ , $r ( \tau _ { i } ) = k$
11 if CheckFeasibility( $\langle T _ { \mathit { f i x e d } } \cup \{ \tau _ { i } \} , r \rangle$ then
12 Tfixed ← Tfixed ∪ {τi}
13 break
14 if $\mathcal { T } _ { \mathit { f i x e d } } = \mathcal { T }$ then
15 return assignment of tasks to clusters and windows given by the solution of ILP-FEAS with fixed
cluster assignments defined by r
16 else
17 error: feasible assignment of tasks to resources does not exist
ILP-IDLE-MAX: max $t _ { \mathrm { i d l e } }$
subject to:
Contrary to that, the second model, ILP-IDLE-MIN minimizes the idle time. The idea is that longer execution time is typically associated with the little cluster (see Figure 4), which might is more powerefficient. The model is described as
ILP-IDLE-MIN: min tidle
subject to:
# 7. Experimental Evaluation and Results
To evaluate the described power models and the optimization methods, we conduct a series of experiments on three physical platforms introduced in Section 4.1. First, we assess the quality of the power models in Section 7.1. Next, we compare the optimization methods with respect to the capability of reducing the peak temperature (Section 7.2) and based on their computational complexity and scalability (Section 7.3).
# 7.1. Power Model Evaluation
For each tested platform, we generate one thousand instances of CPU-bound workload (all kernels except membench) and one thousand instances of mixed workload (all kernels including membench), that is two thousand distinct instances in total. The workload consists of a single periodically repeated window with a length of $\mathrm { ~ 1 ~ s ~ }$ . In that window, each CPU executes either nothing (probability 0.5) or a random kernel. The kernels are executed for duration uniformly selected from interval 1 ms to 1000 ms.
Each instance was executed for $6 0 \mathrm { s }$ ; this gives more than four days (100 hours) of measured data in total. The power consumption was sampled every 10 ms, and the average value was reported. Further, we calculated the predicted power by SM, LR, and LR-UB power models.
The results for mixed workload instances are shown in Figure 13. The instances are sorted by the measured power consumption on I.MX8 MEK. Table 3 shows the mean absolute error of all power models on both types of workload. We observe that the lowest prediction error is achieved by the linear regression (LR) model. In relative terms (w.r.t. the idle power consumption), its error is $4 . 3 \%$ , $4 . 5 \%$ and $1 3 . 3 \%$ for I.MX8 MEX, I.MX8 Ixora and TX2, respectively. The SM model performed slightly worse, with average relative error of $1 1 . 2 \%$ , $5 . 6 \%$ , and $1 6 . 0 \%$ for I.MX8 MEX, I.MX8 Ixora and TX2. Finally, the LR-UB model failed to deliver satisfactory predictions; its relative error is $2 4 . 3 \%$ , $1 9 . 3 \%$ , and $7 4 . 4 \%$ for I.MX8 MEX, I.MX8 Ixora and TX2.
Figure 13: Measured and predicted power consumption of 1000 testing instances (mixed workload windows); instances are sorted by I.MX8 MEK measured power consumption.
The trends are also clearly visible in Figure 13; SM model is more pessimistic than the LR model, which is expected due to the max term in Equation (4). However, it steadily provides an upper bound on the measured power consumption. Even though the LR-UB mostly provides an upper bound as well, it is not as tight.
# 7.2. Optimization Methods Comparison
Here, we discuss how well the power models integrate with the optimization. We compare the optimization methods on two types of workloads as in the previous section: CPU-bound and mixed. For each workload type, we construct six different instances. We generate 20 tasks; each of them executes a randomly selected kernel. Each task is assigned a randomly generated execution time on the big cluster in the range
Table 3: Mean absolute error (in Watts) of the tested power models.
able 4: List of compared optimization methods and corresponding power models
40 ms to $1 6 0 \mathrm { m s }$ . Execution time for the little cluster is scaled appropriately to perform the same work. The scaling coefficient is calculated from Table A.6. The major frame length $h$ is calculated as $\begin{array} { r } { h = \frac { n \cdot e } { \kappa } } \end{array}$ , where $e$ is the average execution time across all clusters, $n$ is the number of tasks (here 20), and $\kappa$ is empirical constant changing the tightness of the schedules (here set to 3.5).
For each instance, all optimization methods, as listed in Table 4, were executed to generate a schedule for each platform. For better comparison, we execute the black-box optimizer with both SM and LR power models. The solving time limit was set to 300 s per instance. The schedules found for the first instance are illustrated in Appendix D.
During the experiment, each schedule was executed on the respective platform for 30 min; this gives 42 hours of measured data per platform. We measured the average power consumption and steady-state temperature. The power offset ( $P _ { \mathrm { m e a s u r e d } } - P _ { \mathrm { i d l e , } }$ ) is reported in Table 5 (the rows are sorted by the average power consumption on I.MX8 MEK). The ILP-SM method achieved the best results in almost all cases. The difference from the lowest result is negligible in the few cases where it was not the best. Slightly worse, but still good results were obtained by the BB-SM method. One practical difference between these two methods is that ILP-SM requires an ILP solver (here commercial Gurobi solver) for its operation, while BB-SM can be implemented with freely available tools.
An interesting observation is that the best results are obtained with the SM power model. Recall that the most accurate power model was LR, not SM. We account that to the fact that even though the SM is systematically overestimating the power consumption, it is consistent in a sense that windows with higher predicted power consumption indeed consume more than windows with lower predicted power consumption.
Table 5: Power offset $( P _ { \mathrm { { m e a s u r e d } } } - P _ { \mathrm { { i d l e } } }$ [W]) observed for six instances with mixed workloads and six instances with CPUbound workloads on tested platforms.
Figure 14 shows the temperatures near the individual clusters averaged over all six instances. We observe that the difference between the worst- and the best-performing methods are $5 . 5 ^ { \circ } \mathrm { C }$ , $4 . 9 ^ { \circ } \mathrm { C }$ , and $3 . 6 ^ { \circ } \mathrm { C }$ , corresponding to $2 2 \%$ , $1 9 . 6 \%$ , and $1 4 . 4 \%$ differences relative to the ambient temperature ( $2 5 ^ { \circ }$ C) for I.MX8 MEK, I.MX8 Ixora, and TX2.
When comparing the greedy local heuristic HEUR with the ILP-SM method, the exhaustive ILP-SM can save, in average, up to $1 . 6 ^ { \circ }$ C, $1 . 3 ^ { \circ } \mathrm { C }$ , and $0 . 6 ^ { \circ } \mathrm { C }$ (corresponding to $4 . 7 \%$ , $4 . 6 \%$ , and $1 . 8 \%$ ) for I.MX8 MEK, I.MX8 Ixora, and TX2, respectively.
# 7.3. Performance evaluation
Finally, we evaluate the scalability of tested optimization methods. We study how the computation time increases with the increasing instance size corresponding to the number of tasks $n$ .
Mixed little big
40 I.MX8 MEK I.MX8 Ixora TX2
305 25 山 20 CPU-bound
Tmeasured −Tamb [◦C] 35 I.MX8 MEK I.MX8 Ixora TX2 30 25 20 3 ILP-IDLE-MIN QP-LR-UB ILP-IDLE-MAX G ILP-IDLE-MIN QP-LR-UB ILP-IDLE-MAX S GE ILP-IDLE-MIN QP-LR-UB ILP-IDLE-MAX Method
Ten instances are randomly generated for each $n \in \{ 5 , 1 0 , \ldots , 6 0 \}$ (120 instances in total). Each optimization method is then executed for every instance; as some of the methods might be rather time-demanding for larger instance sizes, we limit the maximum computation time per instance to $3 0 0 \mathrm { s }$ . We use the same generator as in Section 7.2, but we perform the experiment only with the characteristics based on I.MX8 MEK. The outcome would be quite similar for the other platforms.
The average computation times for different values of $n$ are shown in Figure 15. As the black-box optimizer (BB) is programmed to randomly restart each time it converges to some solution, it always consumes all the provided time. Besides, the models globally optimizing the schedule w.r.t. the provided objective, i.e., QP-LR-UB and ILP-SM, are the first to run out of time. Out of these two, the more complex model based on the quadratic programming (QP-LR-UB) is about $6 \times$ slower than ILP-SM on instances with 15 and 20 tasks. Comparing the global methods to the local one (HEUR) on instances with 20 tasks, we see that the global methods ILP-SM and QP-LR-UB need about 18 $\times$ and 95 $\times$ more time, respectively. Performance of the heuristic method (HEUR) is comparable with ILP-IDLE-MIN on instances with 30 and more tasks; for smaller instances, HEUR is a bit slower, especially due to the overhead caused by performing the feasibility check (solving ILP-FEAS) multiple times. Even though the ILP-IDLE-MAX scales the best, it fails to produce thermally efficient schedules, as shown in Section 7.2.
Figure 15: Average computation time of different methods w.r.t. the instance size $n$ .
# 7.4. Evaluation Summary
To summarize the results, the linear regression-based power model (LR) exhibited lower errors than the empirical sum-max model (SM), but it proved to be harder to integrate with the optimization methods. Its simplified variant LR-UB failed to provide a tight upper bound and therefore performed rather poorly.
Considering the optimization methods, global ILP-SM based on the integer linear programming and simpler SM power model provided overall best results. The black-box approach based on metaheuristics proved to be competitive as well; especially, it might be preferred for large-size instances, for which the integer linear programming fails to deliver high-quality solutions in reasonable time. Also, the BB approach is based on an open-source implementation of a genetic algorithm, which might be an advantage when compared to the other tested methods based on the commercial Gurobi solver. | Multi-Processor Systems-on-Chip (MPSoC) can deliver high performance needed
in many industrial domains, including aerospace. However, their high power
consumption, combined with avionics safety standards, brings new thermal
management challenges. This paper investigates techniques for offline
thermal-aware allocation of periodic tasks on heterogeneous MPSoCs running at a
fixed clock frequency, as required in avionics. The goal is to find the
assignment of tasks to (i) cores and (ii) temporal isolation windows while
minimizing the MPSoC temperature. To achieve that, we propose and analyze three
power models, and integrate them within several novel optimization approaches
based on heuristics, a black-box optimizer, and Integer Linear Programming
(ILP). We perform the experimental evaluation on three popular MPSoC platforms
(NXP i.MX8QM MEK, NXP i.MX8QM Ixora, NVIDIA TX2) and observe a difference of up
to 5.5{\deg}C among the tested methods (corresponding to a 22% reduction w.r.t.
the ambient temperature). We also show that our method, integrating the
empirical power model with the ILP, outperforms the other methods on all tested
platforms. | [
"cs.SE"
] |
# 1 Introduction
With the advent of foundation models, it has become of great interest to exploit and transfer their capabilities to other models, e.g. via knowledge or dataset distillation. The goal of knowledge distillation is to train a smaller student on a small amount of data derived from a teacher [26]; dataset distillation focuses on finding a minimal training set that achieves high performance, possibly modifying the training set via information derived from the teacher [6, 28, 27]. Early on, it was shown that transforming a teacher (ensemble)’s logits into soft labels can efficiently train a possibly smaller student model [5, 1, 12]. This simple mechanism has since been extended to various architectures and modalities in distillation, with numerous methods building on soft label matching as a core ingredient, see e.g. [10, 28, 26] and references therein. Qin et al. [22] recently asserted that soft label matching on its own is still competitive for modern vision architectures. However, despite some theoretical advances [21, 24, 16, 4, 9], it still remains unclear exactly what the “dark knowledge”[12] is that soft labels contain, and how to reliably quantify it.
Among the hypotheses on the regularizing benefits of soft labels [17, 29, 32], one line of reasoning suggests that they are effective because they encode structure reflective of the data distribution [21, 16]. This implies that soft labels should be especially useful when the data distribution involves low-rank patterns and compresses informative representations to manifolds that allow for generalizing solutions. These intuitions have been verified in experiments on natural image classification data, where the top-few teacher soft labels play a crucial role in achieving a performance that matches the teacher closely [22]. However, most of current large language and vision models involve not only generalizing skills but also the memorization of facts and associations [26]. Transferring the full range of a model’s capabilities, including both generalizing patterns and memorized facts, therefore requires an understanding of whether the soft labels convey both types of information.
While there is a long history on understanding the memorization capacity of neural networks theoretically [13], practical investigations are more recent [30, 15, 7], and there is comparatively little work on how memorized facts can be transferred from white-box teachers with soft labels. Dataset distillation aims to elicit specific knowledge and general skills from the teacher by creating dedicated training data, yet there has not been a clear focus on disentangling how different generalizing and memorizing skills are transferred. Most prior work focuses on generalization and structure, leaving open the question of how memorization behaves during distillation. To fill this gap, we ask:
Figure 1: Information leakage via soft labels. We examine fully connected networks with ReLU activations and $p = 1 0 0$ hidden neurons and biases. A teacher network is trained on 2-dimensional input data $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ with i.i.d. random uniform labels drawn from $\{ 1 , 2 , \ \}$ . (A) Visualizes the data with the teacher that achieves $1 0 0 \%$ accuracy. Then, teacher data is partitioned into two disjoint sets $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ at a $( 6 0 \% , 4 0 \% )$ ratio. We examine 2 settings: Training student networks via cross-entropy (B) on the class information only, making the student independent from the teacher, and (C) on soft labels obtained from the teacher via softmax on the logits. While the independently trained model only achieves trivial accuracy of $\sim 3 0 \%$ , students that fit the teacher’s soft labels achieve non-trivial test accuracy of $\sim 5 0 \%$ . Markers indicate data from the test set, and whether it was classified wrongly $( \times )$ or correctly (◦). We report averages over 5 initializations with the standard error on the mean. (D) The decision boundaries for teacher $A$ (black) and student $B$ (blue). In Appendix A we show another example for 20 classes.
Do the teacher’s soft labels encode memorized knowledge? – And if yes, can students pick up this non-trivial information?
To isolate the role of memorization in distillation with soft labels, we train teacher networks to perfectly fit a finite dataset of input–label pairs. We then distill their “memorized” knowledge into soft labels, to train students who see only a fraction of those pairs, the distilled dataset, and are evaluated on the held-out remainder. We apply this protocol both to (i) small transformers on structured algorithmic tasks and (ii) fully connected networks on random i.i.d. data. While in (i) we exploit delayed generalization to obtain memorizing teachers, (ii) has no latent structure by design which leads to teacher memorization. Despite its simplicity, the controlled memorization-only setting (ii) has, to our knowledge, not been studied previously in the distillation literature.
For both cases we answer our question positively: From training on the teacher’s soft labels a student can indeed learn non-trivial information about held-out memorized data. A simple visual example for knowledge distillation in two dimensions is shown in Fig. 1. We summarize our specific contributions below1:
• We demonstrate for both structured but memorized datasets and purely random i.i.d. data that students trained on teacher’s soft labels can consistently recover non-trivial – in some cases perfect – accuracy on data the teacher memorized but the student never saw.
• We show that this effect depends strongly on the temperature with which the soft labels are created from the teacher logits and can be interpreted as a regularizer that interpolates between fitting the teacher function and recovering only the ground-truth training labels.
• For random i.i.d. data, we show that in logistic regression, simple closed-form capacity and identifiability thresholds separate distinct leakage regimes, and that these thresholds extend to the multi-class case with similar qualitative behavior. For ReLU MLPs, the soft label memorizing and teacher-matching solutions are distinct; the student transitions from the former to the latter only once the teacher is identifiable, with a sudden jump in accuracy.
# 2 Related Work
Knowledge distillation. Soft labels have been a central component of knowledge distillation since its inception [5, 1, 12], and have been applied across a range of domains [11, 26]. Prior work has attributed their effectiveness to regularization effects [17, 29] or to their ability to encode statistical structure aligned with the data distribution [16]. These explanations typically assume that the teacher model reflects meaningful structure in the data. In contrast, we isolate the role of soft labels when the teacher has memorized unstructured data and student and teacher have matched capacity. Under these conditions, the student achieves non-trivial accuracy on held-out memorized examples, suggesting that soft labels can transmit memorized information beyond mere regularization. Moreover, we find that even weak statistical signals in soft labels can suffice to support generalization on memorized data.
Theoretical investigations have considered deep linear networks [21], learning theoretic analyses based on linear representations [4] and information-theoretic limits on knowledge transfer such as the knowledge retained per neuron [9, 31]. In a similar setting as ours, Saglietti and Zdeborova [24] analyze regularization transfer from teacher to student, but take a teacher as a generating model itself rather than letting it memorize a fixed dataset.
Dataset distillation. With dataset distillation one aims to construct a small set of synthetic inputoutput pairs that transfer both generalization ability and task-specific knowledge [25, 28]. These synthetic examples often lie off the natural data manifold and are difficult to interpret, yet they remain effective for training student models [27]. Similar effects have been observed with arbitrary transfer sets [19], where inputs are sampled in a class-balanced manner from the teacher’s domain. These findings suggest that effective distillation may depend less on input realism and more on whether the teacher function can be inferred from the supervision [6]. While we do not modify the input distribution, our analysis shows that when the data is sufficient to identify the teacher, and softmax temperatures are high, the student can learn the teacher functionally rather than merely class labels.
Memorization. Zhang et al. [30] famously showed that deep networks can fit completely random labels, demonstrating their large capacity to memorize arbitrary data. We extend this observation by studying how such memorized information can be transferred via distillation with soft labels. This is relevant for modern large language models which do not memorize their training corpus, but still require mastering factual recall [7]. However, memorizing additional facts incurs a linear cost in model parameters [15]. Bansal et al. [2] distinguish example-level and heuristic memorization, where the latter relies on shortcuts or spurious correlations , which is known to hurt generalization [3]. In our random data setup, correlations in the dataset arise only from its finiteness, and our analysis in the large data and parameter limit rules out any spurious effects that are not incurred by the teacher.
# 3 Notation and Experimental Setting
Data. We consider input-output pairs in a classification setting, where input coordinates are $\mathbf { x } \in \mathbb { R } ^ { d }$ and there are $c$ possible labels $y \in \{ 1 , \ldots , c \}$ . This dataset is available either through the finite set $\mathcal { D }$ (Section 4) or a generating model from which we can sample i.i.d. (Section 5).
We define a finite dataset of $n$ such samples (elements) from $\mathcal { D }$ as $\mathcal { D } _ { \star } ^ { \mathrm { T } } = \{ ( \mathbf { x } ^ { \mu } , y ^ { \mu } ) \} _ { \mu = 1 } ^ { n }$ . To evaluate generalization of the teacher we consider ${ \mathcal { D } } _ { \mathrm { v a l } }$ which is either $\mathcal { D } \backslash \mathcal { D } _ { \star } ^ { \mathrm { T } }$ or an independent sample. For knowledge distillation the teacher dataset $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ is randomly partitioned into two disjoint subsets: the student training set $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and the student test set $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ . We refer to $\rho = | \mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } } | / n$ as the student’s training data fraction.
Models and Training. All models we consider are parameterized functions $f _ { \theta } : \mathbb { R } ^ { d } \mathbb { R } ^ { c }$ that map inputs $\mathbf { x }$ to class logits $\mathbf { z } \in \mathbb { R } ^ { c }$ . Predictions are obtained by applying an argmax over the output logits. We use the cross entropy loss for supervised classification. For $\mathbf { y } \in \mathbb { R } ^ { c }$ being the one-hot encoded label vectors, the cross-entropy loss with temperature $\tau$ is
Figure 2: Information leakage via soft labels for structured data in transformers. (A) Loss curves for small transformers trained on the $3 0 \%$ of the modular addition task with $p = 1 1 3$ . The models 1, 2 and 3 are stopped after different training times. (B) Students with a matching architecture trained on the respective teachers (rows) with different softmax temperatures $\tau$ (columns). We show the students train and test error, and their accuracy on ${ \mathcal { D } } _ { \mathrm { v a l } }$ . For comparison, we show the teachers validation accuracy as a horizontal line, marked with a green star. Appendix B.1 describes architecture and training details. The same experiment is repeated for ReLU MLPs in Appendix B.3.
$$
\begin{array} { l } { { \displaystyle { \mathcal { L } } _ { \mathrm { C E } } ( \{ { \bf x } ^ { \mu } , { \bf y } ^ { \mu } \} _ { n } ) = - \sum _ { i } \sum _ { k = 1 } ^ { c } ( { \bf y } ^ { \mu } ) _ { k } \log \Big [ \sigma _ { \tau } \left( f _ { \theta } \left( { \bf x } ^ { \mu } \right) \right) _ { k } \Big ] \ , \ } } \\ { { \displaystyle \sigma _ { \tau } ( { \bf z } ) _ { k } = \frac { \exp \left( z _ { k } / \tau \right) } { \sum _ { j = 1 } ^ { c } \exp \left( z _ { j } / \tau \right) } . } } \end{array}
$$
To transfer the knowledge from a teacher $f ^ { \star }$ to a student $f _ { \theta }$ we train them using the teacher’s soft labels. This is achieved using cross-entropy loss, but instead of the ground truth one-hot vector $\mathbf { y } ^ { \mu }$ we use a given teacher network’s soft labels $\hat { \mathbf { y } } ^ { \mu } = \sigma _ { \tau } ( f ^ { \star } ( \mathbf { x } ^ { \mu } ) )$ . We train using the Adam optimizer [14] with full batches and default PyTorch settings [20].
Evaluation. The performance of the teacher and student is measured in terms of accuracy of the argmax over the logit outputs. We distinguish different measurables
• $\operatorname { a c c } _ { \star } ^ { \mathrm { T } }$ – teacher on $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ (memorization), $\mathrm { a c c _ { v a l } ^ { T } }$ – teacher on ${ \mathcal { D } } _ { \mathrm { v a l } }$ (T-generalization), $\mathrm { a c c _ { t r a i n } ^ { S } }$ – student on $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ (training), $\mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { S } }$ – student on ${ \mathcal { D } } _ { \mathrm { v a l } }$ (S-generalization). • acctSes – student on $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ (test),
When there is no structure in $\mathcal { D }$ , the best estimator is random guessing the classes at an accuracy of $1 / c$ for $\mathrm { a c c _ { v a l } ^ { T } }$ and $\mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { S } }$ .
# 4 Leaking Held-Out Memories when Data is Structured
To complement our 2D toy setting from Fig. 1, we now study whether the leakage of memorized information through soft labels also occurs in more realistic architectures and structured data. Specifically, we use the modular addition task and a single layer transformer following the analysis of grokking by Nanda et al. [18]. The authors showed that in this setting training exhibits two phases: Even though the teacher quickly learns to fit the training set, generalization to the task is delayed. This allows us to isolate two different settings: Teachers that memorize their training set without discovering structure, and those that generalize. From this, we examine how student learning varies based on what the teacher has learned and how this impacts the soft labels leakage of memorized information.
Memorization and generalization in modular addition. The modular addition task requires adding two integers $a , b \in [ 0 , p ]$ modulo $p$ . We consider the case where this task is available as a dataset of tuples with one-hot encoded tokens $x = ( a , b , p ) \in \{ 0 , 1 \} ^ { 3 p }$ with the label $y \in [ 0 , p - 1 ]$ . For our experiments we consider only the case where $p = 1 1 3$ , so that the size of the complete data distribution is $\bar { | } \mathcal { D } | = 1 1 3 ^ { 2 } = 1 2$ , 769. We train the teacher on $30 \%$ of this data, the set $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ $( n = 3 , 8 3 0 )$ . The other part of the original dataset $\mathcal { D }$ is kept aside for validation in ${ \mathcal { D } } _ { \mathrm { v a l } }$ . When training a student we split $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ into two disjoint sets $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ , where $| \mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } } | = \rho n$ . The set $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ tests for the held-out memorized samples. We train transformer architectures with a single layer (see Appendix B.1). To analyze teachers that have memorized the input data to different degrees, we stop the training at three different points, see Fig. 2(A). Qualitatively we use the teachers generalization on ${ \mathcal { D } } _ { \mathrm { v a l } }$ as a measure of how much structure from the data was already discovered. At 1 the least structure is known, slightly more at 2, and the teacher completely generalizes at 3. In the following, we use these teachers to train students via soft labels. We use different temperatures $\tau$ to generate the soft labels; results as shown in Fig. 2(B). Importantly, we observe that $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \bar { \mathrm { S } } }$ is always $1 0 0 \%$ , regardless of $\rho$ and $\tau$ .
fSorftelacbhelrs1,mwaey loebasekrvmeethmaotrfiozresdmianlfl $\rho$ , im.ea.tsiomnalilntrtarianinsgfosretms $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ , the student achieves higher $\tau = 1 0$ $\mathrm { a c c _ { t e s t } ^ { S } }$ (orange) than $\mathrm { a c c _ { v a l } ^ { S } }$ (dashed green). This means that indeed the soft labels are leaking some information on the training set $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ , that accuracy on $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ is higher than for ${ \mathcal { D } } _ { \mathrm { v a l } }$ . This indicates that the soft labels leak information specific to the teacher’s training set $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ and allow the student to recover held-out memorized samples, while they do not improve performance on ${ \mathcal { D } } _ { \mathrm { v a l } }$ similarly strongly. As the fraction of seen teacher data $\rho$ grows, $\mathrm { a c c _ { t e s t } ^ { S } }$ reaches 1.0, and $\mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { S } }$ approaches $\mathrm { a c c _ { v a l } ^ { T } }$ . A similar but more abrupt transition occurs for teacher 2 at the same $\tau = 1 0$ (middle row, left).
These results parallel our earlier observations from Fig. 1: For some $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ training on the teacher’s soft labels leads to non-trivial accuracy on $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ , which is importantly higher than that on ${ \mathcal { D } } _ { \mathrm { v a l } }$ (analogous to random guessing previously). Unlike the 2D case, however, here the student can perfectly generalize to the held-out $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ . At the same time, despite $5 \times$ longer training than the teacher, at $\tau = 1 0$ , these students fail to generalize to ${ \mathcal { D } } _ { \mathrm { v a l } }$ when distilled from the non-generalizing teachers 1 and 2. Instead they match $\mathrm { a c c _ { v a l } ^ { T } }$ . This shows that while soft labels can leak memorized inputs, they can also prevent the student from learning latent structure that undertrained memorizing teachers have not discovered.
Higher temperatures are more data efficient for fitting the teacher. At lower temperatures $\tau$ , where the soft labels resemble one-hot labels and contain less information about the teacher, the student can outperform the teacher and generalize to ${ \mathcal { D } } _ { \mathrm { v a l } }$ . As shown in Fig.2(B, right column), at $\tau = 0 . 1$ student performance even becomes independent of the teacher. The student either fails to generalize due to insufficient data (e.g., at $\rho = 0 . 7$ ), or exhibits delayed generalization (learning curves Appendix B.2). Only for larger $\tau = 1 0$ , learning from the generalizing teacher 3 requires less data with almost immediate generalization on ${ \mathcal { D } } _ { \mathrm { v a l } }$ and for the memorizing teachers 1 & 2 the students matches their function. This highlights that higher temperatures both improve data efficiency and convergence speed, and increase the leakage of teacher-specific memorized information.
# 5 Distillation for Data without a Latent Structure
In the previous section, we used $\mathrm { a c c _ { v a l } ^ { T } }$ as a proxy for the amount of teacher memorization. However, a low $\mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { T } }$ does not rule out that the model internally captures some underlying structure, even if it was not predictive. To isolate memorization in a controlled setting and to characterize the leakage behavior theoretically, we now consider a data model where there is no structure in the data a priori – analogous to the introductory example from Fig. 1: The entries of the input $\mathbf { x }$ are sampled i.i.d. from a Gaussian $x _ { i } \sim \mathcal { N } ( 0 , 1 )$ and the labels $y \in \{ 1 , \ldots , c \}$ are sampled uniformly and i.i.d. from $c$ classes. The inputs and labels are independent by design, so any teacher needs to memorize the finite dataset $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ , failing to generalize to ${ \mathcal { D } } _ { \mathrm { v a l } }$ .
In the following, we analyze logistic regression, where we can derive closed-form thresholds for the recovery of $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ in the high-dimensional limit. We consider its multi-class version and show how the same threshold scales in $c$ . To estimate the impact of more complex non-linear teachers we analyze leakage in one hidden layer ReLU MLPs.
Figure 3: Binary logistic regression. (A.1) We show the training accuracy of the teacher on $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ for $\rho = 0 . 8$ and the students training (A.2) and testing accuracies (A.3) of the student on the two partitions of $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ . While the teacher is trained via Adam on the logistic loss, the student solutions are obtained from the teacher logits on $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ using the pseudo-inverse. The thresholds $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ , $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S } } ( \rho )$ and $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } } ( \rho )$ are highlighted in green, pink and blue. (B) depicts the different regimes of teacher/student learning as a function of $\rho$ and the sample complexity $\alpha$ . The dimension is fixed at $d = 1$ , 600 and $n$ is varied. We distinguish whether the student fits $\bar { \mathcal { D } } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ with $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } } \geq 0 . 9 9$ (gray/blue/green) or not (red/orange). In the regime where it fits the $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ , gray implies that the student learns only close to trivial accuracy $( \mathrm { { { a c c } _ { t e s t } ^ { S } < 0 . 5 \bar { 5 } ) } }$ , blue that it is non-trivial $( \mathrm { a c c } _ { \mathrm { t e s t } } ^ { \mathrm { S } } < 0 . 9 9 )$ and green is perfect $( \mathrm { a c c } _ { \mathrm { t e s t } } ^ { \mathrm { S } } \geq 0 . 9 9 )$ . We measure the MSE loss directly on the teacher logit (see Appendix C.1) to evaluate whether the student learned the teacher (orange) or not (red) – with a threshold set at 0.1.
Finally, we show that a teacher GPT-2 model [23] fine tuned on a dataset of randomly associated sequences of tokens and classes can also exhibit non-trivial test accuracy $\mathrm { a c c _ { t e s t } ^ { S } }$ on held-out sequences.
# 5.1 Multinomial Logistic Regression
For multinomial logistic regression we consider linear models $f _ { \mathbf { W } } ( \mathbf { x } ) = \mathbf { W } \cdot \mathbf { x }$ with $\mathbf { W } \in \mathbb { R } ^ { c \times d }$ that are trained via cross-entropy, known as multinomial logistic regression or softmax regression. In knowledge distillation this limits us to a setting where teacher and student architecture match.
Formal analysis: leakage in logistic regression. We first consider the case of only two classes, logistic regression2. When we have direct access to the logit, the problem of recovering the teacher weights $\mathbf { W }$ under the square loss is equivalent to solving an (over- or under-parameterized) least squares problem, by means of the pseudo-inverse of the input matrix with the logits, i.e., $\widehat { \mathbf { W } } = \mathbf { X } ^ { + } \mathbf { z }$ where $\mathbf { X } \in \mathbb { R } ^ { n _ { \mathrm { t r a i n } } ^ { s } \times d }$ ; $\mathbf { z } = f _ { \mathbf { W } } ( \mathbf { X } ) \in \mathbb { R } ^ { n _ { \mathrm { t r a i n } } ^ { s } }$ and $n _ { \mathrm { t r a i n } } ^ { s } = | \mathcal { D } _ { \mathrm { t r a i n } } ^ { \bar { \mathrm { S } } } |$ .
We consider different sample complexities $\alpha = n / d$ . Fig. 3(A) shows the accuracy of the teacher on $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ , and the train and test accuracies of the student on $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ as a function of $\alpha$ at fixed training set size $\rho = 0 . 8$ . We observe that the $\mathrm { a c c _ { \star } ^ { T } }$ and $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ start decaying from 1.0 at a given $\alpha$ . The test accuracy grows monotonically in $\alpha$ from the trivial random guessing accuracy up to perfect accuracy, and at some point it decreases again. The general phenomenology concentrates for large $d$ and $n$ , as a function of $\rho$ , resulting in three thresholds that can be defined in terms of $\alpha = n / d$ :
$\alpha \leq \alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ – teacher memorization capacity: The teacher can fit all input-class pairs in $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ . In the proportional limit when $d , n \infty$ , Cover’s Theorem [8] states that $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } } \leq 2$ .
$a \geq \alpha _ { \mathrm { i d } } ^ { \mathrm { S } } ( \rho )$ – identifiability threshold: The student can identify the teacher using the logits, measured through the mean squared error loss on the teacher logits, which occurs at $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } } = 1 / \rho$ , as the input matrix $\mathbf { X }$ becomes invertible.
$\alpha \leq \alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S } } ( \rho ) ~ \mathrm { - }$ student memorization capacity: The student can fit all data from $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ via the input-logit pairs from the teacher.
For finite sizes, we observe that the teacher memorization capacity $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } } ( d = 1 6 0 0 ) \simeq 1 . 9 6$ is already close to the infinite $d$ limit of $\alpha = 2$ . Beyond this threshold, the student cannot fit $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ perfectly anymore, as it is not memorized by the teacher and information is corrupted. However, when the teacher does memorize $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ perfectly, the student obtains perfect accuracy on $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ through the logit training set. In this case, we observe that the logits contain a weak signal on the other held-out memorized data and allow the student to obtain $\mathrm { a c c _ { t e s t } ^ { S } \geq 5 5 \% }$ for large enough $\alpha$ and $\rho$ , as shown in Fig. 3(A.3); some information on the held-out data is leaking.
Figure 4: Impact of (temperature $\tau$ | number of classes $c$ ). (A) For the setting with $c = 2$ possible classes and $d = 1 0 0 0$ , we show the capacity and learning thresholds $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ (green), $\bar { \alpha } _ { \mathrm { l a b e l } } ^ { \mathrm { S } }$ (pink), $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } }$ (blue) and $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S - s h u f f l e } }$ (red) as a function of the softmax temperature $\tau$ and the sample complexity $\alpha$ ; accuracies are reported in Appendix C.2. (B) We take the number of classes as $c = \{ 2 , 5 , 1 0 , 2 0 , 3 0 , 4 0 , 5 0 \}$ , the larger the darker the color, and give train and test accuracies in varying scales $\alpha \cdot \{ c , { \sqrt { c } } , \log c \}$ . Here $\rho = 0 . 5 5$ is fixed and $d \in \{ 1 0 0 , 1 0 0 0 \}$ is varied for computational efficiency depending on $\alpha$ and $\tau = 1 0$ .
In terms of $\alpha$ , $\mathrm { a c c _ { t e s t } ^ { S } }$ grows monotonically, e.g. for $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } } ( \rho = 0 . 8 , d = 1 6 0 0 ) \simeq 1 . 2 6$ , where reaches $\mathrm { a c c _ { t e s t } ^ { S } \geq 0 . 9 9 }$ – even though a fifth of the memorized data that was held-out. This means that the student can indeed recover the hidden memorized data by recovering the teacher weights $\mathbf { W }$ .
Fig. 3(B) shows the different phases can be delineated as a function of $\alpha$ and $\rho$ for a finite fixed $d = 1 6 0 0$ : low/no leakage where $\mathrm { a c c } _ { \mathrm { t e s t } } ^ { \mathrm { S } } \bar { < } 0 . 5 5$ , weak leakage of information $\mathrm { a c c } _ { \mathrm { t e s t } } ^ { \mathrm { S } } \in ( 0 . 5 5 , 0 . 9 9 )$ , fTull recovery of the held-out memorized data $\mathrm { a c c _ { t e s t } ^ { S } \geq 0 . 9 9 }$ and failed teacher memorization beyond $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ . We can separate the latter regime into two depending on $\rho$ and $\alpha$ , whether the student is able to recover the (non-memorizing) teacher or not, depending on $\mathbf { X }$ ’s invertibility.
The impact of temperature on memorization. In practical knowledge distillation with more expressive networks one cannot simply invert but instead one minimizes the cross-entropy loss on soft labels via gradient methods. Creating soft labels from a students logits requires choosing a temperature $\tau$ in the softmax function (2). With $\tau 0$ one recovers the one hot encodings of the labels and thereby destroys any information that would have been embedded by the teacher. At the other limit, when $\tau \to \infty$ , the soft labels become uniform and information about the labels and the teacher is destroyed.
For the case of multinomial regression with two classes Fig. 4(A) shows the relevant thresholds in terms of $\alpha = n / ( d c )$ and on the temperature $\tau$ for a fixed $\rho = 0 . 8$ (for accuracies see Appendix C.2).
Next to $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ , $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } } ( \rho , \tau )$ , and $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S } } ( \rho , \tau )$ , we introduce another threshold, $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S - s h u f f l e } } ( \rho , \tau )$ , derived from a controlled experiment. For each input x with class $y$ in $\mathcal { D } _ { \mathrm { t e s t } } ^ { \mathrm { S } }$ , we assign a soft label sampled from a different teacher input $\mathbf { x } ^ { \prime }$ within the same class $( y = y ^ { \prime } )$ ). This procedure preserves the correct class identity – the highest soft label entry still corresponds to $y$ – but removes any teacher-specific information about $\mathbf { x }$ . As a result, the student sees noisy supervision: It is class-consistent but the correlation between the rest of the soft label and input is broken. We then define αlSa-bsehluffle $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S - s h u f f l e } } ( \rho , \tau )$ as the point at which this noise prevents the student from learning the class signal.
In Fig. 4(A), we observe that $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S - s h u f f l e } }$ transitions from $\alpha _ { \mathrm { i d } } ^ { \mathrm { S } }$ to $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ as $\tau$ increases. This supports interpreting $\tau$ as a hyperparameter that shifts the training objective between fitting soft labels and teacher function $( \mathrm { h i g h } \tau )$ and recovering class identity (low $\tau$ ).
Multiple classes $c > 2$ . As the number of classes increases, the student has a $c$ -sized soft label available per training sample, which can contain information about other samples. At the same time, the model size of both teacher and student scales with a factor of $c$ . We observe empirically that the behavior for several classes is consistent with that for two classes: The student can learn non-trivial information about held-out memorized samples and achieve up tp $1 0 0 \%$ accuracy from the soft labels.
In Fig. 4(B) we observe the scaling behavior of the four relevant thresholds in terms of the number of
classes $c$ for a fixed $\rho = 0 . 8$ and $\tau = 1 0$ , leading to
$$
\alpha _ { \mathrm { i d } } ^ { S } \sim 1 / c ; \alpha \qquad \alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } } \sim \alpha _ { \mathrm { l a b e l } } ^ { S } \sim 1 / \log c ; \alpha \qquad \alpha _ { \mathrm { l a b e l } } ^ { S \setminus \mathrm { s h u f l e } } \sim 1 / \sqrt { c } .
$$
Naturally, only the scaling of $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ is independent of $\rho$ and $\tau$ . Specifically the scaling of $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { S } }$ at $\tau 0$ should arrive at $\alpha _ { \mathrm { l a b e l } } ^ { \mathrm { T } }$ . Nonetheless, the order of the thresholds in $\alpha$ remains the same, retaining the original dependence. In Appendix C.2 we confirm this for varying temperatures and a fixed $c = 1 0$ , where the phenomena are consistent with $c = 2$ .
# 5.2 Two Mechanisms for Leaking Memorized Information in ReLU MLPs
In this section, we show that ReLU MLPs already exhibit more complex behavior for the same random uncorrelated inputs and labels as before than the multinomial regression case. In Fig. 5 we consider a matched teacher and student, ReLU MLPs with a single hidden layer, with $c = 1 0 0$ classes. On the $x$ -axis we vary the fraction $\rho$ of $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ that is observed by the student. In Fig. 5(A), the $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathrm { a c c _ { t e s t } ^ { S } }$ exhibit a similar phenomenon as before for the logistic regression in Fig. 3(D): While the student memorizes its own training set perfectly, the accuracy $\mathrm { a c c _ { t e s t } ^ { S } }$ on the held-out data is non-trivial and increases monotonically as more and more data from $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ is available. However, at a higher sample complexity shown in panel (B) of the same figure, we observe two new phenomena: A first observation is the pres
Figure 5: Both teacher and student are MLPs with a single hidden layer of size $p = 5 0 0$ and ReLU activations. The inputs are $d = 1 0 0 0$ and $c = 1 0 0$ . The teacher successfully memorizes a training set $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ of size 25,000 (A) and 100,000 (B). We track the accuracies onDb⋆oth via $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathrm { a c c _ { t e s t } ^ { S } }$ , with the standard error on the mean reported for 10 runs. In all cases shown here, some information is leaked statistically, allowing the student to surpass trivial performance on data not seen by the teacher $( \mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { S } } )$ , in some cases reaching up to $1 0 0 \%$ test accuracy.
ence of a phase where $\mathrm { a c c _ { t e s t } ^ { S } }$ slowly drops while the teacher accuracy $\operatorname { a c c } _ { \star } ^ { \mathrm { T } }$ remains perfect. Meanwhile, $\mathrm { a c c _ { t e s t } ^ { S } }$ is lower than for the same $\rho$ at lower sample complexity $\alpha$ . This is inconsistent with the previous observation, where larger $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ helped identifying the teacher better and therefore led to higher accuracy. A second observation is the marked jump after the drop in $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ , where both $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathrm { a c c _ { t e s t } ^ { S } }$ immediately rise to $1 0 0 \%$ accuracy.
Memorization fails before teacher identification succeeds: $\alpha _ { \mathbf { l a b e l } } ^ { \mathbf { S } } < \alpha _ { \mathbf { i d } } ^ { \mathbf { S } }$ . To understand these phenomena better, we turn to a more complete picture of the phase space in Fig. 6(A). Next to the regions already identified for the logistic regression in Fig. 3(B), we split the regions where a weak leakage is detected into two parts: The one where the student perfectly learns $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and the one where the student does not memorize the training data. In Fig. 6(B.2) it is further visible that $\mathrm { a c c _ { t e s t } ^ { S } }$ decreases before it increases as a function of the sample complexity $\alpha$ . To understand this behavior we observe $\mathrm { a c c _ { t r a i n } ^ { S } }$ and $\mathrm { a c c _ { t e s t } ^ { S } }$ as functions of training time in Fig. 6(C.3). There, a sudden jump in train and test accuractyraoinccurs as a function of student training time at around $t \sim 1 0 0$ . While before the jump, the training and testing accuracy are at different levels (and already non-trivial for the student), they jointly jump to $1 0 0 \%$ accuracy. In Appendix C.3 it is shown that this jump coincides with a drop in the CE loss on the teacher distribution and that just before the transition, $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ approaches that of a student trained on intra-class sampled soft labels. This suggests that for ReLU MLPs, there may be two distinct weight configurations: One where the student memorizes the soft labels, and another where it functionally matches the teacher. This distinction was not present for multinomial logistic regression.
Figure 6: Leakage in 1-hidden layer ReLU networks. The teacher and student architectures match as single hidden layer ReLU networks with $p = 5 0 0$ for varying settings of sample complexity $\alpha = n / ( d c )$ and student training fractions $\rho$ . The number of samples $n$ is changed while $c = 1 0 0$ and $d = 1 0 0 0$ and the temperature $\tau = 2 0$ are fixed. Each experiment is repeated 5 times and average accuracies are reported. (A) Different regimes distinguish the type of generalization the student achieves: (blue) weakly with memorization of $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ ; (light blue) weakly but without memorization of $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ ; (green) perfectly generalizing to held-out memorized data; (orange) the teacher cannot memorize $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ but the student fits the teacher nonetheless; (red) the teacher cannot fit $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ and the student does not discover the teacher either. (B) $\operatorname { a c c } _ { \star } ^ { \mathrm { T } }$ , $\mathrm { a c c } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and $\mathrm { a c c _ { t e s t } ^ { S } }$ . (C) Accuracy as a function of training time $t$ for fixed $\rho = 0 . 6 5$ and different sample complexities $\alpha$ as marked with white circles in (A) and (B), varying $n$ and keeping $d = 1 0 0 0$ . For comparison, we show averages of $\operatorname { a c c } _ { \star } ^ { \mathrm { T } }$ and $\mathrm { a c c } _ { \mathrm { v a l } } ^ { \mathrm { S } }$ at the end of training as horizontal lines.
Memorizing the soft labels vs. generalizing on the teacher function. These observations suggest that the student can learn two functionally different solutions that both leak information about held-out memorized data, but differently: One solution memorizes the teacher’s soft labels representing $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and another generalizing solution matches the teacher functionally. This extends the picture from the multinomial regression, in that not only weakly (and fully) learning the teacher function leads to non-trivial leakage on the held-out set, but also a solution that truly memorizes the soft labels can capture some additional structure on held-out data. Whether one or the other solution is learned depends non-trivially on the respective capacity thresholds, the algorithm, and the ratio between the teacher and student capacity. hidden layer size $p$ and the relative capacity of teacher and student in an unmatched setting impact $\mathrm { a c c _ { t e s t } ^ { S } }$
Localizing the information in the soft labels. In Appendix C.4 we test the effect of removing an input class $c _ { i }$ from $\mathcal { D } _ { \mathrm { t r a i n } } ^ { \mathrm { S } }$ and removing it from the soft labels by zeroing it out for all other classes $c _ { j } \neq c _ { i }$ . We find that while removing the inputs can still lead to a non-trivial accuracy on $c _ { i }$ in DtSest, removing the corresponding soft label entries is detrimental for test performance. Likewise, zeroing out the smallest $k$ values in every soft label negatively affects $\mathrm { a c c _ { t e s t } ^ { S } }$ . This leads us to hypothesize that the common practice of using only the top- $\mathbf { \nabla } \cdot k$ largest values may not allow for generalizing on the memorized information.
# 5.3 Dataset Distillation for Finetuned GPT-2 Classifiers on Random Sequences
In order to test whether these phenomena extend to random sequence data, we examine a similar setting with a GPT-2 architecture [23]. We consider sequences $x = ^ { \ast } 4 2 9 _ { - } 3 5 0 7 _ { - } 3 4 5 ^ { \prime }$ , where each sequence concatenates three random numbers sampled uniformly and i.i.d. between 1 and 1000, with a random class $y$ out of 1000 possible classes. In our setting $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ contains 6000 samples of such sequences and their classes. We equip the next-token prediction backbone GPT-2 with a linear classifier head. We use the standard tokenizer and train the teacher on $\mathcal { D } _ { \star } ^ { \mathrm { T } }$ for 100 epochs using AdamW with a learning rate of $5 \times 1 0 ^ { - 4 }$ .
After successful training, when the teacher memorizes the sentences with $X X \%$ accuracy, we extract the teacher’s logits for its training data and create soft labels, with temperature $\tau = 2 0$ . We train different the students for different fractions $\rho = \{ 0 . 2 , 0 . 5 , 0 . 8 \}$ for 200 epochs, but otherwise use the same settings as for the teacher. After convergence all three students reach $\mathrm { a c c _ { t r a i n } ^ { S } \simeq 9 9 . 5 \% }$ and $\mathrm { a c c _ { t e s t } ^ { S } = \{ 0 . 2 1 3 , 0 . 5 2 4 , 0 . 6 5 2 \} }$ (Fig. 7) – while the test accuracy of random guessing is approximately $\mathrm { a c c _ { v a l } ^ { S } = 0 . 1 \% }$ . For seeing only $8 0 \%$ of the teachers data, the student achieves $> 6 0 \%$ accuracy on the held-out data. This suggests that, similar to single-layer models, an over-parameterized language model may recover a non-trivial fraction of the teacher’s held-out memorized data. However, despite some exploration of different parameters, we did not yet observe a setting where the teacher function is exactly recovered as for the MLPs, i.e. where the student reaches $\mathrm { a c c _ { t e s t } ^ { S } } = 1 0 0 \%$ .
siyfinetrhsettuidcedntastat.r $\mathrm { a c c _ { t r a i n } ^ { S } }$ iangd $\mathrm { a c c _ { t e s t } ^ { S } }$ tf frGaPcTti-o2ncslaosf$\rho$ random sentences memorized by a teacher with the same architecture (both pre-trained). The size of the memorized training set is $n = 6 0 0 0$ sentences made of three random numbers up to 1000, each with one of 1000 classes assigned randomly. | Dataset distillation aims to compress training data into fewer examples via a
teacher, from which a student can learn effectively. While its success is often
attributed to structure in the data, modern neural networks also memorize
specific facts, but if and how such memorized information is can transferred in
distillation settings remains less understood. In this work, we show that
students trained on soft labels from teachers can achieve non-trivial accuracy
on held-out memorized data they never directly observed. This effect persists
on structured data when the teacher has not generalized.To analyze it in
isolation, we consider finite random i.i.d. datasets where generalization is a
priori impossible and a successful teacher fit implies pure memorization.
Still, students can learn non-trivial information about the held-out data, in
some cases up to perfect accuracy. In those settings, enough soft labels are
available to recover the teacher functionally - the student matches the
teacher's predictions on all possible inputs, including the held-out memorized
data. We show that these phenomena strongly depend on the temperature with
which the logits are smoothed, but persist across varying network capacities,
architectures and dataset compositions. | [
"cs.LG"
] |
introduction resulted in a greater number and diversity of needs, suggesting that a two-phase approach is beneficial. Based on our findings, we recommend a hybrid approach combining surveys and interviews to balance efficiency and coverage. Future research should explore how automation can support elicitation and how taxonomies can be better integrated into different methods.
Index Terms—requirements engineering, explainability, survey studies, focus groups, interviews
# I. INTRODUCTION
As the complexity of modern software systems continues to increase, explainability is gaining importance as a nonfunctional requirement (NFR) in software engineering [1], [2]. Explainability requirements may concern and affect different system aspects [3], [4], such as privacy [5], [6], security [7], computer-human interaction [8], [9], or artificial intelligence [10]. As such, an effective implementation of system-integrated explanations depends on the employment of appropriate requirements engineering techniques for implementing appropriate explanations [11]. Those explainability requirements may be elicited via a variety of different methods [11]. One approach is to analyze direct user feedback, such as app store reviews, support channels like emails, or helpdesk queries [9], [12]. Alternatively, more traditional methods such as surveys, interviews, and workshops or focus groups can be employed [11]. Depending on the method, the elicited requirements differ in number, depth, and quality [13]. Existing research in security requirements engineering has revealed that different elicitation methods have distinct strengths [14]. Related work in the context of explainability has shown that while methods like surveys and questionnaires yield quantitative data that provide an insightful overview, qualitative data from interviews or workshops provide the detailed feedback needed to implement explanations sensibly [15].
To facilitate the efficient handling of explainability requirements, developers can use taxonomies to classify user needs based on system-specific or general elements [4], [12]. Taxonomies can be useful tools in requirements elicitation, as they may help stakeholders understand what kinds of requirements exist and what they entail [4]. Furthermore, taxonomies may be used for automation in requirements management, as demonstrated in recent research [9].
To investigate the efficiency and effectiveness of different elicitation methods for collecting explainability requirements, this paper examines three commonly used manual approaches: focus groups, interviews, and online surveys. In conducting this study, we utilize the explanation need taxonomy developed by Droste et al. [4] to structure and support the elicitation process. To understand how these methods compare to each other, we conducted a case study at a large German IT consulting company, utilizing a web-based personnel management software as the case. We used the three elicitation methods throughout multiple iterations with stakeholders at the case company and compared their effectiveness and efficiency in capturing explainability requirements. To gain additional insights into how taxonomy usage affects the completeness, structure, and diversity of elicited explanation needs, we investigated the impact of introducing the explainability need taxonomy during the data collection process.
In summary, the contributions of this work cover the following:
Empirical comparison of three elicitation methods — focus groups, interviews, and surveys — regarding their efficiency and effectiveness in capturing explainability requirements. Investigation of the impact of taxonomy usage on the elicitation process, analyzing whether and when categorizing explanation needs by type improves their identification. Evaluation of how elicitation methods influence the distribution of explanation needs across taxonomy categories, highlighting differences in the types of needs identified.
The rest of this paper is structured as follows: Section II reviews relevant background and related work. The study design is presented in Section III. Section IV outlines the results of the study, which are analyzed and discussed in Section V. Finally, the conclusions are drawn in Section VI.
# II. BACKGROUND AND RELATED WORK
# A. Software Explainability
Software explainability has become an increasingly popular research field in recent years, especially in the field of requirements engineering [1]–[3], [11].
One challenge in this field is the high degree of individuality in users’ needs for explanation [16]. In addition, different stakeholders (e.g., engineers, end-users, legal professionals) require different types of explanations [1]. This variability makes it essential to correctly identify explanation requirements to ensure that user needs are met. In particular, it is important not to overestimate users’ needs, as explanations may also negatively affect other non-functional requirements [3]. For example, Chazette et al. [2] revealed that explainability can impair a system’s usability. Furthermore, explanations can negatively impact user experience [8], causing stress [17] and increasing users’ cognitive load [18]. Chazette et al. [2] proposed user-centered design techniques to mitigate these negative effects and recommended balancing the costs and benefits of providing explanations. Ko¨hl et al. [1] suggested treating explainability requirements as trade-offs during development.
To facilitate the handling of explainability requirements, several taxonomies have been created in previous work [4], [12]. Unterbusch et al. [12] developed a taxonomy for explanation needs derived from app reviews, distinguishing between primary and secondary concerns. Droste et al. [4] created a taxonomy for explanation needs in everyday software systems using an online survey. They found that the categories interaction and system behavior were the most prevalent explanation needs in everyday software. Speith [19] reviewed eleven existing explainability taxonomies and provided recommendations for their development.
Chazette et al. [11] identified six key activities for developing explainable systems, which they validated through interviews with 19 practitioners. They emphasized that existing user-centered practices could be effectively utilized to gather and implement explainability requirements.
# B. Requirement Elicitation
1) Elicitation of Explainability Requirements: In recent years, the elicitation of explainability requirements has gained increasing attention [4], [9], [12], [20]–[24]. Identifying the need for explanations is challenging because directly asking users what explanations they require may introduce hypothetical bias [25] and the so-called why-not mentality [15]. The why-not mentality refers to users’ tendency to always answer affirmatively when asked whether they need an explanation, as they do not perceive any negative consequences. To mitigate these biases, Deters et al. [26] attempted to objectively detect the need for explanations using biometric data. However, their results showed that biometric data is not yet a reliable predictor of explanation needs.
Ramos et al. [16] sought to support the elicitation process by developing explainability personas. Based on the responses of 61 users to a questionnaire, they identified five distinct personas to represent different explanation needs.
To systematically categorize explanation needs, Droste et al. [4] conducted an online survey with 84 participants, identifying explanation needs in everyday software systems. To enable structured categorization, they developed a taxonomy based on their study results, comprising five main categories: interaction, system behavior, privacy and security, domain knowledge, and user interface.
To initiate a structured requirements engineering process for explainability, Chazette et al. [27] developed a quality framework for explainability, summarizing external dependencies, characteristics of explanations, and evaluation methods. Their framework facilitates the analysis, operationalization, and assessment of explainability requirements and was validated through a case study involving a navigation app.
2) Requirement Elicitation in General: The elicitation of requirements is a fundamental phase in requirements engineering, as it lays the foundation for all subsequent development activities [28]. The integration of stakeholders is crucial at this stage, as the software must be designed according to their needs and expectations [29]. Various methods exist for requirement elicitation, including interviews, surveys, observations, focus groups, brainstorming, and prototyping [28], [30], [31].
For complex projects with frequently changing requirements, Mishra et al. [29] recommend combining interviews, workshops, and iterative development to improve requirement accuracy and completeness.
Hadar et al. [32] investigated the impact of domain knowledge on requirement elicitation through interviews. Their study revealed that domain knowledge can have both positive and negative effects on communication and the understanding of stakeholder needs.
3) Comparison of Elicitation Methods: The selection and combination of appropriate elicitation techniques can significantly improve the quality and accuracy of collected requirements [31]. Anwar and Razali [33] conducted an empirical study to establish practical guidelines for selecting requirement elicitation methods. They identified four main factors influencing the choice of methods: technical characteristics, stakeholder traits, requirement sources, and the project environment. Their study found that experts prefer conversational methods, such as interviews and workshops, when users have deep domain knowledge [33]. Conversely, questionnaires are more suitable when analysts possess some knowledge of the system, as they can guide users through the questions more effectively [33].
In 2015, Pacheco et al. [28] conducted a systematic literature review on frequently used elicitation methods. They found that approximately $2 2 \%$ of the studies employed more than one method, suggesting that combining multiple approaches is beneficial. However, they did not provide specific recommendations on how to integrate these methods, noting that each has unique advantages in different situations.
Younas et al. [30] highlighted the need to address nonfunctional requirements (NFRs) early in the software development process, as they influence technology selection, hardware allocation, and security standards.
# C. Selection of Elicitation Methods
Requirements engineering research indicates that combining multiple elicitation methods can be effective [34]–[36]. In this context, conversational techniques such as interviews are among the most commonly used [31], [33]. Workshops, focus groups, and interviews are all considered conversational methods. Interviews are particularly effective when the system involves users with different roles [33] and are beneficial in both global software development and traditional environments for gathering detailed information [28]. They are preferred because they foster deeper discussions through communication [33]. Workshops help resolve discrepancies between users [33], encouraging collaboration and discussion [28], and are often recommended for requirement elicitation [37]. A study by Chazette et al. [11] found that interviews, focus groups, workshops, surveys, and personas are among the most effective methods for eliciting explainability requirements, as reported by experienced IT professionals. Kru¨ger [38] defines focus groups as collaborative discussions aimed at elaborating stakeholder opinions, with a moderator facilitating a comfortable environment for discussion. Focus groups are effective for gathering multiple opinions to formalize requirements [28]. Given the aim of this work to identify as many explainability needs as possible, focus groups were selected, as users require an introduction to explainability, and expert moderators can guide discussions effectively. For large user groups, surveys are recommended [33], although they require careful planning of questions [33]. However, surveys alone are less suitable for analyzing user experience [39], and should be combined with other methods [15], [40]. Interviews, as the primary method, enhance requirement quality when combined with other techniques [36].
# III. STUDY DESIGN
We conducted a comparative analysis of three requirement elicitation methods – focus groups, interviews, and an online survey – within a company that uses personnel management software. Our objective was to determine which elicitation method is the most effective and efficient for capturing explainability requirements.
The design of the overall research for this work is illustrated in Figure 1. Our process began with a requirements engineer designing a structured elicitation procedure, which served as the foundation for all three elicitation methods. This included the application of an existing taxonomy for categorizing explanation needs into categories such as Interaction or System behavior. Based on the results of the focus groups and interviews, the online survey was conducted using the delayed taxonomy approach. Once the survey was completed, a third categorized list of explanation needs was generated. The lists of needs collected throughout the three elicitation methods were then consolidated, analyzed, and evaluated to address the research questions. Eventually, the evaluation resulted in a comprehensive coded list of explanation needs.
# A. Research Goal and Research Questions
We strive to achieve the following goal, formulated according to Wohlin et al.’s Goal-Definition Template [42]:
Research Goal: Compare the efficiency and effectiveness of different elicitation methods for the purpose of identifying the most suitable approach for gathering explanation needs with respect to the elicitation methods focus groups, interviews, and online surveys from the point of view of a requirements engineer in the context of explainability requirements elicitation.
We investigate the following research questions:
• RQ1: Which of the elicitation methods (focus groups, interviews, or online surveys) is the most efficient for collecting explainability requirements? Answering this research question will help us identify which data collection method optimizes resource usage, such as time and effort, while collecting explainability requirements from users.
• RQ2: Which of the elicitation methods (focus groups, interviews, or online surveys) is the most effective for collecting explainability requirements? This question will allow us to determine which method generates the highest quality and most comprehensive data regarding explainability requirements, helping to guide future studies in this area.
• RQ3: How do the results from the focus groups, interviews, and online survey differ? By addressing this question, we can compare how different methods influence the types and frequencies of explanation needs captured, providing insight into the strengths and limitations of each method.
RQ4: How do the collected explanation needs differ depending on when an explainability need taxonomy is applied? The answer to this question will help us understand the impact of introducing a taxonomy at different stages of data collection on the comprehensiveness and
Fig. 1: Overview of our research design in FLOW notation [41].
categorization of explanation needs, providing guidance on when to apply a taxonomy to maximize the quality and completeness of elicited requirements in practice.
# B. Participant Recruitment
Participants were recruited from a large German IT consulting company that actively uses the personnel management software examined in this study. The focus groups were conducted in person at company offices, while interviews and the online survey were administered remotely. The survey was distributed using LimeSurvey, ensuring broad accessibility for employees across different locations.
# C. Methodology to Compare the Different Elicitation Methods
This paper compares three different elicitation methods. To support the elicitation processes, we provided the stakeholders with a taxonomy that details possible types of explanation needs. In particular, we used the taxonomy by Droste et al. [4], which serves as a checklist and provides a guideline for the requirements engineer to identify the desired explanation needs.
To enable a detailed analysis, we designed two main study variants: one with direct taxonomy usage from the outset, and one where the taxonomy was introduced only after an initial open elicitation phase. From these two variants, we derived and analyzed three conditions: without taxonomy (before introduction), direct taxonomy usage, and delayed taxonomy usage (after initial open elicitation followed by taxonomy introduction).
For both focus groups and interviews, these two variants were implemented identically: In the first variant, the taxonomy was used from the start (“direct taxonomy usage”). In the second, needs were first collected openly without the taxonomy, after which the taxonomy was introduced to gather additional requirements (“delayed taxonomy usage”). This design allowed us to compare the without and delayed taxonomy conditions within the same group of participants.
For the online survey, the same process as used in the focus groups and interviews served as the basis for its design. Insights from the previously conducted qualitative methods guided the decision to apply the delayed taxonomy usage variant in the survey, as it yielded the highest number of distinct needs while maintaining efficiency.
Tables I and II provide an overview of the steps in the two different study procedures. In the first approach, the taxonomy was introduced at the beginning and used immediately to elicit explainability requirements (direct condition). In the second approach, requirements were initially collected without the taxonomy (without condition), allowing participants to express their needs freely. The taxonomy was then introduced, and additional requirements were gathered from the same participants, enabling the analysis of the delayed taxonomy usage condition. In this version, questions that were previously asked at the end of the session as closing questions were incorporated as interim questions to enhance the elicitation process.
1) Focus Groups: Two focus groups were conducted, each consisting of six participants. One group elicited explanation needs using the taxonomy from the beginning, while the other group first collected requirements without the taxonomy and was then introduced to it afterward. Each session lasted between 71 and 73 minutes.
2) Interviews: A total of 18 participants took part in the interviews, with nine assigned to the direct taxonomy usage group and nine to the group without an initial taxonomy usage. All interviews were conducted online. For the group with immediate taxonomy usage, the interviews lasted between 6 and 20 minutes, with an average duration of 11:53 minutes. For the group without an initial taxonomy usage, the duration ranged from 5 to 22 minutes, with an average of 11:07 minutes. In the group where requirements were first elicited without the taxonomy and later with its usage, the interviews lasted between 8 and 33 minutes, with an average duration of 15:53 minutes.
TABLE I: Overview of the steps of the methods with details, for the version with direct taxonomy introduction.
TABLE II: Overview of the steps of the methods with details, for the version with later taxonomy introduction with changes marked compared to the direct taxonomy introduction version.
3) Survey: For the online survey, the variant with delayed taxonomy usage was chosen. This decision was based on the increased average number of explanation needs per participant, as indicated in Table III. The survey was designed and conducted using LimeSurvey1. A total of 895 participants started the survey, of whom 277 completed it. However, responses containing irrelevant answers, such as “I have no needs” or nonsensical input (e.g., “??????”), were excluded from the final dataset. After filtering, 188 valid responses remained for analysis. The time participants spent completing the survey varied significantly. For participants who answered without taxonomy usage, completion times ranged from 1:54 minutes to 62:22 minutes, with an average of 11:24 minutes and a median completion time of 9:16 minutes. For participants who first provided responses without the taxonomy and then continued with its usage, completion times ranged from 2:28 minutes to 87:28 minutes, with an average duration of 14:08 minutes and a median of 11:09 minutes.
# D. Data Analysis
1) Metrics: To compare the efficiency of the elicitation methods, the time required to collect the explanation needs was measured. The total study time represents the entire duration from the beginning to the end of each focus group, interview, or survey. This includes both the moderator, who conducts the focus groups and interviews, and all study participants. The total effort required for each method is calculated by multiplying the total study time by the number of participants involved:
Personal effort $\ b =$ Total study time $\mathbf { \nabla } \cdot \mathbf { \varepsilon }$ Number of participants
The elicitation time consists of the time spent on introducing the study to the participants and the actual process of collecting explanation needs. Participants also received an introduction to the concept of explainability before identifying their explanation needs, which was included in the calculation.
To assess the effectiveness of the elicitation methods, the absolute number of explanation needs collected was analyzed. The total number of explanation needs was then divided by the number of participants to account for differences in sample sizes, providing a normalized measure of effectiveness. Additionally, the distribution of needs across different categories, as defined by the taxonomy of Droste et al. [4], was examined to gain insights into the ability of each method to capture a diverse range of explanation needs.
2) Types of Explanation Needs: To evaluate the effectiveness of the different elicitation methods, all explanation needs were categorized based on the taxonomy by Droste et al. [4]. Although somewhat recent, the taxonomy has already been applied in practice by other researchers [9], [21], [43], which underlines its value in practice. The categorization was performed by one requirements engineer with experience in explainability. In cases of uncertainty, a second requirements engineer, also experienced in explainability, was consulted to ensure a consistent and accurate classification. Furthermore, the taxonomy was extended in alignment with Obaidi et al. [9] and supplemented with additional software-specific categories to allow the identification of multiple explanation needs per response. This extension enables the determination of the distinct number of explanation needs.
# IV. RESULTS
The anonymized dataset, including the questionnaires, is available on Zenodo [44]. The quantitative results of the elicitation methods are summarized in Table III.
The survey method, particularly with delayed taxonomy usage, yielded the highest number of total (471) and distinct needs (364). However, interviews with delayed taxonomy usage produced the highest number of distinct needs per participant (14.78), suggesting its effectiveness in capturing diverse explanation needs. Regarding efficiency, focus groups and interviews required significantly less personnel effort compared to the survey. The shortest average total and elicitation times were recorded for the survey with direct taxonomy usage ( $1 1 { : } 2 4 \mathrm { m i n }$ and $4 { : } 0 6 \mathrm { m i n }$ , respectively). However, when normalizing for participant effort, interviews with direct taxonomy usage demonstrated the highest number of distinct needs per personnel hour (12.42). Overall, the results indicate that while surveys allow for broad data collection, interviews—particularly with delayed taxonomy usage—offer a more efficient and focused means of capturing diverse and distinct explanation needs.
TABLE III: Statistical comparison of the elicitation methods, distinguishing the versions of taxonomy usage, with the best values per aspect marked.
Table IV provides an overview of the number of distinct needs per taxonomy category.
The results show that the survey method collected the highest number of distinct needs (364), followed by interviews (133) and focus groups (27). However, the percentage distribution of explanation needs across taxonomy categories varies between methods. While focus groups resulted in a higher proportion of needs related to Feature missing (up to $2 2 \%$ ) and Domain $( 2 6 - 5 7 \% )$ , interviews captured a relatively balanced distribution across categories, with a notable emphasis on System behavior $( 2 0 - 2 6 \% )$ and Feature missing $( 7 - 1 3 \% )$ . Surveys, in contrast, had the highest share of $U I$ needs (up to $1 8 \%$ ), suggesting that larger-scale participation may surface additional concerns in this category.
Figure 2 presents the number of distinct explanation needs per participant, categorized by explanation need type, without the use of a taxonomy. The normalized values reveal notable differences between elicitation methods. Interviews generated the highest number of needs per participant (10.7), significantly outperforming focus groups (3.2) and surveys (1.7). Across categories, interviews elicited a particularly high number of Domain needs (4.6 per participant), while focus groups distributed explanation needs more evenly. Surveys resulted in relatively fewer needs per category, indicating that large-scale data collection may yield a broader but less detailed set of needs per respondent.
Fig. 2: Number of explanation needs per participant without taxonomy usage, categorized by explanation need type.
Figure 3 presents the number of distinct explanation needs per participant when the taxonomy was introduced at the beginning. The results show that interviews yielded the highest number of explanation needs per participant (11.7), significantly outperforming focus groups (3.8). This suggests that the structured nature of interviews, where individuals express their needs without group influence, may facilitate a broader collection of distinct explanation needs. In contrast, focus groups, while still effective, resulted in a lower number of needs per participant, indicating that group discussions may lead to shared perspectives and reduced individual variance. Across both methods, the domain-related category exhibited the highest number of needs, particularly in interviews (4.7 per participant), reinforcing the idea that taxonomy guidance helps participants articulate their domain-specific explanation needs.
Fig. 3: Number of explanation needs per participant with direct taxonomy usage, categorized by explanation need type.
Figure 4 illustrates the number of distinct explanation needs per participant when the taxonomy was introduced after an initial phase without it. As seen in the direct taxonomy usage scenario, interviews again resulted in the highest number of needs per participant (14.8), followed by focus groups (4.5) and surveys (1.9). The increase in collected needs compared to direct taxonomy usage (especially in interviews) suggests that allowing participants to express their needs freely before introducing a structured taxonomy may help them articulate a broader range of requirements. In both focus groups and interviews, domain-related needs remained the most frequently mentioned category, with 6.0 per participant in interviews and
TABLE IV: Comparison of elicitation methods and taxonomy usage across absolute values and percentage distributions. The highest numbers are highlighted.
1.3 in focus groups, highlighting the importance of domainspecific explainability concerns. The survey method, while having the lowest per-participant count, still showed a slight increase compared to its direct taxonomy usage counterpart, suggesting that a delayed taxonomy introduction may provide benefits in guiding participants without restricting their initial thought process.
Fig. 4: Number of explanation needs per participant with delayed taxonomy usage, categorized by explanation need type.
The comparison of taxonomy usage across elicitation methods reveals that delayed taxonomy usage consistently led to the highest number of explanation needs per participant. This effect was particularly strong in interviews, where the delayed approach resulted in 14.8 needs per participant, compared to 11.7 with direct usage and 10.7 without taxonomy. The increase was especially notable for domain-related and featuremissing needs, suggesting that an initial open-ended elicitation phase helps participants articulate more detailed requirements before being guided by the taxonomy. While focus groups showed a smaller increase, surveys exhibited only minor differences, indicating that structured input formats benefit less from delayed taxonomy introduction than interactive methods like interviews and discussions.
Table $\mathrm { \Delta V }$ highlights that the survey resulted in the highest number of repeated explanation needs, which is expected given the large number of participants (188 after filtering). The proportion of explanation needs mentioned multiple times was $2 0 . 0 5 \%$ without taxonomy usage and $2 2 . 7 2 \%$ with taxonomy usage, with the highest individual explanation need being repeated 17 times. Interestingly, focus groups exhibited the highest proportion of repeated needs when the taxonomy was introduced directly $( 1 4 . 8 1 \% )$ , suggesting that participants may have been more guided in their responses. In comparison, interviews showed a much lower proportion of repeated needs, with only $3 . 6 7 \%$ under direct taxonomy usage. This indicates that interactive settings, particularly those with direct moderation, lead to a broader distribution of explanation needs, while surveys facilitate higher redundancy in the collected requirements.
Fig. 5: Comparison of explanation needs across elicitation methods: left without taxonomy usage, right with delayed taxonomy usage.
The Venn diagrams in Figure 5 illustrate the overlap of explanation needs across elicitation methods for two different taxonomy usage conditions: without taxonomy usage and with delayed taxonomy usage.
For the condition without taxonomy usage (Figure 5a), the majority of needs remain unique to each method, with 327 needs exclusive to the survey, 96 to interviews, and 19 to focus groups. Overlap between methods is limited, with 24 needs shared between surveys and interviews, and only one need identified across all three methods.
For the delayed taxonomy usage condition (Figure 5b), the overall trend remains similar, with the majority of needs being method-specific. Here, 336 needs are unique to surveys, 105 to interviews, and 22 to focus groups, while 25 needs overlap between interviews and surveys. Notably, the introduction of the taxonomy after an initial round of elicitation does not significantly alter the overlap between methods, but it does lead to a slightly higher total number of identified needs.
For the direct taxonomy usage condition, only focus groups and interviews were conducted, with 20 needs exclusive to focus groups, 102 to interviews, and only three shared between both. This indicates that direct taxonomy usage does not necessarily increase the overlap between methods, but rather helps structure the responses within each method.
TABLE V: Comparison of elicitation methods and taxonomy usage, showing the proportion of total needs to distinct needs, with category-wise distributions. The highest values per taxonomy usage group are bolded.
# V. DISCUSSION
In the following, we answer the research questions, present threats to validity, and interpret the results.
# A. Answers to the Research Questions
RQ1: Which of the elicitation methods (focus groups, interviews, or online surveys) is the most efficient for collecting explainability requirements? Interviews were the most efficient elicitation method, achieving the highest values for distinct needs per participant per average time (0:34 or higher) and per personnel effort (10:08 or higher), while surveys followed closely. Focus groups were the least efficient, with values below 0:15 and 5:54, respectively. However, when considering absolute numbers, surveys collected the highest total number of explanation needs, making them the most productive method in terms of sheer volume.
RQ2: Which of the elicitation methods (focus groups, interviews, or online surveys) is the most effective for collecting explainability requirements? Surveys collected the highest total number of explanation needs, but many were repeated, with $2 0 . 0 5 \%$ to $2 2 . 7 2 \%$ redundancy depending on taxonomy usage. Interviews had the highest number of distinct needs per participant, making them the most effective method for capturing a diverse set of explainability requirements.
RQ3: How do the results from the focus groups, interviews, and online survey differ? The distribution of explanation needs varied across methods, with interviews and focus groups eliciting more domain-related and featuremissing needs, while surveys captured a broader spread across categories but with more repetition. Business-related needs were primarily captured in focus groups, while Security & privacy needs were almost exclusively found in surveys with delayed taxonomy usage.
RQ4: How do the collected explanation needs differ depending on when an explainability need taxonomy is applied? Delayed taxonomy usage led to the highest number of distinct needs per participant, especially in interviews, where it increased the number of elicited needs by approximately $2 6 \%$ .
Direct taxonomy usage resulted in a more structured distribution across categories, while no taxonomy usage produced fewer needs overall, particularly for less intuitive categories like Security & privacy.
# B. Interpretation
The results of our study provide several insights into the efficiency and effectiveness of different elicitation methods for explainability requirements. Interviews emerged as the most efficient method, achieving the highest values for distinct needs per participant per average time, while focus groups were the least efficient. However, surveys collected the highest number of total and distinct needs, making them the most effective in absolute terms. This aligns with prior research [28], [33], which has shown that interviews and surveys can yield large amounts of data, while workshops and focus groups offer more structured, interactive discussions.
Beyond efficiency and effectiveness, our findings reveal that different elicitation methods tend to capture distinct types of explanation needs. Interviews elicited a particularly high proportion of Domain needs, while surveys identified more System behavior and User Interface needs. Focus groups, despite yielding fewer overall needs, showed a notable share of Feature missing needs. The low overlap between methods further supports that they elicit different perspectives on explainability requirements, emphasizing the necessity of a mixed-methods approach to achieve comprehensive coverage. 1) The Role of Taxonomy Usage: One of the key research questions in our study was whether the use of a taxonomy improves the elicitation of explainability requirements. The results indicate that delayed taxonomy usage yields the highest number of distinct needs, particularly in interviews and surveys. This suggests that initially allowing participants to articulate their explanation needs without a predefined structure encourages more spontaneous and diverse responses. Only afterward, when the taxonomy is introduced, do participants benefit from additional guidance, helping them refine and categorize their needs. This insight supports the findings of prior research [4], which highlights the importance of structured taxonomies for requirements management. However, the results suggest that introducing a taxonomy too early may limit creativity and reduce the diversity of elicited needs.
Furthermore, our findings indicate that surveys had a high dropout rate, particularly after the software usage questions and the introduction to explainability. This suggests that participants may expect surveys to be quick and straightforward and may disengage when they perceive them as too complex or time-consuming. This is an important consideration for companies using surveys to elicit requirements, as overly technical or lengthy surveys may deter participation. A more concise and user-friendly survey design could improve completion rates while still capturing valuable insights.
2) Recommendations for Companies: Based on these findings, we propose the following recommendations for companies seeking to elicit explainability requirements:
1) Use interviews for efficiency and surveys for scalability. If time and resources are limited, interviews provide the best balance between depth and efficiency. If a broader range of needs is required, surveys should be preferred.
2) Consider a two-phase elicitation approach. First, collect explanation needs openly without imposing a predefined taxonomy, then introduce a structured taxonomy to refine and categorize responses. This method maximizes creativity while ensuring completeness.
3) Combine multiple elicitation methods. Prior studies [28], [31] suggest that combining qualitative and quantitative methods leads to better requirements coverage. Our findings reinforce this by demonstrating that surveys, interviews, and focus groups all have unique strengths.
4) Ensure active moderation in focus groups. Since group discussions do not inherently prevent redundant responses, facilitators should encourage participants to build on previous answers rather than repeat them.
3) Contributions: Our study builds on existing literature by providing an empirical comparison of elicitation methods specifically for explainability requirements. While previous studies have compared requirement elicitation methods in general [31], [33], our work is among the first to explore the role of taxonomy usage in this context. The finding that delayed taxonomy usage leads to the most diverse set of needs is particularly noteworthy and could influence how future requirement elicitation frameworks are designed. Additionally, our results challenge the assumption that structured taxonomies always improve elicitation—while they enhance categorization, their premature use may restrict the diversity of responses.
Overall, our findings suggest that there is no universally superior elicitation method, but rather that the choice depends on the specific goals of the elicitation process. If a company aims to minimize resource expenditure, interviews should be conducted, as they are the most efficient method. If the goal is to identify as many explanation needs as possible, surveys are the best option. However, if ample resources are available, a combination of surveys and interviews is recommended to maximize both breadth and depth in requirement elicitation. Based on our results, companies should adopt a hybrid approach, leveraging interviews for efficiency, surveys for scalability, and taxonomies strategically to maximize the completeness and quality of collected requirements.
# C. Threats to Validity
The following section applies the ”Threats to Validity” as described by Wohlin [42] to the content of this work. These threats are categorized into construct, internal, conclusion, and external validity.
1) Construct Validity: One potential threat is the reliance on distinct needs as the primary metric for effectiveness. While this prevents inflation from redundant responses, it may overlook nuances in how explanation needs are articulated. Additionally, the introduction of a taxonomy could have influenced participants’ responses by steering them toward predefined categories rather than capturing more spontaneous needs.
The categorization of needs was performed by a single requirements engineer who consulted another expert in cases of uncertainty. However, inter-rater reliability was not explicitly measured, which could impact reproducibility. While a second requirements engineer was consulted in cases of uncertainty, the lack of a systematic inter-rater agreement assessment means coder subjectivity may still have influenced the final coding of needs. The classification of unique needs also carries the risk of misclassification, potentially affecting the accuracy of the results.
A potential threat is that “no taxonomy usage” and “delayed taxonomy usage” were not examined in separate studies, making the “no taxonomy usage” data a subset of the “delayed taxonomy usage” data. While this introduces dependencies, it also ensures that comparisons are not influenced by external factors like participant differences or environmental variations, providing a controlled insight into the impact of taxonomy introduction.
2) Internal Validity: A key limitation is the difference in data collection conditions: focus groups were conducted in person, while interviews and surveys took place online. This discrepancy may have influenced engagement, discussion flow, and facilitator interaction. Additionally, the two focus groups differed in composition—one consisted of participants who knew each other well, while the other was composed of individuals with less prior interaction, potentially affecting group dynamics.
Another threat concerns facilitation bias: The specific role and behavior of facilitators in interviews and focus groups were not systematically documented or controlled, which may have influenced the responses and depth of elicited needs.
The survey exhibited a high dropout rate, particularly after the software usage questions and the introduction to explainability. This suggests that some participants found the survey too complex or time-consuming, leading to potential selfselection bias. Responses may therefore predominantly reflect the views of more motivated participants. In addition, since the survey was conducted remotely, we cannot determine whether participants remained actively engaged throughout the entire session or if they left the survey open while attending to other tasks. However, this reflects a realistic scenario for online surveys in practice, where participants complete them at their own pace and with varying levels of focus, making this threat minimal in real-world applications.
Variations in session duration could have influenced participant attention, particularly in longer sessions where cognitive fatigue may have played a role. However, the durations of all three elicitation methods were comparable.
3) Conclusion Validity: The participant distribution across methods was imbalanced, with 188 survey respondents compared to 18 interview participants and 12 in focus groups, which affects the direct comparability of methods due to differences in statistical power. However, to ensure a fair evaluation of efficiency and effectiveness, the data was normalized, allowing for a more balanced comparison across elicitation methods.
No statistical tests were conducted to assess the significance of differences observed between elicitation methods. As a result, reported differences should be interpreted as descriptive rather than statistically confirmed.
While taxonomy usage was controlled in a structured manner, participants’ prior knowledge of explainability concepts was not explicitly measured. Differences in familiarity could have influenced how well participants articulated their needs. Additionally, the manual categorization of needs introduces potential subjectivity, as inter-rater reliability was not systematically assessed.
4) External Validity: The study was conducted within a single German company using personnel management software. The findings may not fully generalize to other domains, such as security-critical or AI-driven applications, where explanation needs could differ. Moreover, as the study focused on human resources software, the specific domain context may have biased the types of explanation needs collected. The influence of domain characteristics was acknowledged but not analyzed in depth. Furthermore, all participants were employees of the same company, limiting the diversity of perspectives.
Only three elicitation methods—focus groups, interviews, and surveys—were examined. Alternative approaches, such as ethnographic studies, participatory design, or observational techniques, were not included. Additionally, the high survey dropout rate suggests that online elicitation methods require careful design to maintain engagement, particularly for complex topics like explainability.
The role of taxonomies in elicitation was explored, but their impact beyond this stage remains unclear. Further research is needed to determine whether taxonomies support prioritization, validation, and implementation of explainability requirements in software development.
# D. Future Work
Our study provides valuable insights into the efficiency and effectiveness of different elicitation methods for explainability requirements. However, several avenues for future research remain open.
To confirm the generalizability of our results, similar studies should be conducted in different software domains (e.g., safety-critical systems, AI-driven applications) and across companies in different countries. This would help determine whether cultural or industry-specific factors influence the optimal choice of elicitation methods. Additionally, further research could explore automated or semi-automated approaches to support the elicitation process, such as AI-driven questionnaires or natural language processing techniques to refine and categorize explanation needs dynamically.
While our study focused on focus groups, interviews, and surveys, future research could investigate additional elicitation methods such as workshops, observations, user diaries, ethnographic studies, or participatory design approaches. These methods might offer alternative advantages in different organizational settings, especially when explanation needs are complex and context-dependent. Furthermore, an extended study could examine how group size, facilitator involvement, or interaction styles influence the effectiveness of these elicitation methods.
Another promising direction is to broaden the scope beyond explainability requirements and apply a similar methodology to other NFRs, such as usability, security, or transparency. Investigating whether elicitation methods yield different patterns for various NFRs could help refine best practices for software requirements engineering in general.
Additionally, the role of taxonomy usage in elicitation could be further explored. Our results suggest that delayed taxonomy usage is particularly effective, making it worthwhile to refine the taxonomy further by adding new categories or clarifying existing ones. Future studies could also investigate whether taxonomies not only aid in collecting requirements but also support requirements engineers and developers in structuring, prioritizing, and implementing explanations more effectively. Furthermore, using different taxonomies tailored to specific domains or software types could provide insights into their adaptability across various projects.
Our dataset includes demographic factors such as job role, age, and software usage frequency, which were not analyzed in this study. Future research could explore correlations between user characteristics and explanation needs, identifying patterns that could refine the selection of elicitation methods for different stakeholder groups.
Finally, our findings suggest that taxonomy usage impacts the diversity of collected needs, but the long-term impact of taxonomy usage in software development remains unclear. Future work could explore how taxonomies influence requirement validation, prioritization, and integration into agile development workflows. Investigating how teams adopt taxonomies beyond the elicitation phase—such as in design, implementation, and evaluation—could provide further guidance on their overall value in software engineering. | As software systems grow increasingly complex, explainability has become a
crucial non-functional requirement for transparency, user trust, and regulatory
compliance. Eliciting explainability requirements is challenging, as different
methods capture varying levels of detail and structure. This study examines the
efficiency and effectiveness of three commonly used elicitation methods - focus
groups, interviews, and online surveys - while also assessing the role of
taxonomy usage in structuring and improving the elicitation process. We
conducted a case study at a large German IT consulting company, utilizing a
web-based personnel management software. A total of two focus groups, 18
interviews, and an online survey with 188 participants were analyzed. The
results show that interviews were the most efficient, capturing the highest
number of distinct needs per participant per time spent. Surveys collected the
most explanation needs overall but had high redundancy. Delayed taxonomy
introduction resulted in a greater number and diversity of needs, suggesting
that a two-phase approach is beneficial. Based on our findings, we recommend a
hybrid approach combining surveys and interviews to balance efficiency and
coverage. Future research should explore how automation can support elicitation
and how taxonomies can be better integrated into different methods. | [
"cs.SE"
] |
# 1 Introduction
Commonsense reasoning is an important problem in natural language understanding. It helps models make inferences that match human knowledge about the world, such as cause and effect, physical actions, social behavior, and hidden assumptions (Sap et al., 2019; Aggarwal et al., 2021). Unlike models that only rely on surface-level text, commonsense reasoning allows systems to fill in missing information, reduce ambiguity, and understand what is not directly said. This ability is especially important in dialogue systems, where understanding the speaker’s intent often depends on shared background knowledge (Wu et al., 2020).
Intent detection is the task of identifying the purpose behind a speaker’s utterance. It is usually treated as a classification problem with a fixed set of intent labels (Mensio et al., 2018). However, in real-world settings, people often express their intentions in indirect ways. To handle this, models need to rely on pragmatic and commonsense understanding (Pareti et al., 2013; Louis et al., 2020). For example, knowing whether a question shows confusion, politeness, or sarcasm may require the system to infer social and contextual cues (Rashkin et al., 2018).
Recently, several studies have explored how commonsense knowledge can support intent detection and related tasks. These studies use different sources of knowledge, such as structured graphs (Lin et al., 2019), pretrained language models (Bosselut et al., 2019), and knowledge generated through prompting (Liu et al., 2021). At the same time, researchers have developed new benchmarks that test models’ reasoning skills beyond surface similarity, focusing on causal reasoning, generalization, and understanding implicit meaning (Tafjord et al., 2019; Chi et al., 2024).
This review gives an updated overview of recent work on commonsense reasoning and intent detection, focusing on papers published between 2020 and 2025 in ACL, EMNLP, and CHI. Unlike earlier surveys, we include a wider range of methods, such as graph-based, generative, contrastive, and hybrid models, and cover different types of reasoning like causal, physical, and social. We also explore how these methods are used in intent detection, especially in multilingual, interactive, and user-centered settings. By combining work from both NLP and HCI fields, this review offers a more interdisciplinary view and points out important directions for future research that connects reasoning with intent understanding in real-world applications.
# 2 Related Works
Prior surveys in commonsense reasoning and intent detection have primarily focused on knowledge representation frameworks or specific downstream tasks such as question answering or dialogue generation. For instance, Yu et al. (2024) provide a comprehensive overview of natural language reasoning in NLP. and the integration of commonsense knowledge into NLP tasks. In the area of conversational AI, Richardson and Heck (2023)
examine the role of commonsense reasoning in dialogue systems, discussing relevant datasets and approaches. Other reviews have assessed commonsense reasoning from a systems perspective, outlining how benchmarks are structured and revealing inconsistencies in how such knowledge is evaluated (Losey et al., 2018). Meanwhile, Weld et al. (2022) emphasize the integration of physical, social, and intuitive commonsense into NLP and LLM-driven systems.
Regarding intent detection, Weld et al. (2022) analyze trends and challenges in joint models for intent classification and slot filling. Additionally, Jbene et al. (2025) evaluate the performance of different neural network architectures in intent recognition. Broader reviews further extend this perspective by considering intent inference in physical human–AI interaction, where real-time prediction of user intent is critical for shared control and feedback (Liu et al., 2019; Davis and Marcus, 2015). Joint modeling of intent and slots has also been critically evaluated, particularly in regard to system performance under multi-task constraints (Sap et al., 2020).
Previous surveys on commonsense reasoning and intent detection have mostly focused on knowledge representation or specific applications like question answering and dialogue systems. They rarely integrate diverse modeling approaches or consider human-centered use cases. Moreover, they are largely confined to NLP venues such as ACL and EMNLP, often overlooking insights from HCI forums like CHI. Our review expands on this by covering a broader set of papers from ACL, EMNLP, and CHI (2020–2025), organized into themes such as zero-shot modeling, cultural adaptation, structured reasoning, and user-focused applications. By reviewing both NLP and HCI work, we offer a more interdisciplinary and updated perspective on reasoning and intent understanding, including multilingual evaluation, generative intent labeling, and interactive systems.
# 3 Methodology
We conducted a structured review of commonsense reasoning methods in NLP with a focus on their relevance to intent detection and dialogue understanding. The keywords used in our search were selected based on terminology frequently appearing in prior surveys, benchmark datasets, and recent ACL, EMNLP, and CHI publications. Terms like "commonsense reasoning," "intent detection," and "social inference" were included, along with others related to methodology and context. The full list is in Appendix A. We focused on peer-reviewed papers from top conferences between 2020 and 2025 (ACL, EMNLP, CHI). We excluded preprints and unrelated downstream tasks for reviewing. After initial screening, 28 papers were selected for detailed review. These were grouped into categories based on methodological and application-based approach (graph-based, generative, prompting or hybrid) and reasoning type (causal, dialogic, social). The aim of this review is to synthesize key research trends, critically compare existing methodologies, and highlight ongoing challenges at the intersection of commonsense reasoning and intent detection in natural language processing.
# 4 Result
We have divided our selected papers into two main themes and then into subthemes. The following part of this section will discuss about each subthemes in details. The categorization of all the papers is shown in Table 1.
# 4.1 Commonsense Reasoning
We have categorized all the papers related to commonsense reasoning to four sub-themes: (1) Selfsupervised and Zero-shot Learning, (2) Multilingual and Cultural Adaptation, (3) Structured Reasoning and Evaluation Analysis, and (4) Interactive, Dialog-based, and Applied Commonsense.
# 4.1.1 Self-supervised and Zero-shot Learning
Modern approaches often move beyond supervised training, relying on internal model dynamics or indirect supervision. The "self-talk" approach allows a model to pose and answer internal queries, improving zero-shot QA by enhancing internal knowledge activation (Shwartz et al., 2020). Similarly, Klein and Nabi (2021) applied perturbation-based refinement to improve Winograd Schema Challenge performance, achieving competitive results without external supervision. Lin et al. (2021b) further extended this trend by proposing DrFact, a differentiable open-ended reasoning system using multi-hop retrieval from commonsense corpora. The system shows strong improvements but still relies on high-quality corpora, which may not cover all needed knowledge. Another work introduces COMET, a model that uses transformers to generate commonsense knowledge graphs, building upon existing resources like ConceptNet (Murata and Kawahara, 2024). However, the generated knowledge can be noisy sometimes and lacks grounding in specific contexts.
Table 1: Categorization of reviewed papers by main theme and sub-theme
# 4.1.2 Multilingual and Cultural Adaptation
Commonsense knowledge is often culturally embedded. Lin et al. (2021a) addressed this limitation with the Mickey Corpus and multilingual pretraining to improve non-English commonsense QA. Despite success, the model risks inheriting Englishcentric biases through translated datasets. Yin et al. (2021) explored cultural bias in visual reasoning. Their GD-VCR dataset reveals that models trained on Western imagery perform poorly on scenes from Africa and Asia, especially when reasoning requires cultural understanding (e.g., ceremonies). Another multilingual dataset named X-CSQA, is designed to evaluate and improve commonsense reasoning across different languages (Sakai et al., 2024). However, many translated questions preserve English-centric logic and cultural assumptions.
# 4.1.3 Structured Reasoning and Evaluation Analysis
To evaluate structured reasoning, Saha et al. (2021) introduced ExplaGraphs, where models generate explanation graphs to justify stance classification.
Results show that current systems fall short of human explanation quality. Branco et al. (2021) raised fundamental concerns about the reliability of commonsense benchmarks, showing that models exploit shallow dataset artifacts instead of reasoning. Another paper presents ATOMIC 2020, a comprehensive commonsense knowledge graph that enhances the reasoning capabilities of AI models (Hwangy et al.). Though the symbolic structure offers high coverage, it suffers from sparsity. He et al. (2020) extended this idea into NMT, revealing that translation systems struggle with disambiguation that requires implicit knowledge.
# 4.1.4 Interactive, Dialog-based, and Applied Commonsense
Commonsense reasoning is often embedded in social interaction and social computing area. Ghosal et al. (2022) introduced CICERO, a large annotated dialogue corpus where utterances are tagged with motivations, reactions, and other inferences. Such dialogue-specific reasoning challenges the generalizability across domains. Romero and Razniewski (2022) suggested training models with children’s books to teach simple, explicit commonsense. Their childBERT system performs well, indicating that less abstract texts may benefit model training. However, such texts cover mostly basic knowledge and exclude nuanced or adult-level reasoning.
Chen et al. (2023) examined how large language models (LLMs) fail at generating negative commonsense (e.g., "Lions don’t live in the ocean"), even when they can handle the equivalent questionanswer form. This shows a structural bias in generative training objectives. Qu et al. (2022) applied commonsense reasoning to e-commerce by creating a benchmark for "salient" commonsense facts that are relevant to product entities. Their work moves toward task-specific commonsense selection rather than general retrieval. And, Xu et al. (2023) proposed an HCI approach where children co-participate with AI in storytelling dialogues to train systems in commonsense reasoning. The system is experimental, but reflects a broader vision of collaborative knowledge learning.
# 4.2 Intent Detection
For the intent detection, we have divided the papers in multiple directions: (1) Open-set and Zero-shot Detection, (2) Multi-intent Modeling and Generative Formulation, (3) Contrastive Learning and Intent Discovery, (4) Human-centered and HCI Applications.
# 4.2.1 Open-set and Zero-shot Detection
Fan et al. (2020) used a semantic Gaussian mixture model (SEG) to identify out-of-distribution utterances based on learned intent clusters, offering a robust open-set detection framework. Ouyang et al. (2021) used energy-based modeling and synthetically generated "pseudo-OOD" utterances to boost detection ability. Choi et al. (2021) offered a complementary adversarial strategy via HotFlip attacks that perturb known intent utterances to simulate unseen cases. Wu et al. (2021) framed zero-shot multi-intent classification through label-aware embeddings. Their LABAN model projects utterances into a shared space with intent labels, achieving high performance on unseen intent types.
# 4.2.2 Multi-intent Modeling and Generative Formulation
To handle utterances with more than one intent, Qin et al. (2020) proposed AGIF, a graph-based model that dynamically integrates slot and intent features, showing strong performance on both multi- and single-intent datasets. Meanwhile, Zhang et al. (2024) reframed intent detection as a generative task. Their Gen-PINT system uses prompts to generate intent labels from utterances in low-resource or few-shot setups.
# 4.2.3 Contrastive Learning and Clustering
Discovery of unseen intents without full supervision is also studied. Kumar et al. (2022) applied deep contrastive clustering on partially labeled user logs, enabling dynamic intent categorization. Zhang et al. (2022) enhanced this idea with a twostage strategy: pre-training on known data and finetuning with contrastive loss to better isolate novel intents. Both works show that clustering-based approaches can reduce annotation cost, though they depend on accurate semantic separation.
# 4.2.4 Human-centered and HCI Applications
Real-world HCI contexts create specialized needs for intent detection. ¸Sencan (2024) developed a classifier to detect self-harm intent from search queries, supporting mental health interventions. Their work highlights how intent modeling must balance sensitivity and specificity in critical use cases. Yu et al. (2023) designed a voice assistant for older adults, focusing on UI-navigation queries like "where is history?". Their study identifies key intent types and challenges in parsing helpseeking utterances. Belardinelli (2024) provided a broader survey on gaze-based intent estimation, which uses visual attention transformed into text to infer user goals in UI interactions, although this line is still mostly exploratory. Reese and Smirnova (2024) compared ChatGPT to human subjects in the Japanese Winograd Schema Challenge, noting that the model underperforms in linguistically subtle and culturally specific reasoning. This study links with commonsense reasoning and emphasizes cross-linguistic limitations of current LLMs.
# 5 Discussion
This review reveals a methodological shift in commonsense reasoning and intent detection from supervised learning toward more adaptive, contextaware approaches. Zero-shot and generative methods exemplify efforts to reduce reliance on labeled data while expanding generalization capacity. However, challenges remain around grounding: systems like COMET and DrFact extend knowledge bases and retrieval capabilities, but often lack contextual anchoring, limiting their practical reasoning depth. Cultural and multilingual adaptation has gained attention, yet many resources still exhibit English-centric biases. Though datasets like X
CSQA and Mickey aim to diversify language coverage, translated content frequently retains Western logic, undermining efforts toward true crosscultural reasoning. This tension is also present in visual and symbolic reasoning systems, where knowledge sparsity and cultural abstraction persist despite broader coverage. Structured reasoning frameworks improve interpretability, but findings from works like ExplaGraphs and Branco et al. (2021) caution against overreliance on benchmark performance, which may mask shallow heuristics. Interactive and dialog-based applications reflect a promising shift: Systems trained in pedagogical or social dialogue corpora foreground situational common sense and user alignment, although domain limitations remain.
In intent detection, open-set and contrastive models show progress in handling unseen or overlapping intents. Still, their success depends heavily on embedding quality and semantic separation. Generative intent labeling provides flexibility in lowresource settings, but introduces challenges of consistency and evaluation. Clustering based methods reduce annotation cost but risk semantic noise. Finally, HCI-focused work marks an essential evolution in framing intent not just as a classification task but as a design problem. Systems that address vulnerable users, such as older adults or individuals in distress, highlight the importance of interpretability, fairness, and contextual awareness. These findings point to a broader convergence: future systems must balance robustness with cultural sensitivity, task specificity with generalization with human-centered values.
# References
Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, and Dinesh Garg. 2021. Explanations for commonsenseqa: New dataset and models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3050–3065.
Anna Belardinelli. 2024. Gaze-based intention estimation: principles, methodologies, and applications in hri. ACM Transactions on Human-Robot Interaction, 13(3):1–30.
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. arXiv preprint arXiv:1906.05317.
Ruben Branco, António Branco, João António Rodrigues, and João Ricardo Silva. 2021. Shortcutted commonsense: Data spuriousness in deep learning of commonsense reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1504–1521, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, and Yanghua Xiao. 2023. Say what you mean! large language models speak too positively about negative commonsense knowledge. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9890–9908, Toronto, Canada. Association for Computational Linguistics.
Haoang Chi, He Li, Wenjing Yang, Feng Liu, Long Lan, Xiaoguang Ren, Tongliang Liu, and Bo Han. 2024. Unveiling causal reasoning in large language models: Reality or mirage? Advances in Neural Information Processing Systems, 37:96640–96670.
DongHyun Choi, Myeong Cheol Shin, EungGyun Kim, and Dong Ryeol Shin. 2021. OutFlip: Generating examples for unknown intent detection with natural language attack. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 504–512, Online. Association for Computational Linguistics.
Ernest Davis and Gary Marcus. 2015. Commonsense reasoning and commonsense knowledge in artificial intelligence. Communications of the ACM, 58(9):92– 103.
Lu Fan, Guangfeng Yan, Qimai Li, Han Liu, Xiaotong Zhang, Albert Y.S. Lam, and Xiao-Ming Wu. 2020. Unknown intent detection using Gaussian mixture model with an application to zero-shot intent classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1050–1060, Online. Association for Computational Linguistics.
Deepanway Ghosal, Siqi Shen, Navonil Majumder, Rada Mihalcea, and Soujanya Poria. 2022. CICERO: A dataset for contextualized commonsense inference in dialogues. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5010–5028, Dublin, Ireland. Association for Computational Linguistics.
Jie He, Tao Wang, Deyi Xiong, and Qun Liu. 2020. The box is in the pen: Evaluating commonsense reasoning in neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3662–3672, Online. Association for Computational Linguistics.
D Jena Hwangy and 1 others. Atomic2020: On symbolic and neural commonsense knowledge graphs. AAAI2020.
Mourad Jbene, Abdellah Chehri, Rachid Saadane, Smail Tigani, and Gwanggil Jeon. 2025. Intent detection for task-oriented conversational agents: A comparative study of recurrent neural networks and transformer models. Expert Systems, 42(2):e13712.
Tassilo Klein and Moin Nabi. 2021. Towards zeroshot commonsense reasoning with self-supervised refinement of language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8737–8743, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Rajat Kumar, Mayur Patidar, Vaibhav Varshney, Lovekesh Vig, and Gautam Shroff. 2022. Intent detection and discovery from user logs via deep semisupervised contrastive clustering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1836–1853, Seattle, United States. Association for Computational Linguistics.
Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151.
Bill Yuchen Lin, Seyeon Lee, Xiaoyang Qiao, and Xiang Ren. 2021a. Common sense beyond English: Evaluating and improving multilingual language models for commonsense reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1274–1287, Online. Association for Computational Linguistics.
Bill Yuchen Lin, Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Xiang Ren, and William Cohen. 2021b. Differentiable open-ended commonsense reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4611–4625, Online. Association for Computational Linguistics.
Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2021. Generated knowledge prompting for commonsense reasoning. arXiv preprint arXiv:2110.08387.
Jiao Liu, Yanling Li, and Min Lin. 2019. Review of intent detection methods in the human-machine dialogue system. In Journal of physics: conference series, volume 1267, page 012059. IOP Publishing.
Dylan P Losey, Craig G McDonald, Edoardo Battaglia, and Marcia K O’Malley. 2018. A review of intent detection, arbitration, and communication aspects of shared control for physical human–robot interaction. Applied Mechanics Reviews, 70(1):010804.
Annie Louis, Dan Roth, and Filip Radlinski. 2020. " i’d rather just go to bed": Understanding indirect answers. arXiv preprint arXiv:2010.03450.
Martino Mensio, Giuseppe Rizzo, and Maurizio Morisio. 2018. Multi-turn qa: A rnn contextual approach to intent classification for goal-oriented systems. In Companion Proceedings of the The Web Conference 2018, pages 1075–1080.
Eiki Murata and Daisuke Kawahara. 2024. Time-aware comet: a commonsense knowledge model with temporal knowledge. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LRECCOLING 2024), pages 16162–16174.
Yawen Ouyang, Jiasheng Ye, Yu Chen, Xinyu Dai, Shujian Huang, and Jiajun Chen. 2021. Energy-based unknown intent detection with data manipulation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2852–2861, Online. Association for Computational Linguistics.
Silvia Pareti, Tim O’keefe, Ioannis Konstas, James R Curran, and Irena Koprinska. 2013. Automatically detecting and attributing indirect quotations. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 989–999.
Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. AGIF: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1807–1816, Online. Association for Computational Linguistics.
Yincen Qu, Ningyu Zhang, Hui Chen, Zelin Dai, Chengming Wang, Xiaoyu Wang, Qiang Chen, and Huajun Chen. 2022. Commonsense knowledge salience evaluation with a benchmark dataset in E-commerce. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 14–27, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic opendomain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.
May Lynn Reese and Anastasia Smirnova. 2024. Comparing chatgpt and humans on world knowledge and common-sense reasoning tasks: A case study of the japanese winograd schema challenge. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pages 1–9.
Christopher Richardson and Larry Heck. 2023. Commonsense reasoning for conversational ai: A survey of the state of the art. arXiv preprint arXiv:2302.07926.
Julien Romero and Simon Razniewski. 2022. Do children texts hold the key to commonsense knowledge?
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10954–10959, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Swarnadeep Saha, Prateek Yadav, Lisa Bauer, and Mohit Bansal. 2021. ExplaGraphs: An explanation graph generation task for structured commonsense reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7716–7740, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Yusuke Sakai, Hidetaka Kamigaito, and Taro Watanabe. 2024. mCSQA: Multilingual commonsense reasoning dataset with unified creation strategy by language models and humans. In Findings of the Association for Computational Linguistics: ACL 2024, pages 14182–14214, Bangkok, Thailand. Association for Computational Linguistics.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019. Socialiqa: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728.
Maarten Sap, Vered Shwartz, Antoine Bosselut, Yejin Choi, and Dan Roth. 2020. Commonsense reasoning for natural language processing. In Proceedings of the 58th annual meeting of the association for computational linguistics: Tutorial abstracts, pages 27–33.
Cevdet ¸Sencan. 2024. Intention mining: surfacing and reshaping deep intentions by proactive human computer interaction.
Vered Shwartz, Peter West, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. Unsupervised commonsense question answering with self-talk. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4615–4629, Online. Association for Computational Linguistics.
Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. Quartz: An open-domain dataset of qualitative relationship questions. arXiv preprint arXiv:1909.03553.
Henry Weld, Xiaoqi Huang, Siqu Long, Josiah Poon, and Soyeon Caren Han. 2022. A survey of joint intent detection and slot filling models in natural language understanding. ACM Computing Surveys, 55(8):1– 38.
Sixing Wu, Ying Li, Dawei Zhang, Yang Zhou, and Zhonghai Wu. 2020. Diverse and informative dialogue generation with context-specific commonsense knowledge awareness. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 5811–5820.
Ting-Wei Wu, Ruolin Su, and Biing Juang. 2021. A label-aware BERT attention network for zero-shot multi-intent detection in spoken language understanding. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4884–4896, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Erqian Xu, Hecong Wang, and Zhen Bai. 2023. Engage ai and child in explanatory dialogue on commonsense reasoning. In Extended Abstracts of the $2 0 2 3 \ C H I$ Conference on Human Factors in Computing Systems, pages 1–8.
Da Yin, Liunian Harold Li, Ziniu Hu, Nanyun Peng, and Kai-Wei Chang. 2021. Broaden the vision: Geodiverse visual commonsense reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2115–2129, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Fei Yu, Hongbo Zhang, Prayag Tiwari, and Benyou Wang. 2024. Natural language reasoning, a survey. ACM Computing Surveys, 56(12):1–39.
Ja Eun Yu, Natalie Parde, and Debaleena Chattopadhyay. 2023. “where is history”: Toward designing a voice assistant to help older adults locate interface features quickly. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1– 19.
Feng Zhang, Wei Chen, Fei Ding, Meng Gao, Tengjiao Wang, Jiahui Yao, and Jiabin Zheng. 2024. From discrimination to generation: Low-resource intent detection with language model instruction tuning. In Findings of the Association for Computational Linguistics: ACL 2024, pages 10167–10183, Bangkok, Thailand. Association for Computational Linguistics.
Yuwei Zhang, Haode Zhang, Li-Ming Zhan, Xiao-Ming Wu, and Albert Lam. 2022. New intent discovery with pre-training and contrastive learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 256–269, Dublin, Ireland. Association for Computational Linguistics.
# A Keywords Used to Look For Papers
"commonsense reasoning" "commonsense inference" "intent detection" "dialogue intent" "pragmatic inference" "context-aware intent classification" "commonsense in dialogue" "situated reasoning", "knowledge graph" "commonsense knowledge" "conceptnet" "social commonsense" "causal reasoning" "multi-hop reasoning" "language model prompting" "neural-symbolic reasoning" "zeroshot intent detection" "few-shot commonsense reasoning" "commonsense benchmarks", "social inference" | This review explores recent advances in commonsense reasoning and intent
detection, two key challenges in natural language understanding. We analyze 28
papers from ACL, EMNLP, and CHI (2020-2025), organizing them by methodology and
application. Commonsense reasoning is reviewed across zero-shot learning,
cultural adaptation, structured evaluation, and interactive contexts. Intent
detection is examined through open-set models, generative formulations,
clustering, and human-centered systems. By bridging insights from NLP and HCI,
we highlight emerging trends toward more adaptive, multilingual, and
context-aware models, and identify key gaps in grounding, generalization, and
benchmark design. | [
"cs.CL",
"cs.HC"
] |
# 1 INTRODUCTION
The usability of open source software (OSS) is a qualitative characteristic evaluating its ease of learning and usage, significantly impacting their adoption, development efficiency, and user satisfaction [1, 5, 6, 8]. A highly usable OSS platform helps efficient task completion, enhances developer productivity, reduces error likelihood, promotes broader adoption within developer communities, improves code maintainability, and fosters a satisfying development experience. However, despite its importance, conducting usability evaluations remains challenging in practice. Traditional evaluation methods primarily rely on empirical studies involving human participants, where developers are recruited to interact with the platform and answer interview questions [3, 7]. However, these human-centric methods face substantial challenges, including high costs, limited scalability, and difficulties in recruiting qualified participants, particularly when evaluating OSS with extensive functionality and complex application scenarios [3].
To address the shortcomings of existing evaluation methods, we introduce OSS-UAgent, a scalable and automated usability evaluation framework that replaces human evaluators with intelligent agents powered by large language models (LLMs), which have demonstrated capabilities comparable to, or even exceeding, human performance in various programming and evaluation tasks, making them a feasible solution to reduce evaluation costs and improve scalability [2, 4, 9]. Specifically, our framework first collects platform-specific data, including API documentation, research papers, and sample code, to construct a vectorized knowledge base. This knowledge base enables context-aware retrieval of relevant information during code generation, ensuring alignment with platform standards. OSS-UAgent simulates developers across multiple experience levels (from Junior to Expert) by generating role-specific prompts that reflect varying platform familiarity. Subsequently, the LLM-based agent generates corresponding code based on these multi-level prompts with knowledge base support. Finally, the generated code is evaluated against standard implementations based on compliance, correctness, and readability metrics. OSS-UAgent significantly reduces evaluation costs and enhances scalability by automating the evaluation process, enabling the evaluation of largescale OSS platforms without the need for extensive human participation.
As graph analytics platforms have been widely studied in the database community, we use them as representative examples to demonstrate our OSS-UAgent through an interactive graphical
Researcher Results Chunk 01011010110101101011 日 Chunk Embedding 01011010110101101011 Store 三四 01 1101011 VectorDB Documents Chunk 1101011
Platform Knowledge Construction Query Generated Code Output Junior graph) Nodes
Documents Intermediate PFaisls td:vetdrs,oub1 Evaluate Prompts + Developer X_ITER Output Senior @ 。 8 Input Input 福 Expert Code Generator 点
Prompts Metrics Evaluator Codes
Multi-Level Developer Simulation Code Generation Multi-Dimensional Evaluation
user interface1. This includes automatic retrieval and analysis of data from platform GitHub repositories, construction of a dynamic knowledge base, generation of multi-level prompts, and automated code evaluation. To ensure fairness and consistency in assessments, the collected platform data will be anonymized. The user interface presents generated code, evaluation outcomes, and detailed reports segmented by experience level, providing clear insights into the platform’s usability.
# 2 FRAMEWORK OVERVIEW
The core idea of our usability evaluation framework OSS-UAgent is to leverage LLM-based agents to simulate developers with different experience levels, enabling them to complete platform-specific tasks and to evaluate the usability of the OSS by analyzing the quality of the generated code. As shown in Figure 1, OSS-UAgent consists of multiple agents and has four main steps, Platform Knowledge Construction, Multi-Level Developer Simulation, Code Generation and Multi-Dimensional Evaluation. Specifically, (1) the Researcher agent automatically gathers and processes platform-specific documents, then builds a vector database that supports efficient, context-aware retrieval for code generation; 2 the Developer agent simulates developers at different experience levels (Junior, Intermediate, Senior, Expert); 3 the Code Generator agent generates code based on prompts and retrieved knowledge; and (4) the Evaluator agent scores the generated code according to predefined criteria.
# 2.1 Platform Knowledge Construction
To ensure the generated code follows platform-specific standards, our framework introduces the Researcher agent, which dynamically builds and maintains a structured knowledge base. As illustrated in Figure 1 $\textcircled{1} ,$ Researcher first gathers platform-specific documents, including research papers, API documentation, sample code, and coding guidelines. These documents are preprocessed into smaller chunks, embedded into vector representations, and stored in a vector database to support efficient similarity-based retrieval.
During code generation, the Code Generator queries the vector database to retrieve the most relevant data, which is injected into the prompt context. This ensures the generated code follows platformspecific standards and improves overall accuracy.
To ensure fairness and eliminate bias, we anonymize all platformspecific identifiers (e.g., platform names, unique function names, and parameter identifiers) during the preprocessing phase. This anonymization ensures that Evaluator evaluates platforms based on general usability rather than prior familiarity with specific platform details.
# 2.2 Multi-Level Developer Simulation
A platform’s usability can vary significantly depending on a developer’s experience. However, it is not straightforward to determine which experience level a generic LLM might represent in practice. To address this challenge, our framework employs the Developer agent to simulate developers at four distinct experience levels, Junior, Intermediate, Senior, and Expert, through hierarchical prompts, ensuring a more comprehensive and realistic evaluation.
• Level 1 (Junior) represents individuals with basic programming knowledge but little experience with APIs. Prompts provide only task descriptions.
Level 2 (Intermediate) corresponds to developers with some exposure to APIs but limited expertise. Prompts include general guidance, such as function names and parameter hints.
• Level 3 (Senior) represents experienced developers familiar with best practices and efficient API usage. Prompts provide more structured information, including example use cases.
• Level 4 (Expert) simulates highly skilled developers who can understand complex requirements and optimize solutions. Prompts contain comprehensive details and and they expect high-quality, efficient implementations.
For each experience level, Developer receives tailored prompts, such as: Junior Prompt: “You are a beginner developer and you have no prior knowledge of the API in platform $X$ ... Here is the relevant information about the task...” Expert Prompt: “You are an expert developer ... Below is detailed information about the API and related concepts...”
# 2.3 Code Generation
In this step (Figure 1 $\textcircled{3}$ , the Code Generator agent receives multilevel prompts from Developer. Each prompt is linked to a specific developer role and represents a distinct experience level. Based on the task requirements and the developer’s role, Code Generator queries the vector database built by Researcher for relevant API function details. It then adds this information to the prompt to ensure the generated code follows platform standards. Finally, Code Generator produces a set of code implementations that match the style and knowledge of each role. These implementations become the main input for the evaluation described in Section 2.4.
# 2.4 Multi-Dimensional Evaluation
Determining the quality of the generated code is critical for evaluating the usability of developer platforms. Correctness and readability are widely recognized metrics in API usability evaluation [5, 7, 8]. Correctness reflects whether the API documentation and examples sufficiently guide developers to achieve intended functionality accurately. Readability measures whether the API’s design encourages clear and maintainable code. However, we observe that LLMs often exhibit “hallucination”, focusing on general programming patterns or inventing nonexistent API functions while ignoring platformspecific APIs, typically due to unclear or ambiguous prompts. This limitation mirrors the behavior of human developers: less experienced programmers are more likely to make mistakes when dealing with poorly designed APIs. To address this, we introduce a new metric, compliance, which measures how closely the generated code aligns with standard code. This metric reflects the API’s intuitiveness and accessibility for developers with varying skill levels. By assessing whether the API enables users to easily produce code that establishes best practices and standards, compliance provides an objective basis for scoring API usability.
The details of evaluation metrics are as follows:
• Compliance. Checking the adherence to platform-specific coding standards and best practices by comparing the generated code with standard code examples. Compliance ensures that the code integrates well with existing systems • Correctness. This metric is for ensuring the generated code performs the intended task accurately. This includes verifying the logic of the code and the correctness of function calls. Readability. This metric focuses on code clarity and maintainability. Readable code is easier to understand, modify, and debug. It should be well-structured, logically grouped, and follow consistent naming conventions.
Figure $\textcircled{4}$ illustrates the training process: we first provide detailed scoring criteria and set basic requirements instructions to get Evaluator. Then, we introduce some test code and provide feedback based on the output results to optimize the Evaluator’s instructions. We iterate this process until it can produce stable and satisfactory evaluation results.
# 3 DEMONSTRATION FOR GRAPH ANALYTICS PLATFORMS
In this section, we demonstrate our usability evaluation system using graph analytics platforms as an example. Graph analytics platforms have received significant attention from the database community due to their efficiency in data management and analysis, making them ideal candidates for demonstrating our OSSUAgent. The system is built upon our agent-based usability evaluation framework and offers an intuitive graphical user interface. The backend is developed using Python, and the frontend is constructed with JavaScript and the React library. Users only need to provide the GitHub repository URL of the target platform. From there, the system automates the entire evaluation process, including data retrieval, knowledge base construction, code generation, and comprehensive multi-dimensional evaluation.
Although our demonstration focuses on graph analytics platforms, the underlying framework is highly generalizable and can be effectively applied to other open source software, such as databases, machine learning frameworks, and development platforms. Platform Data Retrieval & Knowledge Base Construction. The user first inputs the target evaluation platform’s GitHub URL (Figure 2 $\textcircled{1}$ . The system then automatically fetches basic information, including the README, core API files, and example code. Due to varying naming conventions across platforms, locating core API and example code files directly is challenging. To address this, the LLM-based Researcher agent parses all file paths and filenames to automatically identify the core API and example code files, as shown in Figure 2 $\textcircled{2}$ .
To ensure evaluation fairness and anonymity, the platform and core function names are anonymized. This anonymization process is also conducted by the Researcher, which generates a set of uniform anonymization rules based on the retrieved data (Figure 2 $\textcircled{3}$ . All subsequent processes, including knowledge base construction and code generation, apply these rules consistently. Finally, the anonymized README content and API documentation are segmented, vectorized, and stored in a vector database, forming a structured knowledge base.
Code Generation. Based on the fetched files, our system automatically generates tailored prompts corresponding to four developer experience levels (Junior, Intermediate, Senior, Expert). These prompts guide the Code Generator to generate code that adheres to platform-specific APIs and best practices (Figure $\textcircled{4}$ . The detailed template for these prompts is illustrated in Figure2 $\textcircled{7}$ . The requirements for each experience level are defined as follows:
• Level 1 (Junior). At this level, no specific technical details are provided. The prompt only consists of a description of the task, without any guidance on how to implement it. • Level 2 (Intermediate). This level offers minimal technical information to guide the code generation. Basic prompts are given, including the names of core APIs and parameters. • Level 3 (Senior). This level provides detailed usage instructions for the relevant APIs, including the names of the APIs and parameters, and a detailed introduction to them. In addition, some example code is provided to guide the usage of API functions.
# OSS-UAgent: An Agent-base Usability Evaluation System
https://github.c 1 Junio ready!! ready!!! Senior is ready!! Expert is ready!!! Evaluate Fetching repository information 百 点 Perg □ " \*\*Branch: flash ostrea #include <iostream #include <iostream> Feching REDW.e... la tforstfo #include PplatformAPIhead ceho lder for definitior struct \*srloili constdoubleF.85 std::vector<int> outEdges; intdegree; 5 hi //Ass structGraph oh is defined elsewt mock definitionfor doub1 Degree using usingGraphType=Graph<Vertex>; \*\*PageRank vector<vector<int>> edges; // adjacency ranks t \*\*Betweenness Centrality void initPageRank(Vertex &v) { \*\*LPA \*\*TriangleCounting Evaluation Results ayminda Platform:flash 3 \*\*edgeMapDenseFunction --> linkMapThickFunction Task: PageRank \*\*edgeMapSparseFunction --> linkMapThinFunction \*\*edgeMapFuncti -> linkMapFunction Bul nglga. Result Analysis essegis tructuredttiCstteitt 6 ogic.Cor ectness varies with Code 4 leading inaccuracy of implementation.Overal, readability,highlighti the challenge of maintaining compliance w logical accuracy and clarity. Generate prompts for different experience levels Prompt for generating codes Prompt for evaluating
Prompt: Prompt: Prompt: 9
Generate prompts at different levels for implementing You are a {role} developer tasked with implementing the Evaluate code quality of {algorithm} implementation
{task} on {platform} according to the following codes and {algorithm} using {language}. You have access to a \*\*This is the standard reference code\*\*: {standard_code}
requirements: knowledge base and the provided information. Ignore the \*\*The evaluate code\*\*: {evaluate_code}
Code:{code}; header files and the input/output handling of graph data. \*\*Evaluation Criteria\*\*: You need to evaluate submitted
Requirements: {requirements} \*\*Platform Features\*\*: The platform provides the code against a reference standard, focusing on three key
Output the results in the following format: following key APIs for graph processing: {prompt_level} aspects: deviation from the standard code in the main body "Junior": [the prompt at Junior level], Generate JSON output in the following format: (excluding headers), emphasizing consistency in main "code": "complete code", function usage, where greater deviations result in lower "Expert": [the prompt at Expert level] "explanation": "detailed design " scores (0-100); …
Level 4 (Expert). In addition to the detailed API instructions similar to the previous level, this level also includes the pseudocode of the relevant algorithm.
Once the prompts are generated, the system provides them to the Code Generator, which then produces code implementations corresponding to each experience level. The generated code is displayed in the evaluation interface, allowing direct comparison across different experience levels (Figure 2 $\textcircled{5}$ . The prompt template for generating code is shown in Figure 2 $\textcircled{8}$ . Code Evaluation & Result Presentation. The generated code is assessed against a standard reference implementation based on three key criteria, Compliance, Correctness, and Readability, detailed in Section 2.4. Each generated code is scored based on these criteria, and the results are presented in a visual format for direct comparison across different experience levels (Figure 2 $\textcircled{6}$ . The evaluation results provide insights into the quality of generated code at different experience levels, helping users to clearly understand API usability and developer-friendliness.
To ensure fair and consistent assessment, the system employs a structured prompt to guide the Evaluator during evaluation. The prompt template is illustrated in Figure 2 $\textcircled{9}$ . Based on this prompt, the system generates a detailed evaluation report that highlights key differences across generated code.
# REFERENCES
[1] Morten Sieker Andreasen, Henrik Villemann Nielsen, Simon Ormholt Schrøder, and Jan Stage. 2006. Usability in open source software development: opinions and practice. Information technology and control 35, 3 (2006).
[2] Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937 (2023).
[3] Umer Farooq and Dieter Zirkler. 2010. API peer reviews: a method for evaluating usability of application programming interfaces. In Proceedings of the ACM conference on Computer supported cooperative work. 207–210.
[4] Yuan-Hao Jiang, Jinxin Shi, Yukun Tu, Yizhou Zhou, Wenxuan Zhang, and Yuang Wei. 2024. For Learners: AI Agent is All You Need. (Oct. 2024), 21–46.
[5] Brad A. Myers. 2017. Human-Centered Methods for Improving API Usability. In 1st IEEE/ACM International Workshop on API Usage and Evolution, WAPI@ICSE 2017, Buenos Aires, Argentina, May 23, 2017. IEEE Computer Society, 2.
[6] Brad A. Myers and Jeffrey Stylos. 2016. Improving API usability. Commun. ACM 59, 6 (2016), 62–69.
[7] Marco Piccioni, Carlo A Furia, and Bertrand Meyer. 2013. An empirical study of API usability. In IEEE International Symposium on Empirical Software Engineering and Measurement. IEEE, 5–14.
[8] Irum Rauf, Elena Troubitsyna, and Ivan Porres. 2019. A systematic mapping study of API usability evaluation methods. Comput. Sci. Rev. 33 (2019), 49–68.
[9] Jiayin Wang, Weizhi Ma, Peijie Sun, Min Zhang, and Jian-Yun Nie. 2024. Understanding user experience in large language model interactions. arXiv preprint arXiv:2401.08329 (2024). | Usability evaluation is critical to the impact and adoption of open source
software (OSS), yet traditional methods relying on human evaluators suffer from
high costs and limited scalability. To address these limitations, we introduce
OSS-UAgent, an automated, configurable, and interactive agent-based usability
evaluation framework specifically designed for open source software. Our
framework employs intelligent agents powered by large language models (LLMs) to
simulate developers performing programming tasks across various experience
levels (from Junior to Expert). By dynamically constructing platform-specific
knowledge bases, OSS-UAgent ensures accurate and context-aware code generation.
The generated code is automatically evaluated across multiple dimensions,
including compliance, correctness, and readability, providing a comprehensive
measure of the software's usability. Additionally, our demonstration showcases
OSS-UAgent's practical application in evaluating graph analytics platforms,
highlighting its effectiveness in automating usability evaluation. | [
"cs.SE",
"cs.AI"
] |
# 1 Introduction
The rapid development of Large Language Models (LLMs) has led to an expansion of their applications and effectiveness across various domains [42, 37, 39, 64]. One important area where LLMs have shown impressive results is code translation, including tasks such as code generation from natural languages [61] and transformation between programming languages [60]. In code translation, LLMs have demonstrated remarkable accuracy and readability, often surpassing manually crafted translators.
While LLMs have shown promising results in translating between high-level programming languages [44, 45, 48] and in decompilation tasks [18, 6, 5], their application to translating from high-level languages to low-level assembly languages remains a relatively unexplored area, which is traditionally dominated by handcrafted compilers. However, compilers require significant engineering effort and are tailored to specific languages/architectures. It is interesting to explore whether LLMs can be used to automate the compilation process, and if so, to what extent they can achieve this. Although earlier investigations [3, 21] have shown low translation accuracy, recent work [63] has shown that LLMs finetuned with compiler-generated bilingual corpora can outperform advanced
LLMs in $\scriptstyle \mathbf { C - x } 8 6$ compilation tasks and can achieve up to $91 \%$ behavioral accuracy. However, an in-depth understanding of LLMs’ capabilities in this domain is still lacking. As compilation is typically divided into two main aspects: translation and optimization. This work focuses on exploring and answering about LLM capabilities in the translation aspect of compilation.
LLMs are pre-trained on vastly large code corpora. some are monolingual, and some may be bilingual (where LLMs can learn the translation rule between two languages). However, most of these LLMs do not disclose their training datasets, so their capabilities can only be assessed through empirical testing. We primarily find that current LLMs learn the neural compilation process from directly compiler-generated bilingual corpora, which is an intuitive way to construct a pretraining dataset and teach LLMs to compile. However, we also found that assembly code directly generated by compilers is hard for LLMs to learn due to several challenges. These include the presence of semantically opaque labels, symbols or numeric values that LLMs struggle to translate accurately, and the need to handle symbol renaming for identifiers with the same name in different scopes, etc. Although style migration or modifications to existing compilers can be made, these approaches still rely on an existing compiler to perform the neural compilation job, which doesn’t outperform existing designs.
Our work takes a different approach where we do not require bilingual corpora, as a result, we don’t necessarily rely on an existing compiler. We guide LLMs to transform high-level code into assembly code step-by-step. To achieve this, we first propose an adaptive compilation-knowledge guided LLM workflow, which involves a series of steps with verifications to ensure stepwise correctness, including control flow annotation, struct annotation, renaming transformation, variable mapping transformation and final assembly generation. By splitting the complex compilation task into smaller, manageable steps, we significantly reduce the task complexity in each step by forcing LLMs to focus on certain easier step and achieve substantial improvements in overall accuracy.
More importantly, the scalability of current code translation is an important and challenging problem. Although advanced LLMs already have hundreds of thousands tokens context limit, they can not merely compile a code with $2 . 6 \mathrm { k }$ tokens in CoreMark [19], which is just a 200-LOC function. The major challenges within this significant LLM failure are two-fold: (1) the complexity within each expression/statement, and (2) the complexity of program structures. (1) is continuingly improved with more advanced LLMs or with proper knowledge guidance and is not our research focus. However, (2) is more fundamentally challenging, as a program can be arbitrarily large and complex, and LLMs are not designed to handle such complexity, direct LLM translation is just not scalable.
To tackle this scalability problem, we propose LEGO translation, which draws inspiration from the modular and composable nature of LEGO blocks to divide-and-conquer it. This method breaks down large programs into manageable, semantically-composable control blocks, analogous to LEGO pieces. These blocks are then independently translated and rebuilt to form a full translation in much larger scale.
We combine the novel LEGO translation method with our proposed neural compilation workflow, and design LEGO-Compiler, a scalable, LLM-driven system that leverages the power of LLMs to perform scalable neural compilation tasks. LEGO-Compiler can correctly compile over $9 9 \%$ of the code in ExeBench [4], a large scale dataset with careful unit-testing. We can also correctly compile $9 7 . 9 \%$ of AnsiBench [33], a collection of well-known ANSI C standard benchmark suites, including CoreMark [19], an industrial-grade codebase that encompasses most common programming language features in C, where we compile all of its 40 functions correctly. Regarding scalability, we have verified that LEGO translation method can significantly scale up the capability of neural code translation performed by LLMs. By ablating LEGO-Compiler methods in AnsiBench evaluation and additional Csmith [59] evaluation, a random C code generator for compiler testing, LEGO translation scales up the available code size for neural compilation by near an order of magnitude.
The main contributions of this work are as follows:
• We propose the novel LEGO translation method to scale up the neural compilation task. By breaking down large programs into manageable, semantically-composable control blocks, the complexity of neural compilation tasks for LLMs is significantly reduced. • We propose a novel verifiable step-by-step neural compilation workflow that guides LLMs to transform high-level code into low-level assembly. With breakdown steps, we characterize and evaluate the compilation process in LLM’s perspective, and achieve substantial improvements in behavioral accuracy compared to end-to-end translation.
• We provide both theoretical and empirical studies by formally defining the composability in code translation that underpins the LEGO translation method and empirically demonstrating LEGO-Compiler’s effectiveness through extensive evaluations. LEGO-Compiler achieve over $9 9 \%$ accuracy on ExeBench, $9 7 . 9 \%$ accuracy on AnsiBench. Ablation study also showcases that LEGO translation boosts the scalability of neural-compilable code size by an order of magnitude. The model-independent evaluation process also serves as an important benchmark for LLMs on complex system-level tasks.
# 2 Related Work
# 2.1 Code Translation
Recent neural-based Code Translation researches can be majorly categorized to two types: learningbased transpilers [44, 45, 56] and pre-trained language models [16, 54, 29, 43, 36, 1]. The former majorly studies the scarcity of parallel corpora [58] and develops unsupervised learning methods to overcome it. The latter using Large Language Models’ vast pretrained knowledge, can also perform code translations well without training [60, 25].
As for compilation related translations, [3, 21] preliminarily study on $\scriptstyle \mathbf { C - x } 8 6$ and C-LLVM IR translation with limited investigations on the methods. There are also works on the reverse decompilation process [18, 6, 5] and works on code optimizations [9, 10]. The most related work is [63], which achieves state-of-the-art $91 \%$ Pass $\ @ 1$ accuracy on the $\mathbf { C } \mathbf { - } \mathbf { \mathbf { X } } 8 6$ task using a finetuned CodeLlama model, where our work surpasses. Besides, their approach relies on compiler-generated bilingual corpora, while our methods can effectively eliminate such dependency by reasoning the steps of how a compiler works.
Finally, Modular approach is recognized as the key insight to scale up neural code generation/translation [34, 62, 51, 7]. Our work leverages similar idea of divide-and-conquer to breakdown a large long code into manageable control block parts, then LLMs can translate these parts separately with the aid of necessary context and combine their results into a large, complete and coherent translation.
# 2.2 Other Related Work
LLM self-repair. Recent research has focused extensively on enhancing LLMs’ self-correction capabilities. Several studies closely related to our work deserve mention. A comprehensive survey by [41] thoroughly examined methods for leveraging feedback to autonomously improve LLM outputs. [53] first uses compiler feedback for better code generation, and [15] establishes the syntax-runtime-functional bug type taxonomy and builds corresponding self-repair pipelines for code. Our work is their natural extensions to neural compilation scenario. While [35] investigated the limitations of self-repair mechanisms in code generation, our findings diverge significantly. Contrary to their conclusions, we discovered that self-repair serves as a highly effective solution in the neural compilation process, particularly when incorporating syntax feedback and runtime feedback.
In-context learning and Chain-of-Thoughts. LLMs are able to in-context learn via inference alone by providing few shots of demonstration then predicting on new inputs [31, 13]. Thus customized Chain-of-Thoughts [55, 8] can guide LLMs to perform complicated reasoning [50, 46], which is the cornerstone of our work. More specifically, [23] reveals the degradation of LLMs’ performance for long context, and validate the effectiveness on using Chain-of-Thoughts to mitigate. We found similar results in code translation/compilation tasks. However, our proposed LEGO translations method can significantly mitigate such degradation as it turns a long context direct translation into multiple composable, shorter ones that LLMs can handle.
Generation Scalability and Long Context Learning. Except for code translation, many LLM-based methods suffer scalability problems since larger inputs are not well trained like the smaller ones, which makes general methods to extend LLMs long-context capability challenging. For example, in order to coherently generate long passages of text, [49] proposes a multi-staged keyword-first progressive method to improve it significantly, where our work shares a similar insight. [24] introduces a self-route method to dynamically choose the usage of RAG or fully in-context, balancing the cost and performance in long-context scenario, which inspires us to use a similar dynamic approach.
Plain Translator LEGO Translator LEGO Compiler
c\`io\`nid\`t sqt ican},nt (n{iedng }n,0c;iondte m) { c\`io\`nid\`t sqtr ican},nt (n{iedng }n,0c;iondte pm)a {t by part unu"sn"is.g..ingend $\dot { \textbf { 1 } } = \textbf { \Theta }$ $\dot { \textbf { \mathscr { I } } } = \boldsymbol { \Theta }$ \*x(Au;nshiogrnte bB;lksize, "S"y...)mb{ol Table $\mathrm { ~ i ~ } <$ i = n / m; i = n / m; i:-28(%rbp)unsigned if (n > m / 2){i += 1;} if (n - m \* i > m / 2){i += 1;} 2 \* 4; j:-32(%rbp)unsigned i \*= m; i \*= m; blksize:-36(%rbp)unsigned
} if (neg){i = -i;} return(i); 1-::: if (neg){i = -i;} return(i); 2.Part 1.Part split # block 0 - i-1 3.Part rebuild
\`d\`e\`f dqsuta}nt(n: int, m: int) "-> int: \`d\`e\`f dqsuta}nt(n: int, m: int) 1-.>Pairtn ts:plit translation Block 1 #.L_bloocpk_i1: neg = 0 neg = 0 Block i-1 movl -32(%rbp), %eax if ne <+0=:1 Direct if ne +0=:1 while (j < blksize) { cjmgpe -L3_6l(o%orpb_p1)_,e %deax -n translation n = -n i++; incl -28(%rbp) # i++ if n "// > m "// 2: i n m/ m\* > m // 2: \*B 4o;ck i 2.Part translation misomhvull - 2-e28a(8x(%r%rbbp)p#),,i%\*e%iea\*ax2x # i\*i i \*n=emg: 曲 i n=emg: Block n smholvl \$%2e,a ,%e-a3x #( ir\*bip\*)2#\* j= i = -i i = -i 5 jmp .L_loop_1 return i return i L_loop_1_end: 3.Part rebuild # block i+1 - n (a) (b)
# 3 Methods
# 3.1 Problem Definition
Before introducing our method, we first define the neural compilation problem. Neural compilation can be viewed as a specialized version of code translation problem, as defined in Definition 1, with the goal of translating high-level programming language as the src language (such as C) into lowlevel assembly language as the dst language (such as $\mathbf { \boldsymbol { x } } 8 6$ , ARM, or RISC-V). Unlike general code translation, compilation needs to handle more low-level details, such as memory layout and calling convention, while ensuring the functional correctness of the translated result.
Definition 1. There are two programming languages: $\mathcal { L } _ { s r c }$ and $\mathcal { L } _ { d s t }$ , each is an infinite set of valid program strings. There exists a unary relation ⇀ from $\mathcal { L } _ { s r c }$ to $\mathcal { L } _ { d s t }$ . The problem is to perform $a$ translator function $T$ : $\forall x \in { \mathcal { L } } _ { s r c }$ , $( \exists u \in { \mathcal { L } } _ { d s t } , x u ) \to ( x T ( x ) ) , T ( x ) \equiv x$ semantically.
# 3.2 LEGO Translation: Core Method
As depicted in (a) in Figure 1, previous neural code translation methods typically convert entire programs at the function or file level. While this approach may be effective for smaller programs, it struggles with larger programs due to significant accuracy degradation. These methods translate code at a coarse granularity, making it challenging to translate very long functions using LLMs. This limitation is stark: taking neural compilation as an example, even state-of-the-art LLMs [1, 38], despite possessing context windows potentially spanning hundreds of thousands of tokens (e.g., 128k-200k), demonstrably fail to correctly compile a C function exceeding just $2 . 6 \mathrm { k }$ tokens using direct translation. They could also perform code-snippet level translation, but they lack guidelines and necessary information to compose the code-snippet level results together, and there is also no clear formal proof of the composability of code. Despite these limitations, we observe an inherently composable nature in code. In the context of neural compilation, we propose the following insights to enhance translation scalability:
• Fine-grained translation: Instead of translating an entire program at once, focus on translating smaller code snippets accurately. By ensuring each part is correctly translated, they can be combined to form a semantically equivalent complete translation. • Contextual Awareness: Effective translation of smaller code snippets requires understanding their contextual positioning within the code. This includes recognizing the relationship with preceding and succeeding snippets to maintain semantic coherence.
Figure 2: Neural compilation workflow in LEGO-Compiler. Left figure shows the behavioral verification process with unit-tests. Right figure shows the detailed steps in the workflow, some step is residual that may be skipped as performing such step may be unnecessary for certain input program.
• Symbol Handling: Accurate translation necessitates careful management of symbols (like variable scopes, types, and memory locations) and program constructs within each block to ensure correct mapping to the target architecture’s semantics and preserve functionality.
Inspired by [52], where this process is similar to the destruct and rebuild process of a LEGO toy, we named the fine-grained translation technique as LEGO translation and our system built upon it as LEGO-Compiler. As depicted in (b) in Figure 1, LEGO translation first breaks down large programs into manageable, self-contained blocks analogous to LEGO pieces (Part split). Then these blocks are independently translated (Part translation) and finally recombined, enabling scalable and accurate translation of complex programs (Part rebuild). All these methods rely on an inherent nature in programming languages, the composability in control block level, which reflects the linearization process in compiler design [57], where tree-structured control flow can be linearized, and therefore, composable. We have proved the widely applicable composability of programming languages using a constructive approach in Appendix A.
# 3.3 A Verifiable, Stepwise Neural Compilation Workflow
Directly translating complex high-level code to low-level assembly poses significant challenges for LLMs. To address this, we structure the neural compilation process as a stepwise workflow. This approach decomposes the overall task into a sequence of distinct, more manageable sub-tasks, each designed to be handled effectively by an LLM guided by specific prompts and context.
A key principle integrated throughout this workflow is verifiability. As depicted in Figure 2, by breaking down the complex, hard-to-verify compilation process into smaller steps, we create opportunities to validate the intermediate results of many stages before proceeding. This significantly enhances the reliability of the entire process and contributes to understanding the capabilities and limitations of LLMs in compilation tasks. Verification techniques employed vary depending on certain step and may include static source code analysis (e.g., comparing ASTs), cross-referencing intermediate calculations against compilation-based frontend tools, ensuring behavioral equivalence through execution testing, and potentially applying formal methods like SMT-based checks for specific properties like memory safety.
As depicted in Figure 2, the workflow proceeds through the following major steps:
• Variable Renaming: An initial source-level transformation ensures all variable identifiers within the compilation scope (e.g., a function) have unique names, resolving potential ambiguities from name shadowing and simplifying subsequent mapping. This renaming step is verifiable by executing the original and renamed source code on test cases to ensure behavioral equivalence.
• Type and Layout Analysis: This stage focuses on understanding the program’s data structures. The LLM performs a structured reasoning process for compound types (structs, unions, arrays) to determine their memory layout (size, alignment, member offsets) based on target architecture conventions and constituent basic types. The correctness of this analysis is verifiable by cross-referencing the inferred type sizes and offsets against outputs from standard development tools like Clangd [27] or IntelliSense [30].
Variable Mapping and Allocation: This step identifies all variable instances and determines the correspondence between high-level variables and their low-level assembly representations. Global variables are mapped to labeled memory, while local variables are assigned stack offsets relative to a base pointer. Access to compound type elements uses calculated offsets adhering to conventions like the System V ABI [47]. Verification for this stage can involve checks using techniques like SMT-based verification [11] to detect potential memory allocation issues (e.g., overlaps, out-of-bounds accesses) based on the derived allocation plan.
• Part Split (Control Flow Decomposition): Leveraging the LEGO translation principle, this step decomposes the input function into smaller, manageable control blocks. It analyzes the program’s control flow graph (CFG) and uses an LLM-driven process to perform an adaptive algorithm in Algorithm 2 to decide where to split, aiming for semantically coherent units suitable for independent translation. The structural integrity of the split is verifiable by ensuring that the CFG formed by recombining the split blocks (before translation) is isomorphic to the original function’s CFG, or more simply, immediate recombination.
• LEGO Part Translation: Each control block generated by the Part Split step is translated independently by the LLM into assembly code. The LLM receives the source code for the block along with relevant context derived from previous steps, such as the established variable mappings and type layouts.
• Part Rebuild and Final Verification: The translated assembly blocks are reassembled according to the original control flow structure, typically, for two adjacent code blocks, the assembling operation is just concatenation as assembly language is linearized. The functional correctness of the final, combined assembly code generated by the entire workflow (Split, Translate, Rebuild) is verified through behavioral equivalence checks against the original source code, implemented using unit-tests.
Finally, LEGO-Compiler integrates a self-correction loop with error feedback as a final quality check after full translation. This mechanism detects residual errors using the assembler (semantic errors), runtime execution/debuggers (runtime errors), and behavioral testing via unit tests (behavioral errors). Diagnostic information is fed back to the LLM to iteratively refine the generated assembly. This self-correction process is crucial for enhancing the robustness and accuracy of the LLM-based compilation system. As LLMs are non-deterministic and may exhibit trivial errors, where the feedback loop is essential for correcting these errors.
# 4 Experiments
# 4.1 Experimental Setup
Major parameters we have tested are listed below: note that not all combinations of experimental settings are tested due to resource constraints.
• Models: We select a variety of state-of-the-art LLMs from different vendors, including OpenAI’s newest GPT-4.1 and its mini version [40], Anthropic’s Claude-3.5-sonnet [1] and Claude-3.7-sonnet [2], Deepseek’s Deepseek-V3 and its newest 0324 version [12], and Google’s Gemini-2.0-flash and Gemini-2.5-pro [20]. We select models pairwisely to illustrate the model-side improvement on the neural compilation task. • Benchmarks: We majorly test on ExeBench [4], a large-scale dataset of executable C programs, additionally we use AnsiBench [33] and Csmith-generated programs [59] as case studies. Technically, we use ExeBench’s Real-Executable subset, initially containing over $4 0 \mathrm { k }$ cases, after data cleaning and removing cases uncompilable by an oracle compiler, we finally obtain a 17,121 cases testset of ExeBench, which is too large for full scale evaluation. We further filter two subsets of ExeBench for evaluation, we filter a hard-cases subset of 1,996 samples based on the number of basic blocks and instructions within these blocks using the LLVM toolchain [28], shown in Figure 5, and the other is a randomly selected subset with 2000 cases, which is used for comparison and ablation studies.
• Temperature: 0.0-1.0, with 0.2 step increments • Architecture: x86_64, arm-v8a, riscv64, majorly on x86
Table 1: ExeBench (17,121 cases) experimental results with method-level ablation on $\scriptstyle \mathbf { C - x } 8 6$ neural compilation task compared to previous state-of-the-art [63], the base model performs similarly, but with our proposed methods, we achieve substantial accuracy improvement on the full dataset. Except DS-V3-0324 model, other models are evaluated on a subset of 2000 cases due to budget constraints, DS-V3-0324 is evaluated on both datasets and performs nearly identical $( < 0 . 2 \% )$ .
Table 2: ExeBench Hard Subset (1996 cases) evaluation: The applied filters are based on nums of basic blocks(10), and max(80)/all(200) instructions within these blocks analyzed by LLVM toolchain [28], Detailed characterization of ExeBench and its hard subset is on Figure 5.
# 4.2 ExeBench Evaluation
Recall Figure 2, we evaluate ExeBench through the following setup:
1. Translate the C program to assembly (to generate hypothesis), where we have three different methods: direct translation(as baseline), stepwise workflow translation and LEGO translation. Note that some steps in the workflow is necessary for the LEGO translation method, as a global context is needed.
2. Assemble and link the hypothesis assembly to create an executable.
3. Run the executable through 10 different IO test cases provided by ExeBench.
4. Consider the translation successful if it passes all test cases.
5. If a translation fails (at any former step), apply self-fixing with the collected error feedback to the LLM, will try k rounds. We set k to 5 during the evaluation.
6. Consider the translation failed if it doesn’t pass after all configured attempts.
Table 1 and Table 2 summarize the empirical results of our LEGO-Compiler on ExeBench targeting $\mathbf { x } 8 6 - 6 4$ architecture. We establish carefully crafted 1-shot prompts to guide LLMs for each translation method. With our proposed methods, all models achieve substantial improvements, where newest models’ accuracy on the whole dataset/hard subset reaches averagely $9 9 . 5 6 \%$ and $9 8 . 6 5 \%$ . Claude-3.7- sonnet and Gemini-2.5-pro models are the best performing models during our evaluation, achieving over $9 9 \%$ accuracy on the hard subset. We analyze the ablated results as follows:
• Step-by-step workflow improves the translation majorly related to complex data structures and many variable assignments, where direct translation may fails to handle such complexity all together.
Table 3: ExeBench’s hard subset evaluations with different architectures with Gemini-2.5-pro.
• LEGO translation majorly improves lengthy code translation with multiple control statements, where the model struggles to keep track of the context and the correct label usage without our divide-and-conquer methodology.
• Self-correction fixes most trivial errors related to architecure-specific knowledge, and improves at all methods as it is orthogonal. Two major types of observed errors are: 1) misuse of instruction operands, like ‘cmp’ instructions cannot compare two immediate values or two memory values; 2) mnemonics-related, like access global variables or values stored in data section, LLMs need to generated $\%$ rip relative addressing operands instead of direct label usage. Taking Deepseek-V3-0324 as an example, 1) and 2) account for 26 and 40 failed cases in its 172 failed cases during CoT workflow evaluation.
Except $\mathbf { \Delta x } 8 6$ evaluation, we also evaluate LEGO-Compiler on arm64 natively on Apple M1 chip and riscv64 through Spike simulator. Due to time and budget constraints, we only evaluate the hard subset as it contains more challenging cases. As depicted in Table 3, the evaluation results are similar to $\mathbf { x } 8 6$ , where our LEGO-Compiler powered by Gemini-2.5-pro achieves similar improvements with our proposed methods. The globally lower accuracy may due to lower pretrained knowledge of these assembly languages in LLMs, or insufficient prompt engineering efforts as we have not thoroughly tested prompts usage on these architectures. It is also an interesting work to automate the prompt engineering process for different architectures to inject compilation-related knowledge to the translation process, which we leave for future work.
Another finding is observed through pairwise comparison of models, where we find clear improvement of newer/larger models over older/smaller models. We analyze the reasons as two-fold: First, more advanced new models are pretrained with more compilation-related knowledge, which helps the translation of certain expressions and statements. Second, newer models are more capable of reasoning, which is critical for the workflow translation and LEGO translation methods.
To sum up, the empirical results of our LEGO-Compiler system is promising, we prove a training-free approach to use LLMs as neural compilers, which can successfully translate averagely $9 9 . 5 6 \%$ of ExeBench testset and $9 8 . 6 5 \%$ of its hard subset across advanced LLMs from 4 state-of-the-art vendors. The model-independent evaluation process also establishes a challenging benchmark for LLMs, which requires 3 key capabilities: 1) mathematical reasoning and long-context reasoning 2) code/assembly understanding and translation, 3) error localization and correction.
# 4.3 AnsiBench: more real-world codebase evaluation
We conduct additional real-world codebases evaluation, we use AnsiBench [33], a collection of well-known ANSI C standard benchmark suites [22, 14, 19], benchmarking a wide variety of systems and compilers, including a number of classic, industry-standard benchmarks as well as some selected programs that can be used as benchmarks.
We evaluate the whole AnsiBench collection with our LEGO-Compiler, powered by Claude-3.7- Sonnet, our best-performing model in previous evaluation. We list the details of every function we compiled in Figure 3, totally we have 96 functions in total, except for few utility functions which are easy to compile, most of them represent real-world codebase complexity. We ablate the translation methods we applied to showcase both the effectiveness of CoT-like workflow and LEGO translation. In total, we pass 94 out of 96 cases in Ansibench across 7 different codebases, including Whetstone, Dhrystone, Hint(one failure), Linpack, Tripforce(one failure), Stream and CoreMark.When measured by token count, LEGO translation method significantly improves the translation scalability of real-world code by near an order of magnitude as illustrated in Figure 3.
There are majorly three types of errors where the first two types are where LEGO translation is superior.
Figure 3: AnsiBench evaluation results with Claude-3.7-Sonnet. The token count only computes the input length of C code, and typically, the output assembly will be 3-6 times larger in token size.
• Lengthy code input with over a thousand token size (typically), where the output size is truncated due to limited model output length. Besides, the coarse-grained translation itself is prone to bugs. LEGO translation method can significantly reduce such errors, the case in which LEGO translation also fails is the main function of Hint benchmark, which is even more complex than the main function of CoreMark depicted in Figure 6. We analyze its failure, where the LLM-reasoning step of the stack allocation fails to generate a correct mapping. Despite this, LEGO translation handles all the other lengthy code correctly as it successfully reduces the translation complexity to control-block level. • Long context forgetting problem [26], where the model can not match the current processing assembly with the source code faithfully, LEGO translation method, on the other hand, can handle these cases efficiently with less unnecessary contexts that may cause these ‘random’ errors. Besides, finer-grained translation also gives LLMs more attention to faithful translation of operations, the order of operations and implicit conversions. • Insufficient pretraining in LLMs, where LLMs lack certain knowledge to perform certain expression/statement translation or other architecture-specific details. For example, the other error in AnsiBench, the generate_password function in TripForce, where the translation fails to translate the multiline strings correctly. Feedback correction can mitigate such failures. Besides, a clear model-level improvement is observed by all model pairs, and we can be positive about these failures because as LLMs advance with more pretrained knowledge, their peformance will improve as well.
# 4.4 Csmith: randomly generated programs evaluation
Except for AnsiBench evaluation. We further perform evaluations on randomly generate programs with sufficient complexity. We use Csmith [59], a random generator of C programs which is widely used for finding compiler bugs using differential testing as the test oracle. Typically, Csmith examines compilers with random programs with corner case features and numbers, testing the robustness of compilers. Code examples generated from Csmith are illustrated in Figure 7.
We follow similar ablation strategy in AnsiBench evaluation. As depicted in Figure 8, randomly generated programs by Csmith are very hard for both baseline and CoT-only methods to translate. In a randomly generated test suite of 40 cases generated by Csmith, LEGO translation successfully compiles 25 cases, while baseline translation can only compiles 4 cases, and 13 cases for CoT workflow. Besides, the complexity of cases passed by LEGO translation method are significantly larger than others, characterized by token count, basic block count and total instructions, LEGO translation scales in code size and complexity by near an order of magnitude.
During Csmith evaluation, we also identify several kinds of errors during LEGO-Compiler translation. For example, overflow value assignment is an error which doesn’t occur usually but can be found in compiler testing. Taking int16_t $\textbf { x } = ~ 0 \mathbf { x } 5 6 6 7 1 4 8 5$ ; as an example, it will trigger errors because LLMs directly generate movw $\$ 023,456,71485$ , $\mathbf { x } \prime \mathbf { s }$ address in $\mathbf { \boldsymbol { x } } 8 6$ , which fails to check whether the numerical value (overflows the 16 bit word) can be represented through movw instruction.
Another example is, when handling with implicit type conversions, LLMs may not cast the type correctly, this is critical for floating point computation as operations with wrong precision will cause accumulated numerical errors. As a result, LEGO-Compiler in Csmith evaluation only achieves moderate behavioral accuracy of $6 2 . 5 \%$ .
However, Csmith-generated test cases do not commonly appear in real-world usages. Therefore, LEGO-Compiler is still promising in compiling common programs as evaluated by ExeBench and AnsiBench evaluation, indicating great potentials in the field of neural compilations. | Large language models (LLMs) have the potential to revolutionize how we
design and implement compilers and code translation tools. However, existing
LLMs struggle to handle long and complex programs. We introduce LEGO-Compiler,
a novel neural compilation system that leverages LLMs to translate high-level
languages into assembly code. Our approach centers on three key innovations:
LEGO translation, which decomposes the input program into manageable blocks;
breaking down the complex compilation process into smaller, simpler verifiable
steps by organizing it as a verifiable LLM workflow by external tests; and a
feedback mechanism for self-correction. Supported by formal proofs of
translation composability, LEGO-Compiler demonstrates high accuracy on multiple
datasets, including over 99% on ExeBench and 97.9% on industrial-grade
AnsiBench. Additionally, LEGO-Compiler has also acheived near one
order-of-magnitude improvement on compilable code size scalability. This work
opens new avenues for applying LLMs to system-level tasks, complementing
traditional compiler technologies. | [
"cs.PL",
"cs.AI",
"cs.SE"
] |
# 1 INTRODUCTION
Recent reinforcement learning methods for language models (LMs), particularly those using verifiable rewards (RLVR; Lambert et al., 2024), typically rely on signals that reflect output accuracy to guide training. These approaches have proven effective in enhancing reasoning by reinforcing correct outputs and discouraging incorrect ones (Guo et al., 2025). However, as training progresses under purely accuracy-driven objectives, these benefits often diminish. LMs tend to converge on narrow and over-optimized behaviors, gradually losing their incentive to explore alternative strategies. This lack of exploration weakens the model’s capacity for sustained, multi-step reasoning, causing performance to plateau or even regress, especially in complex or underspecified settings (Yu et al., 2025; Cui et al., 2025b).
In traditional RL, exploration plays a vital role alongside exploitation by encouraging the policy model to explore alternative strategies and avoid overfitting. A common metric for measuring exploration is entropy, which quantifies uncertainty in the policy’s action distribution (Haarnoja et al., 2018; Ziebart et al., 2008). Motivated by this, we investigate the relationship between entropy and exploratory reasoning in LMs, and uncover strong correlations: (1) Pivotal tokens that guide or connect reasoning steps—such as first, because, and however—consistently exhibit higher entropy; (2) Reflective actions (Shah et al., 2025), such as self-verification and error correction, tend to emerge under high-entropy conditions; (3) During RL training, rare or under-explored solutions also coincide with elevated entropy. Together, these findings suggest entropy can be a valuable signal for recognizing exploratory reasoning behaviors in LMs.
Based on these findings, we propose incorporating entropy as an auxiliary term to encourage exploratory reasoning during RL training. While traditional maximum entropy methods encourage exploration by promoting uncertainty (O’Donoghue et al., 2016), our approach takes a different path to balance exploration and exploitation: we introduce a clipped, gradient-detached entropy term into the advantage function of standard RL algorithms. Clipping ensures that the entropy term neither dominates nor reverses the sign of the original advantage, while gradient detachment preserves the original optimization direction. This design amplifies exploratory reasoning behaviors that emerge under uncertainty while maintaining the original policy gradient flow. Moreover, because of the intrinsic tension between entropy and confidence, the entropy-based term naturally diminishes as confidence increases—encouraging exploration in early stages while avoiding over-encouragement as training progresses. Furthermore, our method is extremely simple, requiring only one extra line of code to seamlessly integrate into existing RLVR training pipelines (Sheng et al., 2024).
We validate our method on mainstream RLVR algorithms, GRPO (Shao et al., 2024) and PPO (Schulman et al., 2017b), and observe several distinct benefits. First, it amplifies exploratory reasoning behaviors—such as the use of pivotal tokens and reflective actions—by decreasing the policy’s uncertainty at these key decision points. Second, it encourages the generation of longer, more exploratory responses without increasing the repetition rate, enabling coherent multi-step reasoning. On challenging benchmarks, beyond average accuracy, we further evaluate LMs using $P a s s @ K { \mathrm { - } } { \mathrm { a } }$ metric recently regarded as an upper-bound estimator of a LM’s reasoning capability (Yue et al., 2025a). $P a s s @ K$ measures the probability that an LM can solve a problem within $K$ attempts, reflecting its potential for multi-try reasoning (Chen et al., 2021). Our method yields substantial gains even with large $K$ values, pushing the boundaries of LM reasoning.
In summary, the key contributions of this work are as follows:
• We investigate and reveal a strong correlation between entropy and exploratory reasoning in LMs, showing that pivotal tokens, reflective actions, and rare behaviors emerge with higher entropy. • We propose a minimal yet effective method that augments the standard RL advantage with a clipped, gradient-detached entropy term, encouraging exploration by fostering longer and deeper reasoning chains while preserving the original policy optimization direction. • We validate our approach on mainstream RL algorithms: GRPO and PPO, achieving substantial improvements on the $P a s s @ K$ metric and pushing the boundaries of LM reasoning capabilities.
# 2 PRELIMINARY ANALYSIS: ENTROPY AND EXPLORATORY REASONING
We examine entropy—a core signal of exploration in RL (Schulman et al., 2017a; Haarnoja et al., 2018; Nachum et al., 2017)—and its relationship with exploratory reasoning in LMs. We begin by
lPoigvicoatlacloTnonkecetnors: quations with logarithms: 1.2 Ptiovkoetnal 1.2 Reafclteicotinve 0.8 beRharveior $l o g _ { x } ( y ^ { x } ) = l o g _ { y } ( x ^ { 4 y } ) = 1 0$
First, Because… iFdierntsiti,else:t's simplify each equation through the application of logarithmic 1.0 Other 1.0 Other Other $1 . \ l o g _ { x } ( y ^ { x } ) = x l o g _ { x } ( y ) = 1 0 \qquad \ 2 . \ l o g _ { y } ( x ^ { 4 y } ) = 4 l o g _ { y } ( x ) = 1 0$ token action 0.6
Reflection behavior
saenldf-vceorirfeicatition Tofhilsicnoenarverqtsuoautrilongsa: r…i thmic system into the following system 0.8 Recall that !"#\$ + = $\begin{array} { r } { l o g _ { a } ( b ) = \frac { 1 } { l o g _ { b } ( a ) ^ { \prime } } } \end{array}$ hence we rewrite the system: … 0.4 0.6 0.6
Rare behavior From the first equation: $x =$ 10, …
under-explored Substitute ,+" into & = 10,: … Thus, &\$ = 25.
solutions Let’s verify if this is correct using Python: … Answer: 25 0.4 0.4 0.2
Entropy Visualization: Tokens in Bold Show Higher Entropy Entropy Comparison: Exploratory Reasoning vs. Others
Figure 2: Entropy Visualization and Comparison between Exploratory Reasoning and Others. We categorize tokens/actions/behaviors based on their role in the reasoning process. In the visualization, tokens with higher entropy appear in bold and larger sizes, with colors denoting different reasoning roles. In the comparison, we show average entropy values across different categories.
Figure 3: Behavior Clustering. t-SNE projection of response embeddings. Base denotes the pre-RL model outputs; RL Other and RL Rare represent common and rare behaviors after RL, respectively.
visualizing token-level entropy in the responses of Qwen2.5-Base-7B (Yang et al., 2024a) on mathematical reasoning tasks (MAA, 2025). As shown in Figure 2, we observe that high-entropy tokens consistently correspond to different reasoning dynamics compared to low-entropy ones. Based on this observation, we categorize exploratory reasoning-related content—including both tokens and sentences—to support the following analysis1.
Pivotal Tokens Figure 2 shows that pivotal reasoning tokens (e.g., first, recall, thus) tend to have higher entropy. These tokens serve as logical connectors, marking decision points where the model determines the flow and structure of reasoning. To quantify this observation, we compute the average entropy of commonly occurring pivotal tokens across all responses and compare it to that of the remaining tokens. These include causal terms (e.g., because, therefore), contrastive markers (e.g., however, although), sequential terms (e.g., first, then), and reasoning verbs (e.g., suggest, demonstrate). Results on the right of Figure 2 confirm a statistically significant increase in entropy for these pivotal tokens. Similar observations have also been noted in concurrent work (Wang et al., 2025a; Qian et al., 2025), where such tokens are referred to as forking tokens or information peaks.
Reflective Actions Reflection is a form of meta-cognition that involves examining generated information, evaluating the underlying reasoning, and adapting future behavior accordingly (Shah et al., 2025). In this work, we focus on self-reflection, where the model assesses and comments on its own outputs. This is illustrated in the visualization in Figure 2, where the LM assigns higher entropy to
sentences such as “Let’s verify if this is correct...”
To quantify this behavior, we segment each response into sentences, compute the average entropy for each one, and use regular expressions to identify reflective actions—specifically, sentences containing keywords such as “verify” or “check”. As shown in the comparison in Figure 2, these reflective sentences consistently exhibit higher average entropy, suggesting that self-reflection tends to occur under greater uncertainty. To the best of our knowledge, this is the first analysis linking entropy to self-reflection in LMs.
100 Base RL Other .
888
-50
-100 -100 -50 0 50 100 t-SNE1
Rare Behaviors Emergent During RL We further examine whether under-explored or emergent behaviors—those rarely exhibited by the base model—are associated with distinct entropy patterns during RL. In the visualization (Figure 2), such behaviors include converting logarithmic systems into systems of linear equations, which are less frequently observed in the base model’s outputs. To quantify this, we perform RL training on the base model (see Section 4 for configurations), and define rare behaviors as sentences that semantically isolated from the base model’s output distribution. We embed all response sentences using SBERT (Reimers & Gurevych, 2019) and, for each RL-generated sentence, compute the average distance to its $k = 5$ nearest base-model neighbors. Sentences in the top $10 \%$ of this distance metric are labeled as rare. Behavior clusters are visualized in Figure 3. As shown in the comparison in Figure 2, these rare behaviors exhibit higher entropy, revealing a strong correlation between semantic novelty and predictive uncertainty.
# 3 METHOD
Our analysis reveals a strong correlation between entropy and exploratory reasoning in LMs, motivating us to actively encourage high-entropy actions during training. To this end, we propose an advantage shaping method that augments the per-token advantage with a term based on its entropy. This entropy-based term serves as a robust, self-regulating signal that guides learning without altering the original gradient flow of the base RL algorithm.
Let $q$ denote a question sampled from a dataset $\mathcal { D }$ , and let $o = \left( o _ { 1 } , o _ { 2 } , \ldots , o _ { | o | } \right)$ be the corresponding output response generated by a policy model $\pi _ { \boldsymbol { \theta } }$ . Our method is compatible with mainstream policy optimization algorithms such as Proximal Policy Optimization (PPO; Schulman et al., 2017b) and Group Relative Policy Optimization (GRPO; Shao et al., 2024). We begin by briefly reviewing these methods before introducing our entropy-based advantage shaping method.
# 3.1 RL BASELINES: PPO AND GRPO
Proximal Policy Optimization (PPO) PPO optimizes the policy by maximizing the following clipped surrogate objective:
$$
\mathcal { T } _ { \mathrm { P P O } } ( \theta ) = \mathbb { E } _ { q \sim \mathcal { D } , \ o \sim \pi _ { \theta _ { \mathrm { o d d } } } ( O | q ) } \left\{ \sum _ { t = 1 } ^ { | \varrho | } \operatorname* { m i n } \left[ \rho _ { t } ( \theta ) A _ { t } , \mathrm { c l i p } ( \rho _ { t } ( \theta ) , 1 - \epsilon _ { \mathrm { l o w } } , 1 + \epsilon _ { \mathrm { h i g h } } ) A _ { t } \right] \right\} ,
$$
where $\begin{array} { r } { \rho _ { t } ( \theta ) = \frac { \pi _ { \theta } \left( o _ { t } | q , o < t \right) } { \pi _ { \theta _ { \mathrm { o l d } } } \left( o _ { t } | q , o < t \right) } } \end{array}$ denotes the likelihood ratio between the current and old policy models, and $A _ { t }$ is the advantage, typically computed using Generalized Advantage Estimation (GAE; Schulman et al., 2015. We omit the length normalization term, as our implementation adopts a token-level policy loss without per-response normalization. The loss is averaged across all tokens in a training batch to mitigate implicit length bias Liu et al., 2025; Zeng et al., 2025. The clipping range $\epsilon _ { \mathrm { l o w } }$ and $\epsilon _ { \mathrm { h i g h } }$ stabilizes policy updates by preventing excessively large changes. While standard PPO uses symmetric clipping (i.e., $\epsilon _ { \mathrm { l o w } } = \epsilon _ { \mathrm { h i g h } }$ ), recent work (Yu et al., 2025) suggests that slightly increasing $\epsilon _ { \mathrm { h i g h } }$ can help avoid entropy collapse.
The gradient of the PPO objective is (we omit min and clip operations under the single-update-perrollout assumption (Shao et al., 2024)):
$$
\nabla _ { \theta } \mathcal { J } _ { \mathrm { P P O } } ( \theta ) = \mathbb { E } _ { q \sim \mathcal { D } , o \sim \pi _ { \theta _ { \mathrm { o l d } } } ( O | q ) } \left[ \sum _ { t = 1 } ^ { \lfloor o \rfloor } A _ { t } \nabla _ { \theta } \log \pi _ { \theta } ( o _ { t } \mid q , o _ { < t } ) \right] .
$$
Group Relative Policy Optimization (GRPO) GRPO is an alternative to GAE-based PPO that avoids learning a separate value function by using the average reward of multiple sampled outputs, produced in response to the same question, as the baseline. Formally, for each question $q$ , a group of $G$ outputs $\left\{ o _ { 1 } , o _ { 2 } , \ldots , o _ { G } \right\}$ is sampled from the old policy $\pi _ { \theta _ { \mathrm { o l d } } }$ , a reward model is then used to score the outputs, yielding $G$ rewards $\{ r _ { 1 } , r _ { 2 } , \dots , r _ { G } \}$ correspondingly. These scores are then normalized as:
$$
\widetilde { r } _ { i } = \frac { r _ { i } - \operatorname* { m e a n } ( \{ r _ { 1 } , r _ { 2 } , . . . , r _ { G } \} ) } { \mathrm { s t d } ( \{ r _ { 1 } , r _ { 2 } , . . . , r _ { G } \} ) } .
$$
Recently, GRPO has been widely used in outcome-supervised settings (Guo et al., 2025), where the normalized reward is assigned at the end of each output $o _ { i }$ , and every token in $o _ { i }$ receives the same advantage, i.e., $A _ { i , t } = \tilde { r } _ { i }$ . The policy is then optimized using the PPO objective in Equation 2 with these group-relative advantages. A KL penalty term between the trained policy and a reference policy may be added to the loss (Schulman, 2020).
# 3.2 ENCOURAGING EXPLORATORY REASONING VIA ENTROPY-BASED ADVANTAGE
Entropy-Based Advantage Shaping To encourage exploratory reasoning, we propose an entropy-guided advantage shaping method. The key idea is to inject an entropy-based term into the advantage function during policy optimization.
For each token $o _ { t }$ in an output $o$ , the entropy of the current policy over the vocabulary $\nu$ is:
$$
\mathcal { H } _ { t } = - \sum _ { v \in \mathcal { V } } \pi _ { \theta } ( v \mid q , o _ { < t } ) \log \pi _ { \theta } ( v \mid q , o _ { < t } ) .
$$
We then define an entropy-based advantage term $\psi ( \mathcal { H } _ { t } )$ and use it to shape the advantage:
$$
\psi ( \mathcal { H } _ { t } ) = \operatorname* { m i n } \left( \alpha \cdot \mathcal { H } _ { t } ^ { \operatorname* { d e t a c h } } , \frac { | A _ { t } | } { \kappa } \right) , \quad \mathrm { w h e r e ~ } \alpha > 0 \mathrm { ~ a n d ~ } \kappa > 1 ,
$$
$$
A _ { t } ^ { \mathrm { s h a p e d } } = A _ { t } + \psi ( \mathcal { H } _ { t } ) .
$$
Here, $\alpha$ is the scaling coefficient, and $\kappa$ controls the clipping threshold. This clipping ensures that the entropy-based term $\begin{array} { r } { \psi ( \mathcal { H } _ { t } ) \leq \frac { | A _ { t } | } { \kappa } } \end{array}$ , so it does not dominate the advantage. Moreover, when $A _ { t } < 0$ , this constraint ensures that adding the entropy-based term does not reverse the sign of the advantage—thus preserving the original optimization direction. Crucially, the entropy term $\mathcal { H } _ { t } ^ { \mathrm { d e t a c h } }$ is detached from the computational graph during backpropagation, acting as a fixed offset to the original advantage. This adjusts the magnitude of the update without altering the gradient flow. As a result, the policy gradient retains a format similar to that of PPO in Equation 2, where only the advantage $A _ { t }$ is replaced with the shaped one:
$$
\nabla _ { \theta } \mathcal { I } _ { \mathrm { P P O } } ^ { \mathrm { s h a p e d } } ( \theta ) = \mathbb { E } _ { q \sim \mathcal { D } , o \sim \pi _ { \theta _ { \mathrm { o d d } } } ( O | q ) } \left[ \sum _ { t = 1 } ^ { | o | } ( A _ { t } + \psi ( \mathcal { H } _ { t } ) ) \nabla _ { \theta } \log \pi _ { \theta } ( o _ { t } \mid q , o _ { < t } ) \right] .
$$
Our shaping method can be seamlessly integrated into existing RL training pipelines using only a single line of code. Specifically, after computing the advantages with PPO or GRPO, we add the entropy-based advantage term before calculating the policy loss, as follows2:
Entropy-Based Advantage Shaping (PyTorch Implementation)
# Compute advantages as in PPO or GRPO
adv $\mathbf { \Psi } = \mathbf { \Psi }$ compute advantages(...)
# Apply entropy-based term for advantage shaping
adv $+ =$ torch.min(alpha $\star$ entropy.detach(), adv.abs()/kappa)
# Use the shaped advantages to compute the policy loss
loss $\mathbf { \Sigma } = \mathbf { \Sigma }$ compute policy loss(adv, ...)
Robustness of Entropy-Based Advantage: Avoiding Over-Encouragement Prior work (Chen et al., 2025) attempts to enhance reasoning by rewarding the policy based on the frequency of reasoning-like tokens, but this leads to reward hacking—the policy model repeatedly generates such tokens to exploit the reward without performing true reasoning. In contrast, our method naturally avoids such over-encouragement due to the intrinsic tension between entropy and confidence. As shown in Figure 4, our method initially assigns high advantage to tokens with high-entropy distributions but gradually reduces the entropy-based advantage as model confidence increases over training iterations.
Formally, let $k$ denote the training iteration and $t$ denote the token position within the output response. The policy model parameters are updated via gradient ascent:
$$
\theta _ { k + 1 } = \theta _ { k } + \eta \nabla _ { \theta } \mathcal { I } ( \theta _ { k } ) ,
$$
where $\eta$ is the learning rate, and the policy gradient $\nabla _ { \boldsymbol { \theta } } \mathcal { I } ( \boldsymbol { \theta } _ { k } )$ (Equation 7) uses the shaped advantage Askh,taped = Ak,t + ψ(Hk,t) which is positively correlated with the detached entropy $\mathcal { H } _ { k , t } ^ { \mathrm { d e t a c h } }$ (Equation 5). When the original advantage $A _ { k , t } > 0$ , higher entropy leads to a stronger update on the selected token $o _ { t }$ , largely increasing its likelihood $\pi _ { \theta } ( o _ { t } \mid \cdot )$ and thus sharpening the output distribution. According to the entropy definition in Equation 4, a sharper distribution lowers entropy, which in turn reduces the entropybased advantage $\psi ( \mathcal { H } _ { t } )$ and weakens subsequent updates. This self-regulating effect is empirically validated in Figure 7.
Figure 4: Dynamics of Entropy-Based Advantage. High entropy initially largely amplifies the advantage, accelerating confidence gain and leading to reduced entropy-based shaping in subsequent steps.
Comparison with Entropy Regularization In traditional RL, it is common to add an entropy regularizer to the gradient to prevent the policy from becoming overly deterministic (O’Donoghue et al., 2016). To clarify the distinction between our method and entropy regularization, we present a comparison in Table 1.
Entropy regularization explicitly adds an entropy term $\textstyle \sum _ { t } { \mathcal { H } } _ { t }$ to the objective, scaled by a coefficient $\beta$ . Since $\mathcal { H } _ { t }$ depends on the current policy $\pi _ { \boldsymbol { \theta } }$ , this introduces an additional gradient component $\nabla _ { \boldsymbol { \theta } } \mathcal { H } _ { t }$ , encouraging higher-entropy policies during training.
In contrast, our method modifies the advantage function by adding a clipped entropy term $\mathcal { H } _ { t } ^ { \mathrm { d e t a c h } }$ , which is detached from the computation graph. As a result, $\nabla _ { \boldsymbol { \theta } } \mathcal { H } _ { t } ^ { \mathrm { d e t a c h } } = 0$ , and the entropy term influences optimization only through the adjusted advantage values. Thus, our method preserves the original RL optimization dynamics. This makes it fundamentally distinct from—and even orthogonal to—entropy regularization.
Table 1: Comparison of gradient behavior between entropy regularization and our entropy-based advantage shaping. We present simplified expressions that omit PPO’s min/clip operations and batch normalization. $\mathcal { I } _ { \mathrm { P P O } } ( A _ { t } ^ { \mathrm { s h a p e d } } )$ denotes the PPO objective computed with shaped advantages.
# 4 EXPERIMENT SETTINGS
Backbone Models We conduct experiments on two base models: the general-purpose Qwen2.5- Base-7B (Yang et al., 2024a) and its domain-adapted variant Qwen2.5-Math-Base-7B (Yang et al., 2024b). We also initially attempted RL training from Llama-series LMs (AI $@$ Meta, 2024) using vanilla GRPO, but observed that the LMs abandoned intermediate reasoning chains within just a few training iterations. This observation aligns with (Gandhi et al., 2025) that Llama LMs inherently lack reasoning behaviors and likely require pre-training on reasoning traces prior to RL training.
RL Training Configuration Our training data are sourced from DAPO (Yu et al., 2025). We use output reward that assigns $+ 1$ for correct final answers and -1 otherwise. We conduct experiments on
GRPO and PPO using the veRL framework (Sheng et al., 2024). To build strong baselines, we adopt several techniques from (Yu et al., 2025) and (Yue et al., 2025b), including Clip-Higher, Tokenlevel Loss, Critic-Pretraining, and Group-Sampling. Detailed hyperparameters are in Appendix B. Building on these RL baselines, we apply our proposed entropy-based advantage. We fix $\kappa$ to 2 throughout all experiments, and set $\alpha$ to 0.4 for GRPO and 0.1 for PPO.
Evaluation Benchmarks and Metrics We evaluate on AIME 2025/2024 (MAA, 2025), AMC 2023 (MAA, 2023) and MATH500 (Hendrycks et al., 2021), using a rollout temperature of 0.6, a maximum response length of $8 K$ tokens, and top- $p$ sampling with $p = 0 . 9 5$ . Each dataset is evaluated multiple times, and we report the average accuracy. Following (Yue et al., 2025a), we also assess reasoning ability boundaries using the $P a s s @ K$ metric: for each question, if at least one of $K$ sampled model outputs passes verification, $P a s s @ K = 1$ ; otherwise 0. To mitigate variance, we adopt the unbiased estimation method proposed in (Chen et al., 2021). For the small and challenging benchmark AIME 2024/2025 (30 examples per year), we scale $K$ to a large value of 256. For the larger and less challenging benchmarks AMC 2023 (83 examples) and MATH500 (500 examples), we set $K = 1 2 8$ and $K = 1 6$ , respectively, because LMs already achieve near-perfect results with small $K$ , and their size makes large $K$ computationally expensive.
# 5 RESULTS
As shown in Table 2, our method consistently outperforms the baselines across benchmarks and RL algorithms, achieving superior average performance even compared to strong existing approaches (Cui et al., 2025a; Liu et al., 2025; Chu et al., 2025). Moreover, this advantage extends to $P a s s @ K$ —a metric for estimating the reasoning capacity of LMs. As shown in Figure 5, our method continues to deliver improvements even at large $K$ values, where most baselines plateau.
On benchmarks such as AIME2024, AMC2023, and MATH500, we observe a similar phenomenon reported in (Yue et al., 2025a): although RL-trained models consistently outperform their base models in terms of average $P a s s @ 1$ performance, the base models can surpass RL-finetuned ones in $P a s s @ K$ when $K$ becomes sufficiently large. This indicates that conventional RL fine-tuning may inadvertently limit the exploratory capacity of the model. Our method effectively mitigates this issue. Notably, on AIME2025—the most challenging benchmark in our evaluation, released after the training data cutoff of the base models—our method not only outperforms the RL baselines but also exceeds the performance ceiling of the base model. This highlights the potential of our approach to break through the inherent limitations of base models and push the boundaries of LM reasoning.
Table 2: Pass@K and Average Performance. $\dagger$ : results from (Chu et al., 2025). “ $^ { \circ } +$ GRPO” and $ { ^ { 6 6 } } { + \mathrm { P P O } ^ { { 7 } } }$ indicate RL training from the base models, while “w/ Entropy Adv.” denotes applying our entropy-based advantage to the corresponding RL algorithms. $\Delta$ denotes the performance difference between without and with applying our method.
Figure 5: Pass ${ \ @ { \bf K } }$ Performance of the LMs with different RL algorithms.
# 6 ANALYSIS
We conduct a detailed analysis to understand the impact of our method on RL training for LMs. Specifically, we track key metrics—including reward, response length, entropy, and our entropybased advantage—throughout training (Figure 6 and 7), as well as reasoning dynamics during testing (Figure 8 and 9). Furthermore, we provide a comprehensive comparison between our method and traditional entropy regularization.
# 6.1 RL TRAINING PROCESS
Training Reward As shown on the left of Figure 8, we observe steady upward trends across all three methods. Notably, RL with Entropy-based Advantage yields slightly higher rewards in the later stages of training, indicating a stronger and more sustained improvement over time.
Response Length Figure 8 (middle) indicates that while the response length of RL baseline shows a steady increase before 1000 steps, it slightly declines thereafter. In contrast, augmenting the RL baseline with our entropy-based advantage sustains the upward trend in response length beyond 1000 steps, surpassing both the RL baseline and RL with Entropy Regularizer. This may reflect stronger reasoning capabilities, as the LM tends to spend more time (i.e., generate more tokens) to explore and reach the correct answer (Guo et al., 2025).
Entropy Both the RL baseline and our method exhibit a decreasing trend throughout training, reflecting increasing model confidence. However, neither shows signs of entropy collapse. This is likely due to the use of the “clip-higher” technique in the RL baseline, which prevents the gradients of low-probability tokens from being clipped. Specifically, at step 2000, the entropy of the RL baseline is 0.34, and that of our method is 0.17. As a reference, in an ablation where “clip-higher” is removed, entropy drops to 0.03—a level typically considered as entropy collapse (Yu et al., 2025; Cui et al., 2025b).
In contrast, although adding an entropy regularizer to the RL training objective increases entropy during training, it shows a sudden spike after step 1500, indicating unstable optimization. The corresponding testing performance comparison between our method and entropy regularization is shown in Table 3, highlighting our method’s superiority in promoting stable training while improving reasoning performance.
Figure 6: Reward, Response Length and Entropy during the RL training process. The base model is Qwen2.5-Base, and the RL baseline is GRPO. “RL w/ Entropy Reg.” adds an entropy regularizer to the training objective. “RL w/ Entropy Adv.” applies entropy-based advantage.
Table 3: Comparison of Pass ${ \ @ { \bf K } }$ and Average Performance between entropy regularization (Entropy Reg.) and our entropy-based advantage shaping (Entropy Adv.)
The co-occurrence of decreasing entropy and increasing response length suggests that the model becomes more confident in its exploratory reasoning, even when generating longer responses—an observation we will analyze further in the subsection on exploratory reasoning dynamics.
Entropy-Based Advantage Figure 7 shows the ratio of the entropy-based advantage to the original advantage. As training progresses and the model gains confidence, the entropy-based advantage decreases. This supports our hypothesis that the intrinsic tension between model confidence and entropy naturally encourages exploration in uncertain regions, while gradually reducing the entropy-based advantage once sufficient confidence is achieved—thereby preventing over-encouragement.
Figure 7: Ratio of Entropy-Based Advantage to the original advantage (i.e., $\frac { \psi ( \mathcal { H } _ { t } ) } { | A _ { t } | } )$ .
# 6.2 EXPLORATORY REASONING DYNAMICS
We further analyze the reasoning dynamics of the RL-trained models on the testing benchmarks to validate whether encouraging high-entropy actions during training can effectively enhance the model’s exploratory reasoning capabilities.
Pivotal Tokens and Reflective Actions As shown in Figure 8, applying our entropy-based advantage successfully reinforces the model’s ability to generate pivotal tokens and reflective actions. These regions exhibit much lower entropy, indicating increased model confidence when producing such actions. Consequently, we observe significantly higher counts of pivotal tokens and reflective actions compared to the baseline.
Response Length and Repetition Rate On the right side of Figure 8, we also observe a substantial increase in response length across testing benchmarks. Additionally, we record the n-gram-based
Pivotal Token Reflective Action Length and Repetition 1.2 0.4 0.25 1.0 # Pivotal Tokens (K) ,Enat5roapnydAdav6. cann50.o23t s0 tr=y 3o•L0 SLte’urestpv’tgpsrauyolteursasyesnsta:othRthleLhies # Response Tokens (K) EntrRoLpyw/Adv. Repetition Rate (Dashed) 0•S.7 Icf A + 2 RL 0.5T 0.1 (uniq RL 0.8 0.00 0.2 40 0.0 RL RL w/ n0d u10m00 h2e00s0q RL RL w/ n0 t1:000 2000 0 1000 2000 Entropy Adv. Policy Update Step 2Entrop2y Adv. Policy Update Step Policy Update Step
# Question and Ground Truth
Question: A list of positive integers has the following properties:
• The sum of the items in the list is 30
• The unique mode of the list is 9
• The median of the list is a positive integer that does not appear in the list itself.
Find the sum of the squares of all the items in the list. Ground Truth: 236
# Response from RL 725 Tokens
To find the sum of the squares of all items in the list that satisfies the given properties, let’s analyze the requirements step-by-step:
1. The sum of the integers in the list must be equal to 30. (...)
Let’s guess the list might be something like:
• Suppose the list has an odd number of elements (...) Let’s try another configuration:
• Let’s try a list with sum 30: [1, 9, 9, 9, 7]: Sum is 30 correct.
• Let’s check the median is not in the list and unique mode is 9 is unique correctly.
• The final valid list sum correctly would be: Let’s do correct summing and checking properties correctly: (...) One possible valid list correctly is: [1, 9, 9, 9, 7]: Sums correctly. Correct sum properties and correctly summed properties correctly would sum correctly.
Answer: 251
# Response from RL w/ Entropy Adv. 3023 Tokens
To solve this problem, we will systematically analyze the given conditions and construct the list of positive integers that meets all the criteria.
1. Sum of the items in the list is 30: (...)
# Step-by-Step Analysis:
Step 1: Understanding the Mode (...) Step 2: Understanding the Median (...) Step 3: Setting Up the List $( . . . )$
# Case Analysis:
Case 1: $n$ is odd: If $n$ is odd, (...) Case 2: $n$ is even If $n$ is even, (...)
# Constructing the List
Let’s start by trying small values for $n$ and see if we can satisfy all conditions.
Trying $n = 5$
If $n = 5$ , the median is $( . . . )$ Let’s try some values: $( . . . )$
Trying $n = 6$ (...)
Trying $n = 7$ (...)
Trying $n = 8$ (...)
After testing several more combinations, let’s try $n = 6$
again with a different strategy: (...)
This works. The list is: (...)
Now, we need to find the sum of the squares of all the
items in the list: (...)
Answer: 236
reipnetetigteirosnthrattmeeoetfsgalelntheeractrietderirae.sponses and find that our method yields much longer responses while m1a.inStuaminoifntgheaitrempsetiintithoenlirsatties 3c0o:m(..p.)arable to that of the RL baseline, demonstrating its ability to scale effSetecpti-vbeyl-ySteatp tAenstaltyismis:e without increasing redundancy.
CaStsep S2:tuUdndyerstaFnidgiunrge h2e pMredsieant(s. )example responses from the RL-trained models. Compared to the baSstelpi n3:e,SeottuirngmUepththoedLpisrto(d...u)ces more accurate and mathematically rigorous solutions. The model expliCcaitsleyAlnisatlsysipsr:oblem constraints, performs systematic case analysis (e.g., odd vs. even list lengths), anCdasdey1n:anmiiscoadldl:yIfadnjiussotsddi,ts( .a.)pproach when initial attempts fail. For instance, it iterates through candidCaste 2v:alnuisesev(en.gI.f, $n = 5 , 6 , \ldots { }$ while ensuring constraints are satisfied at each step. This structured and persistent reasoning process leads to valid final answers, whereas the baseline often overlooks key conditions and produces incorrect solutions.
# 7 RELATED WORK
Exploration in Reinforcement Learning Exploration has long been a central theme in RL (Li et al., 2025), addressed through theoretical frameworks (Cai et al., 2020; Agarwal et al., 2020; Ishfaq et al., 2021), as well as empirical heuristics (Burda et al., 2019; Pathak et al., 2017; Raileanu & Rockta¨schel, 2020; Henaff et al., 2022). Motivated by the use of entropy to guide exploration (Haarnoja et al., 2018; Schulman et al., 2017b; Ziebart et al., 2008), we investigate its role in LM reasoning by treating entropy as an advantage-shaping signal to reinforce exploratory reasoning behaviors. A concurrent work (Gao et al., 2025) also studies exploration-driven reasoning but adopts a different approach by designing custom metrics rather than using entropy. Other concurrent studies incorporate an entropy regularizer (He et al., 2025; Wang et al., 2025b) to the training objective, while our method focuses on the advantage function, providing an orthogonal perspective.
Training Signals in Reinforcement Fine-Tuning Reinforcement fine-tuning of language models can leverage supervised and/or unsupervised training signals (Shao et al., 2025). Supervised methods, such as RLHF (Ouyang et al., 2022) and RLVR, rely on reward signals derived from human feedback or verifiable correctness, and have proven effective in aligning model behavior and solving deterministic tasks. In contrast, unsupervised approaches reduce dependence on human annotations by leveraging consistency-based signals (Prasad et al., 2024; Zuo et al., 2025) or entropy minimization (Zhang et al., 2025; Agarwal et al., 2025). Our work focuses on unsupervised signals with a specific emphasis on exploration, employing entropy to shape the advantage and encourage exploratory reasoning. | Balancing exploration and exploitation is a central goal in reinforcement
learning (RL). Despite recent advances in enhancing language model (LM)
reasoning, most methods lean toward exploitation, and increasingly encounter
performance plateaus. In this work, we revisit entropy -- a signal of
exploration in RL -- and examine its relationship to exploratory reasoning in
LMs. Through empirical analysis, we uncover strong positive correlations
between high-entropy regions and three types of exploratory reasoning actions:
(1) pivotal tokens that determine or connect logical steps, (2) reflective
actions such as self-verification and correction, and (3) rare behaviors
under-explored by the base LMs. Motivated by this, we introduce a minimal
modification to standard RL with only one line of code: augmenting the
advantage function with an entropy-based term. Unlike traditional
maximum-entropy methods which encourage exploration by promoting uncertainty,
we encourage exploration by promoting longer and deeper reasoning chains.
Notably, our method achieves significant gains on the Pass@K metric -- an
upper-bound estimator of LM reasoning capabilities -- even when evaluated with
extremely large K values, pushing the boundaries of LM reasoning. | [
"cs.CL"
] |
# 1 INTRODUCTION
Directed fuzzing is an approach that aims to reach a target site of a program under test, e.g., a target line of code, by iteratively generating inputs named seeds. Due to its directed nature, it plays a vital role in various software testing and debugging tasks, e.g., patch testing [11, 22], crash reproduction [10, 14, 16], and vulnerability detection [26, 29, 31]. For direct fuzzers, the time it takes to reach a target site is a key performance metric. However, existing approaches, i.e., graybox fuzzers [3, 5, 10, 12, 21], and whitebox ones [14, 22], suffer from poor efficiency due to their slow or imprecise seed generation process.
The state-of-the-art directed fuzzing is directed graybox fuzzing, which achieves high seed-generating speed [3]. However, graybox approaches suffer from poor performance due to the low precision of the generated seeds since new seeds are generated by randomly mutating existing ones. Many useless seeds are generated in the random process, wasting execution time. For example, as shown in Figure 1, if we want to satisfy the condition 𝑖𝑛𝑝𝑢𝑡 $= 1 2 3 4 5 6 7 9 0$ in a graybox fuzzer, we need to mutate at most $2 ^ { 3 2 }$ times, any intermediate outcome is considered useless.
Another approach is directed whitebox fuzzing. Whitebox approaches are based on symbolic execution. They can leverage the internal structure of the program and generate precise solutions by solving constraints. However, existing whitebox approaches rely on interpretation-based symbolic engines, which incur high runtime overhead [4].
# Listing 1: Mutation Example
In this paper, we propose using compilation-based concolic execution to achieve directed fuzzing, i.e., Directed Concolic Execution (DCE). This approach addresses the low-efficiency issue of the graybox approach and the low-precision issue of the whitebox approach. Compared to graybox approaches, DCE can generate precise seeds by solving constraints in the path conditions. Compared to whitebox approaches, DCE reduces runtime overhead by moving the interpretation overhead to compilation time through instrumentation.
However, two challenges have to be addressed to achieve highperformance and practical directed concolic execution.
The lack of global information. An efficient DCE needs the assistance of global information, i.e., the internal structure of the program and the global runtime information, to restrict the search space and implement effective search strategy [12, 21]. However, concolic execution is concretely executed over an input, which only maintains the current execution status. As a result, a naive DCE cannot effectively direct the execution of the program, causing high search overhead or failed searches.
The intrinsic drawbacks of symbolic execution. While using symbolic execution can improve the precision of input generation, we still need to address the inherent challenges of symbolic execution. For instance, the loop statement introduces a state explosion problem due to its circular execution flow. Interprocedural analysis necessitates additional effort to achieve data-sensitive and control-flow-sensitive characteristics. Furthermore, the indirect call requires an adapted points-to-analysis specifically designed for the LLVM framework. These problems can cause DCEs to be stuck or lost in the middle of a program, and thus lead to failed path-finding.
We present ColorGo, a whitebox directed fuzzer that overcomes these challenges. First, to gather global information, we utilize the code structure information provided by the compiler as static information. The compiler obtains the internal structure of the program during the process of analyzing and translating the source code.
We use this static information to limit the search scope in terms of inter-procedural control-flow graph (iCFG) reachability, which we refer to as static coloration. This static process is completed during compilation, eliminating any runtime overhead. Next, to supplement the global information with the runtime information from concolic execution, we perform incremental coloration at runtime, focusing on the feasibility of path constraints. Our goal is to reduce the exploration space and avoid unnecessary searches. Once the coloration is completed, we employ early stopping and deviation basic block identification as part of our proposed efficient search strategy. Finally, to address the inherent limitations of symbolic execution, we specifically target them based on the characteristics of concolic execution, i.e., target line feedback, partial function model, and reverse edge stopping.
To evaluate the effectiveness of our design, we implement ColorGo on top of the LLVM framework. We compare it with state-ofthe-art directed fuzzers and evaluate it on three types of real-world programs. Our experiments show that ColorGo achieves $5 0 \times$ , $1 0 0 \times$ speedup for reaching target sites and reproducing vulnerabilities. Besides, we conduct an ablation study and shows the effectiveness of individual components in our design.
In summary, we make the following contributions in this paper:
(1) We propose a directed whitebox fuzzer that combines lightweight program analysis and compilation-based concolic execution to efficiently generate input for reaching specific code regions.
(2) We implement a practical system called ColorGo on top of the LLVM framework, which addresses the inherent limitations by combining the characteristics of concolic execution, achieving both high precision and scalability.
(3) We conduct experiments on real-world programs (jasper, lame, binutils), demonstrating significant performance improvements compared to the state-of-the-art directed graybox fuzzers.
In the rest of the paper, we first elaborate on our idea of compilationbased Directed Concolic Execution (Section 3). We then present ColorGo in detail (Section 4) and compare its performance with state-of-the-art implementation (Section 5), showing that it is orders of magnitude faster than the benchmark in testing real-world software.
# 2 BACKGROUND AND MOTIVATION
In this section, we introduce two commonly used techniques for directed fuzzing, i.e., directed graybox fuzzing and directed symbolic execution, and discuss their limitations. We then introduce our motivations to use concolic execution to overcome these limitations. Finally, we discuss the new challenges of achieving directed concolic execution.
# 2.1 Backgroud
Directed Graybox Fuzzing. Directed grabox fuzzing is the most widely adopted approach in the literature of directed fuzzing. The fuzzing process of graybox approaches can be divided into two phases, i.e., exploration and exploitation. During the exploration phase, a graybox fuzzer covers as many program paths as possible by iteratively mutating seeds that trigger new paths. After a user-specific time period, the fuzzer enters the exploitation phase to focus on specific code areas. Specifically, graybox fuzzers use lightweight instrumentation methods to calculate the quality of seeds, e.g., distance [3, 10] and similarity [5, 20]. Intuitively, if a seed executes on a path that is closer to the target site, then seeds generated from it are also more likely to be close to the target. Therefore, existing graybox fuzzers give high-quality seeds higher priorities for mutation, and generate more inputs from them.
However, directed graybox fuzzers suffer from poor performance because of the imprecise seed generation. Seeds with high priorities are selected for random mutation since they will possibly generate new seeds that can satisfy the desired path conditions. Unfortunately, a lot of seeds that do not help promote directed fuzzing are generated due to the inaccurate priority and the randomness of seed mutation. These seeds lead to irrelevant execution, which is time-consuming and reduces the fuzzing performance.
Directed Symbolic Execution. Different from the randomized directed graybox fuzzing, directed symbolic execution (DSE) precisely generates inputs. It casts directed fuzzing to a step-by-step constraint-solving process. By thoroughly analyzing the program and extracting structural information, DSE can determine which constraint to solve and generate input that is closer to the target by solving this constraint.
However, existing approaches [14, 22] rely on interpretationbased symbolic execution engines, e.g., KLEE [4], which suffer from high state management overhead. Specifically, these symbolic execution engines are virtual machines over LLVM bitcode. They iteratively fetch each instruction, execute the instruction symbolically, and update the symbol states in the memory model. During the process, they fork the execution trace on each branch condition, generating huge execution states. The heavy virtual machine and state management mechanism incur heavy runtime overhead on both computation and memory. Meanwhile, The forking system needs to manage massive information for each execution state, which causes the problem of state explosion.
# 2.2 Motivation
# 2.2.1 Directed Concolic Execution.
To address the aforementioned problems, and take both precision and efficiency into account, our method adopts compilation-based concolic execution. Concolic Execution is similar to dynamic symbolic execution, as it executes concrete execution over one input and conducts analysis only over the explored path. To introduce symbolic characteristics, concolic execution treats variables related to input as symbolic variables and maintains corresponding symbolic expressions. To achieve efficient symbolic execution, concolic execution does not employ a global manager that records a vast space of states. Instead, concolic execution implicitly maintains concrete states through the native CPU. After execution, all statuses of an execution trace are recorded as the new input. This stateless implementation reduces computational complexity and memory space usage, resulting in unprecedented efficiency close to native execution and fundamentally eliminating the state explosion problem. However, the stateless nature also introduces the problem of lack of global information, which is used to guide the fuzzing process.
There are primarily two methods to instrument the program in concolic execution, either at compilation time, e.g., through an LLVM pass, like SYMCC, or at execution time using dynamic binary translators (DBT), like SymQEMU and QSYM. DBT does runtime code manipulation when a program executes, works like an observer between application and operating system, and performs JIT translation, which incurs a non-negligible overhead at runtime. For instance, as SymFusion[8] mentioned, SymQEMU could be $6 . 5 \times$ slower than SymCC on a simple code snippet. Our work is based on compilation-based concolic execution for its runtime performance advantage. We instrument the program under test at the level of the compiler’s intermediate representation, which allows us to bypass the complex semantics of the source code. Consequently, our work is compatible with all source languages that can be compiled to the intermediate representation. As Katch proposed, symbolically interpreting the program is several orders of magnitude slower than native execution, while the instrumented programs have a comparable execution time to their native counterparts. To achieve high precision, compilation-based concolic execution only needs one concrete execution, at the cost of one irrelevant execution, but reduces massive irrelevant explorations caused by imprecise inputs. Besides, the code analysis results provided by the compiler offer us a convenient way to access global information.
Directed Concolic Execution, a directed fuzzing tool built upon compilation-based concolic execution, which addresses both the issues of low precision and low scalability, is a key component of our approach. However, the design is not as straightforward as the concept. Complex program semantics present challenges in tracking symbolic expression, and there are some inherent limitations of symbolic execution, such as handling indirect calls, interprocedural analysis, and loop unrolling.
# 2.2.2 Problem statement.
In this section, we provide some examples of limitations of concolic execution to expand the challenge we mention in Section 1. Before this, We need to illustrate the definition of relevant code. Given some targets, we identify the relevant code according to the reachability on the inter-procedure control flow graph and the feasibility of the path constraint. The search scope is all relevant code.
Indirect Call. A call graph is utilized to determine the reachability of each basic block and target site, and it is combined with the control flow graph to construct an inter-procedure control flow. Constructing an Interprocedural Control Flow Graph (iCFG) poses a significant challenge, mainly because it’s challenging to infer the targets of indirect control transfer instructions, particularly indirect calls. Indirect calls are calls through register or memory operands. While modern static analysis tools, such as the SVF [28], can support indirect call target inference when constructing control flow graphs using points-to analysis, the LLVM compiler cannot natively support this feature.
Take Figure 1 as an example, as Figure 1a shows, the 𝑎𝑑𝑑 function is called indirectly first through a function pointer 𝑓 𝑢𝑛𝑐𝑃𝑡𝑟 , and the corresponding IR shows the pointer variable 𝑓 𝑢𝑛𝑐𝑃𝑡𝑟 is stored in memory. When we process the instruction call in Line 8 in Figure 1b, we can not directly resolve the indirect instruction operand to get the actual target 𝑎𝑑𝑑. In contrast, for the instruction call in Line 10
int main(){ int (\*funcPtr)(int,int); funcPtr $\mathbf { \tau } = \mathbf { \tau }$ &add; int result $\mathbf { \tau } = \mathbf { \tau }$ funcPtr(3,4); add(3,4) ; return 0;
1 (a) C source Code.
define dso_local i32 @main() #0 { ${ \mathfrak { g } } _ { 1 } \ =$ alloca i32,align 4 $8 2 =$ alloca i32(i32,i32)\*,align 8 $8 3 =$ alloca i32,align 4 store i320, $\dot { \textbf { 1 } } 3 2 * \textbf { 8 1 }$ ,align 4 store i32 (i32,i32)\* @add,i32 (i32,i32)\*\* %2,align 8 $8 4 =$ load i32 (i32,i32)\*,i32 (i32,i32)\*\* %2,align 8 $8 5 ~ =$ call i32 %4(i32 3,i32 4) store i32 %5,i32\* %3,align 4 $8 6 ~ =$ call i32 @add(i32 3,i32 4) ret i32 0 (b) LLVM IR.
l int f(int $x ) \nmid$ l int f(int $x ) \nmid$ $x = = 2$ $2 \star x$ ; if( $x = = 2$ scanf(%d,&a); scanf(%d,&a); $f ( \textsf { f } ( \textsf { a } ) + \textsf { 1 } = \textsf { 5 } ) \{$ $i + ( + ( \textrm { a } ) + \textrm { 1 } = = \textrm { 5 } ) \{$ } } (a) Example of data-sensitive analy- (b) Example of control-sensitive sis. analysis.
in Figure 1b which conducts a direct all, we can determine that the target function is 𝑎𝑑𝑑 directly without additional analysis.
That is, although we construct an iCFG involving indirect call through additional points-to-analysis, and find a path from function 𝑚𝑎𝑖𝑛 to 𝑎𝑑𝑑 (function 𝑚𝑎𝑖𝑛 indirectly calls function 𝑎𝑑𝑑) in the iCFG, we can not color irrelevant basic block correctly at compilation time, because we can not location the call instruction in 𝑚𝑎𝑖𝑛 whose operand is the function 𝑎𝑑𝑑.
Interprocedural Analysis. Symbolic execution can be categorized into two types of analysis: intraprocedural analysis and interprocedural analysis. Intraprocedural analysis only considers statements within a procedure, whereas interprocedural analysis requires the inclusion of procedure calls to conduct whole-program analysis. Interprocedural analysis can be implemented in two ways, data-flow sensitive inter-procedural analysis and control-flow sensitive inter-procedural analysis.
Let’s consider Figure 2a as an example. In order to reach target line 11, we need to solve the constraint in Line 10, which issues a sub-procedure call. However, the intraprocedural analyzer is unable to analyze the operations performed by the function $f ( )$ . As a result, it replaces the symbolic value of $f ( a )$ with its concrete value. Therefore, the path constraint that we formulate becomes 𝑐𝑜𝑛𝑠𝑡𝑎𝑛𝑡 $^ +$ $1 = = 5$ . Without any symbolic variables, no test input is generated for this equation during execution. As a result, it fails to reach the branch in Line 10, and so does the directed fuzzing. When we conduct data-flow sensitive inter-procedural analysis, the symbolic variable can pass through the statements. Because of the branch statement in Line 2, the return value depends on the concrete value of $x$ . For example, if the condition $x ! = 2$ is true, the return value is a concrete value of 1, otherwise, the return value is a symbolic value of $2 x$ ; Unfortunately, we only need to solve the constraint $f ( a ) + 1 ! = 5$ when it is not satisfied, that is, the return value of $f$ is not equal to 4, thus the value of $\mathbf { x }$ is not equal to 2, then we run to Line 3, return a concrete value of 1. We do not have an opportunity to solve the constraint $2 * a + 1 = = 5$ and generate the wanted input $a = 2$ . However, if Line 2 did not have a branch and $f$ directly returned $2 * x$ , the situation would be simpler and solvable. Consequently, data-flow sensitive inter-procedural analysis would handle this scenario well. In this case, we can observe how control flow affects inter-procedural analysis. Let’s see a more strict case shown in Figure 2b, where both sides of the branch return a concrete value. Although the branch condition in Line 2 clearly indicates that $x = 2$ , data-flow sensitive analysis cannot determine this. In such cases, control-flow sensitive analysis is necessary. By combining controlflow sensitive analysis with the branch condition, we can encode it into the return value, i.e., $( x = = 2 )$ & 4 $\oplus$ ( $( x ! = 2 )$ & 1 .
Compared to symbolic execution, concolic execution’s lack of global information presents a significant challenge, making interprocedural analysis considerably more difficult. This issue is particularly critical in the context of directed fuzzing, which contrasts with coverage-guided fuzzing. Directed fuzzing requires a higher level of precision in path exploration, and any missteps can lead to lost or stuck during exploration. The challenge is to balance precision with concolic execution’s limitations.
# Listing 2: A loop example.
Loop Unrolling. Loop and recursion are frequently used in programs, but traditional symbolic execution struggles to handle them effectively. The state explosion problem arises when symbolic execution forks a new path at every branch point (including the loop branch). This means that without a limiting measure, the number of paths grows infinitely at an exponential rate. Concolic execution’s stateless nature avoids state explosion, but the problem of loops persists.
Figure 3: Control-flow graph of the loop example.
Consider the example shown in Figure 3, where line 7 is the fuzzing target. A naive way to mark the relevant path would be to iteratively mark the basic blocks preceding the target basic block. If the target basic block is the red one, then all basic blocks except the end node would be marked as relevant in the search space. As a result, when we execute the switch statement, we lose direction to the final target node, because all successors of the switch statement are marked as relevant in the circular execution flow. Therefore, we need to design a special method for handling loop and recursion, as the naive method performs well for other statements except loop statement.
# 3 COLORGO
In this section, we present the design of ColorGo, a directed whitebox fuzzer employing concolic execution and exploration space coloration for efficient crash reproduction. It addresses the low scalability of directed symbolic execution and the low precision of directed graybox fuzzing by introducing concolic execution into the directed fuzzing domain and overcoming the lack of global information and the inherent limitation of symbolic execution. As a result, it achieves high scalability.
# 3.1 Overview
The overall architecture of ColorGo is depicted in Figure 4. In brief, we spread works in both compilation time and run time. At compilation time, we process source code with files that contain target lines and stack trace information, generating colored iCFG and instrumented programs. At runtime, we iteratively execute the generated instrumented program on the input set, at the same time generating new inputs added to the input set, until reach the target site. We now describe some important components of our approach. The detailed implementation will be described in the next section.
Figure 4: Architecture of ColorGo.
# 3.2 Incremental Coloration
ColorGo is a tool that performs incremental coloration to restrict search scope and improve search performance. It uses global information to achieve this. The aim is to identify relevant code that needs to be explored, to avoid wasting time on paths that won’t help to reach the target sites.
To accomplish this, we convert directed fuzzing into a one-source, multi-targets graph search problem on iCFG. This involves marking the search scope using a process called coloration. The marking process is divided into two phases: static coloration and dynamic coloration.
Static coloration is performed at compilation time. Given target lines(e.g., main.c:5) and a function call chain extracted from the stack trace, we do static coloration in an LLVM pass, where we can exploit the knowledge available in the compiler as part of global information. For each function, we color the target basic blocks which include the target line’s corresponding instructions using debug information, or call instruction whose operand is the target function; Then, we iteratively color the predecessor basic blocks of target basic blocks. Finally, we get a connected subgraph of origin iCFG, which is called colored iCFG. The step is straightforward except we should specifically handle the indirect call and loop statement mentioned in Section 2.2.2. We integrate a conservative points-to-analysis tool [1] to translate indirect function call into corresponding target line and feedback to the beginning of our framework. Besides, we stop the iterative coloration when detecting the reverse edge, i.e., edges from the back basic block to the front basic block.
Dynamic coloration is performed at runtime and uses path constraint feasibility information as a supplement to global information.
While graybox fuzzer [12] designs carefully to balance the precision and efficiency when pruning infeasible paths. The concolic execution natively supports runtime infeasible path pruning. At each key branch point, the instrumented program extracts the path constraint that points to the colored side and sends it to the backend solver to derive a solution. If the path constraint is infeasible, its subtree in colored iCFG is sliced away and won’t be explored again, which is called incremental coloration.
The coloration information will be later used in directed concolic execution to restrict search space and achieve a high-performance search strategy, we discuss this in the next section.
# 3.3 Compilation-based Concolic Execution
The key component of ColorGo is the directed concolic executor, which is implemented as an instrumented program.
In this work, we choose compilation-based concolic execution for these reasons: Firstly, compared with the runtime instrumentation, the injected code can run seamlessly with the application code, eliminating the need for switching between the target and an interpreter or an attached observer, achieving low run-time overhead. Secondly, we require the assistance of code analysis information and high-level knowledge generated by the compiler to generate global information to guide the coloration. Thirdly, compared to source-to-source translation, the compiler intermediate representation (IR) level instrumentation simplifies the integration of concolic execution capability as we only need to handle a limited instruction set.
We instrument the direct logic into the program when doing compilation. The most important question is how our directed logic works. For the search strategy, we borrow the concept deviation basic block from Windranger [10], where the execution trace starts to deviate from the target sites. For example, in the middle Colored iCFG in Figure 4, node 1 is not a deviation basic block, for its both siblings are colored nodes. But node 2 is a deviation basic block for one of its siblings is not a colored node. We only generate constraint solving when current execution trace starts to deviate from target sites. Take the right Colored iCFG in Figure 4 as an example, to reach the target site we conduct two times of execution, which is marked on the arrows as number 1, 2. In every deviation basic block we generate a new input on which the program will reach the colored basic block. After one time of "corrections", the program executes on 𝑛𝑒𝑤𝑖𝑛𝑝𝑢𝑡 and will finally reach target red node. We call this search strategy Fast Depth First Search (FDFS), to reach the target node with minimal time for input generation and program execution. Suppose we do not use this method, a naive DFS generate input towards every colored side, which results in generating an unnecessary solution at the first branch point. This time wasting is more severe in real-world programs. For search scope restriction, we terminate the execution when runs to the end node of iCFG, i.e., both siblings are uncolored, or at branch points with an unfeasible path condition.
# 3.4 Interprocedural Analysis
Unlike coverage-guided fuzzing which negates the branch conditions along the way, directed fuzzing needs to execute a more complex logic to selectively explore path. As discussed in Section 2.2.2, we need to implement both data-flow sensitive and control-flow sensitive methods. The data-flow sensitive analysis follows SYMCC. When do function calls, we propagate the symbolic function parameter throughout the subprocess instructions to construct the corresponding symbolic expression in the called function. Finally, we register the return expression for later use. However, the controlflow sensitive feature is harder to realize in the concolic executor. At each branch point, the concolic executor only sends the current branch condition to the symbolic backend, and the complete path constraint is collected during runtime. Therefore, if we want to encode path constraints to return value during compilation time, we need to conduct complicated static analysis. This is similar to the function summary technique in symbolic execution. The function summary can be defined as a disjunction of formulas, which is present in the form that combines a conjunction of constraints on the inputs and a conjunction of constraints on the outputs. For example, the summary for the function $f$ in Figure 2b could be $\mathit { \check { x } } = 2 \land r e t = 4$ ) $\vee$ $\mathit { \check { x } } \neq 2 \land r e t = 1$ ). SMART [? ]uses static analysis to generate function summaries to assist the inter-procedural analysis of symbolic execution, which effectively alleviates the path explosion problem caused by too many function calls. However, This heavy global static analysis and status management obey the design concept of concolic execution, and result in low scalability.
To balance the scalability and effectiveness, we propose a partial function model. While modeling all user-defined functions in the program introduces great complexity to our design, we only model functions in C standard library, which is considered simple but important. To normally reason most of the program, we build a summary for some important functions in C standard library (e.g., strlen and strchr). We will discuss this in detail in Section 4.
# 4 IMPLEMENTATION
We discuss the detailed implementation of ColorGo in this section. We built ColorGo using LLVM pass, written from SYMCC. The pass consists of roughly 500 edited lines of $C + +$ code. In this pass, we process LLVM intermediate representations one by one at both the module and function levels. To build the instrumented program, the instrumentation performs only once along with the compilation. To conduct directed concolic execution, the instrumented code will be executed over and over again until reach target site or all colored path are explored. The instrumented code reserves the behavior of the original program but invokes constraint solver when deviating from target execution path to generate new input that corrects the execution. We propagate the symbolic expression along the instructions by adding around each LLVM IR with calls to symbolic handling implemented in the run-time support library as SYMCC does. Our directed characteristic is mainly reflected on the time we initiate constraint solving. We build a concolic executor as a scheduler that picks the next input to execute on, written in a shell script, and comprises another 200 lines. The low code volume shows the flexibility of our system, thus it’s easy to deploy a new directed logic based on our implementation with little work.
# 4.1 Instrument at compilation time
To derive instrumented binary, we conduct instrumentation at compilation time. The input of our approach source code files, target lines, and functions in the stack trace. Note that the target lines in a feedback version by points-to-analysis [1]. In general, We will compile the source code of the program under test to an instrumented binary.
First, we read the input and register the target lines and functions for each function in a map. Second, for each function, we process each instruction in order, if we identify that the instruction maps to target line according to debug info or the instruction is a call/invoke instruction and its operands is target function, the basic block to which the instruction belongs would be marked as target basic block. Then we do a back propagation in control-flow graph which iteratively mark the predecessors of target basic block. Finally, we will mark the root node in CFG. After the coloration of all functions, we get a connected subgraph of origin iCFG which is called Colored iCFG. We specially handle loop statements to avoid coloring all basic blocks in the loop body, to be specific, we stop the propagation when detecting inverse edge (from back basic block to front basic block), which can effectively solve the loop pollution problem we introduce in Section 2.2.2.
Strictly speaking, we will process all instructions twice at compilation time, at the first time we get colored iCFG and the second time we insert calls to symbolic backend to generate new input according to the colored iCFG. The execution trace diverges at every branch point, thus, we should insert check logic around every conditional branch/switch statement. If a function has an empty target basic block set, we will skip the check for there is no distinction among all basic blocks, which is a common case in subprocess.
For conditional branch statements, the check logic acts belike: If both sides point to a non-colored basic block, execution terminates early. If both sides point to the colored basic block, skip check. If one of the sides points to a non-colored basic block and the another one points to a colored basic block, we send the symbolic expression of branch constraint, concrete value of branch constraint, and a boolean value which represent the wanted value of branch constraint to the symbolic backend, and symbolic backend will check the equality of concrete value and wanted value, if equal, add this branch constraint according to the concrete value to path constraint, if not, initiate a constraint solving to satisfy the equation of path condition and the wanted value.
For switch statement, the check logic acts like the conditional branch statement in general, but there is a little difference: when constructing the constraint, we can not directly extract the case constraint from the operand, especially for the default case, which needs to be handled manually. If at least one of the sides points to a non-colored basic block, we will produce solutions for each (not one of) case which points to a colored basic block.
# 4.2 The Placement of the instrumentation in the Compiler Pipeline
The placement of the instrumentation pass is nontrivial. We place the pass in early the pipeline to achieve maximum structure similarity to the source code, in order to best map to the input target information. For instance, the compiler may merge some functions when doing optimization, after which we can not extract the origin function information and it is hard to map the optimized version to the original execution flow. So directed fuzzer is lost here if we instrument the optimized version.
# 4.3 Explore at runtime
After the compilation, we derived a symbolized binary that drives the program to the target site. At runtime, we just repeatedly execute the binary and collect generated input. The input helps to correct current execution traces to a specified path leading to target sites. Recall the Section 3.2, we cast directed fuzzing into one source, multi-targets graph search problem on iCFG. At compilation time, we define the search scope by coloration statically and embed deviation-correction and early-termination logic into the program at compilation time. Now the final question comes: How do we schedule the newly generated input to realize our Fast Depth First Search (FDFS) strategy?
The answer is straightforward, we maintain a stack as the input pool. Every newly generated input is pushed to the top of the stack and tends to be executed immediately. The FDFS acts like a persistent detective who wants to explore deep into the program as fast as possible. It terminates as soon as know there is no way to target sites, i.e., at branch point whose siblings are all uncolored, or when current path constraint is infeasible. It only corrects the execution when necessary (in deviation basic block) because correction means re-executing the program. Besides, we avoid generating repeated input by a map, which is practical when at branch point more than one side are colored and generating inputs run to each other over and over again when executing.
Table 1: Real-world benchmark programs used in the evaluation.
# 4.4 Compilation Boundary and Function Model
The compilation-based approach also has its intrinsic drawback, that is, not all programs can be recompiled. For example, some system libraries and third-party libraries are not typically recompiled by application developers. The symbolic execution would degrade to concrete execution when running uninstrumented code. Latest work [8] proposes a hybrid instrumentation approach for concolic execution which instruments internal code at compilation time and external code at runtime. But SYMFUSION incurs an average of 3X slowdown. In fact, when doing fuzzing, we only focus on program under test, and the most of symbolic state loss because of uninstrumented code can be relieved by function model, which is also preserved by SYMFUSION. We model important functions in C standard library and additionally conduct our data-flow and control-flow sensitive analysis in function models to produce function summary (i.e., encode the path condition into the return expression). For example, to model function 𝑐𝑜𝑛𝑠𝑡𝑐ℎ𝑎𝑟 $\ast s t r c h r ( c o n s t c h a r \ast s t r , i n t c )$ , which find char $c$ in string 𝑠𝑡𝑟 and return the position of 𝑠𝑡𝑟 where $c$ first happens, we set the return expression as ( $( s [ 0 ] = = c ^ { \prime }$ ) & $s [ 0 ] ) \oplus ( ( s [ i ] = = c )$ & $s [ i ] { \big \rangle }$ ).
# 5 EVALUATION
In this section, we evaluate ColorGo using real-world programs and answer the following questions:
RQ1: How fast can ColorGo reach target sites?
RQ2: How fast can ColorGo expose vulnerabilities?
RQ3: How does every component in ColorGo contribute to the overall performance?
RQ4: What’s the runtime overhead introduced by ColorGo’s instrumentation?
# 5.1 Evaluation Setup
BaseLine. We compare ColorGo with a state-of-the-art directed graybox fuzzer AFLGo. Which is publicly available by the time of writing this paper.
Evaluation Criteria. We use two criteria to evaluate the performance of different fuzzing techniques.
Time-to-Reach (TTR) is used to measure the time fuzzer used to generate the first input on which the program can reach the target site.
• Time-to-Expose (TTE) is used to measure the time fuzzer used to generate the first input on which the program can reproduce known venerability.
Evaluation Datasets. We use real-world programs from the following datasets as the evaluation benchmarks:
• UniBench [17] is a recent dataset proposed for evaluating fuzzing techniques. It consists of 20 real-world programs from 6 different categories, categorized based on the input file type. From this dataset, we selected some programs to measure the Time-toReach (TTR) of baseline and our work. These benchmarks are used to address RQ1.
Table 2: Time-to-Reach results on programs from UniBench.
Table 3: Time-to-Expose results on AFLGo test suite.
AFLGo Test Suite [2] is a collection of programs with n-day vulnerabilities, which was used in the experiments of AFLGo [3]. This test suite has been utilized in multiple research studies [5, 10] to evaluate DGF techniques. These benchmarks are used to address RQ2-4.
Experiment Settings. We conducted our evaluations on the machine equipped with Intel Xeon Gold 5218R CPU with 20 cores, using Ubuntu 20.04.6 LTS as the operating system. During experiments, each fuzzer instance runs in a docker container [9] and binds to one CPU core.
The baseline DGF was repeated 10 times with a time budget of 24 hours. Our method does not contains randomness, and therefore statistical evaluation is unnecessary. We only evaluate our method once. The detail of the real-world benchmark programs we used is shown in Table 1.
# 5.2 Performance on Reaching Target Site
We conduct the evaluation on two kinds of open-source realworld programs from UniBench. Table 2 shows the results.
• Jasper is a collection of software (i.e., a library and application programs) for the coding and manipulation of images. This software can handle image data in a variety of formats. One such format supported by Jasper is the JPEG-2000 format defined in ISO/IEC 15444-1. Our target covers three formats of image, including jpc, bmp, and jp2.
• LAME is an MP3 encoding tool. The goal of the LAME project is to use the open-source model to improve the psycho acoustics, noise shaping, and speed of MP3.
The metric we use to measure the performance is the time cost to reach the selected target sites. Additionally, we provide the mean solve time which represents the solver used to produce a solution. On all target sites, ColorGo outperforms benchmark fuzzer and achieves the shortest $\mu \mathrm { T T R } .$ . Overall, even discard data that AFLGo timeout $\left( > 2 4 \mathrm { h } \right)$ . In terms of mean TTR, ColorGo outperforms DGF (AFLGo) by ${ \bf 5 0 \times }$ . The result shows that our concolic execution, which takes both precision and efficiency into account, has better capability to reach the target site than DGF.
We use the same initial inputs in UniBench for all experiments, the number 16 of runs in this table shows the demand of sorting the inputs in the initial queue, while our work tries them in order.
We add a new metric called Total Execution Time (TET) which separately counts the time used on program execution, the subtraction of TTT and TET represents time for concolic executor to build up, including some file I/O operations and the maintenance of input queue. We can see more than half of the time spent on the starting process in our experiment, but when the total time is longer, the proportion of starting time is smaller.
# 5.3 Performance on Exposing specific vulnerability
In this section, we evaluate the vulnerability reproduction performance by the time used to trigger specific crushes. Vulnerability reproduction is more in line with the actual application scenario, and the vulnerability report always includes stack trace of the error, which records the function call stack. The function call chain further restricted the search scope at the level of Call Graph. We search the information of CVEs we reproduce on the official website and record the function call stack for our later use. Table 3 shows the results.
The program we selected to evaluate our work is:
Binutil. The GNU Binutils are a collection of binary tools. We evaluate $^ { \mathrm { c + + } }$ filt, a filter to demangle encoded $C + +$ symbols.
We add a new metric called Early Termination Executions (ETE) to illustrate the effectiveness of our coloration. The percentage of early termination runs in all runs is $1 0 0 \%$ , which proves our coloration does help to avoid wasting time on irrelevant code exploration.
Except for CVE-2016-4488, which is considered very easy to expose $( \mu \mathrm { T T E } < 1 s )$ , ColorGo significantly outperforms other tools by $\mathbf { 1 0 0 \times }$ faster to expose them. The result shows that ColorGo performs best when the path is highly specified. In such cases, we can maximize the effectiveness of precise seed generation.
Table 4: Mean Execution Time on AFLGo test suite. ET $\mathbf { \Sigma } = \mathbf { \Sigma }$ early termination
# 5.4 Impact of Different Components
To investigate the impact of different components in ColorGo, we disable each component individually and conduct experiments on the same targets selected from AFLGo Test Suite as in Section 5.2.
5.4.1 Impact of Search Scope Restriction. We conducted an experiment to study the impact of search scope restriction. To do this, we disabled the early termination mechanism, which allowed the program to execute outside the colored space. The out-of-colored execution is useless and time-wasting and may produce new input which results in irrelevant execution. The disabled one follows almost the same execution path as the original one. We compare the average execution time to show the speedup for each execution of our method. Table 4 shows the results. We can observe that disabling the early termination causes an increase $( > 1 0 \% )$ on TTE, which means the search scope restriction has a significant impact on the performance of ColorGo. The increased degree of TTE depends on the depth of our target. If the target is located near the beginning of the program, the effect of early termination will be more significant. For CVE-2016-4490, by conducting coloration and path pruning, we reduced the average execution time by $5 0 \%$ . Besides, the CVE-2016-4488 failed to be reproduced because of the huge change in execution path caused by disabling part of the coloration, which also demonstrates our coloration can help the path-finding of directed fuzzer.
5.4.2 Impact of Search Strategy. To validate the effectiveness of our search strategy, we compare our FDFS with the implementation of naive DFS which disables the concept of deviation basic block, which issues constraint solving towards every colored side along the execution path. The result is presented in Table 5, it is clear that the number of useless solutions has increased a lot $( 1 0 0 \times )$ as we collect all potential input along the way in coloration scope. The increase of useless solutions and inputs results in the growth of the number of executions, which significantly degrade the performance. The degradation of performance proves our FDFS outperforms naive DFS a lot.
# 5.5 Instrumentation Overhead
Concolic execution obtains symbolic capability by instrumentation, which causes additional runtime overhead. To evaluate the runtime overhead, we run the same input against three versions of the benchmark program in Section 5.2. One is the vanilla version without any instrumentation, and the other is instrumented by ColorGo, where we add symbolic expression propagation and constraint solving but disable the early termination to ensure that we run the same paths on two versions. Additionally, we also present the default data we obtain in Section 5.2 by experimenting with the native implementation of ColorGo as a reference. The results are shown in Table 4, we compare the default mean execution time, mean execution time disabled early termination, and mean execution time of pure execution. We have observed that ColorGo can cause up to a $67 \%$ runtime overhead which reduces to $6 2 \%$ on average. If early termination is used, this percentage could be decreased to $5 0 \%$ . The cost of instrumentation is negligible compared to the cost of interpretation.
# 6 RELATED WORK
There are two threads of work in the literature related to ColorGo, i.e., directed graybox fuzzing and hybrid fuzzing. In this section, we introduce these works accordingly.
# 6.1 Directed Graybox Fuzzing
Nowadays, many new works emerge to optimize directed graybox fuzzing, including fitness metrics [5, 29, 32, 35] to conduct seed prioritization, make fuzzing optimization including input optimization [12, 13, 33, 36], power scheduling [5, 19, 20, 35], mutator scheduling [5, 18, 27, 33], and mutation operations [13, 29, 32, 33], to make directed fuzzing more directed.
Regarding search scope restriction, we prove that there is an intersection between whitebox directed fuzzing and graybox directed fuzzing, and there may be other opportunities to learn from each other for eventual performance gains. To enhance our search strategy design and make more informed decisions about which constraints to solve and which inputs to execute, we can assign a weight to each edge in the iCFG. This takes into account various influencing factors. This weighting system is akin to the fitness metrics used in Directed Greybox Fuzzing (DGF). DGF employs a fitness metric to gauge how closely the current fuzzing aligns with the fitness goal. While early iterations of DGF only considered distance on the iCFG, numerous variants have been developed to refine the fitness metric. For instance, TOFU [30] defines its distance metric as the number of correct branching decisions required to reach the target. RDFuzz [32], on the other hand, combines distance with the execution frequency of basic blocks to prioritize seeds. AFLChurn [35] assigns a numerical weight to a basic block based on its recent changes or frequency of alterations. WindRanger [10], meanwhile, factors in deviation basic blocks.
The similarity metric is proposed by Hawkeye [5], which is the degree of overlap between the current status and target status from a certain aspect, including execution trace similarity [5], statement sequence similarity [19, 20], and so on. Similarity metric is also used for specific bug detection such as use-after-free bug and other memory-related bugs [6, 23].
Deep learning has also played a role in vulnerable probability prediction at the level of function and basic block [15, 18, 34]. The probability-based metric allows for the combination of seed prioritization and target identification. This enables directing fuzzing towards potentially vulnerable locations, without being dependent on the source code.
Table 5: Performance of different search strategies on AFLGo test suite.
These adaptations demonstrate the evolution and sophistication of the fitness metric, enabling more nuanced and effective fuzzing strategies.
# 6.2 Hybrid Fuzzing
Hybrid fuzzing [6, 16, 19, 24, 25] combines symbolic execution and graybox fuzzing, to utilize the advantages of each technique. In this scenario, symbolic execution acts as an assistant by solving condition of branch that is hard to cover by graybox fuzzing. Hybrid fuzzing also aims to combine the precision of DSE and the scalability of DGF to mitigate their individual weaknesses by selectively using symbolic execution, under the observation that DGF tends to explore branches with simpler path constraints while DSE is geared towards solving complicated path constraints. However, it does not solve the problems of both techniques fundamentally as our work has done. Two components intertwining in the system also introduce complexity and make the system more cumbersome and inefficient. | Directed fuzzing is a critical technique in cybersecurity, targeting specific
sections of a program. This approach is essential in various security-related
domains such as crash reproduction, patch testing, and vulnerability detection.
Despite its importance, current directed fuzzing methods exhibit a trade-off
between efficiency and effectiveness. For instance, directed grey-box fuzzing,
while efficient in generating fuzzing inputs, lacks sufficient precision. The
low precision causes time wasted on executing code that cannot help reach the
target site. Conversely, interpreter- or observer-based directed symbolic
execution can produce high-quality inputs while incurring non-negligible
runtime overhead. These limitations undermine the feasibility of directed
fuzzers in real-world scenarios. To kill the birds of efficiency and
effectiveness with one stone, in this paper, we involve compilation-based
concolic execution into directed fuzzing and present ColorGo, achieving high
scalability while preserving the high precision from symbolic execution.
ColorGo is a new directed whitebox fuzzer that concretely executes the
instrumented program with constraint-solving capability on generated input. It
guides the exploration by \textit{incremental coloration}, including static
reachability analysis and dynamic feasibility analysis. We evaluated ColorGo on
diverse real-world programs and demonstrated that ColorGo outperforms AFLGo by
up to \textbf{100x} in reaching target sites and reproducing target crashes. | [
"cs.CR",
"cs.SE"
] |
# 1 Introduction
The increase in data accessibility and complexity of organisational information systems has given rise to a persistent problem of information silos [43] wherein knowledge workers have to navigate across different systems and forms of information artefacts to perform their tasks. The diversity of information artefacts, ranging from physical to digital, underscores their crucial role in the organization and manipulation of data and knowledge. For process knowledge workers, including process analysts, process users and process modellers, business process models and business rules are two commonly used artefacts. These two artefacts enable process knowledge workers to represent complex business requirements, as well as implement and improve processes. A notable illustration of the inherent complexity can be observed in the workflows of business analysts and process analysts. For instance, in the scenarios of company mergers and restructurings multiple variants of business processes and business rules need to be consolidated into a single process to eliminate redundancies and create synergies [23]. Even in a business-as-usual environment, if the artefacts are presented separately (i.e., related business rules often may not be part of the business model [50]), they may cause a disconnect in shared understanding, and potentially result in conflicts, inefficiencies and even compliance breaches [44,47,50]. The challenges facing process knowledge workers involve more than just accessing and understanding information from diverse artefacts. Advanced cognitive and analytical skills are needed to ensure comprehensive understanding, including effectively foraging and processing various artefacts from business process models and business rule repositories to achieve specific objectives. These foraging and processing processes involve seeking, filtering, reading, and extracting information and iteratively developing a mental model that serves as a foundation for comprehension and performance [31].
To overcome these challenges, previous studies have underscored the necessity of comprehensively integrating business rules into business process models [44], and various forms of integration have been proposed (e.g. diagrammatic integration, integration through text annotation, and linked rules). However, prior research has primarily focused on novice workers using students as proxy [10,11,44], which offers limited insights into the sensemaking processes that expert knowledge workers engage in. Yet, a deeper understanding of how expert knowledge workers forage and process information in integrated process models is key to adequately supporting the development of new process-oriented tools and systems that can more effectively support decision-making processes.
Drawing on the foundational theories of sensemaking and cognition [31,39], our research seeks to delve into sensemaking practices on how knowledge workers forage and process information in the varied forms of integrated representation of business process models and rules. It aims to unearth the underlying factors that drive these various sensemaking behaviours. To this end, we present the outcomes of empirical research conducted within a controlled laboratory study setting to investigate how process knowledge workers perform tasks based on integrated modelling of business processes and rules (i.e., using text annotation, diagrammatic, and linked rules). Specifically, we investigate experts’ sensemaking practices in information foraging and processing phases, and compare the findings with existing literature. By leveraging verbal protocol analysis [16] with eye-tracking metrics, we reveal empirical insights into knowledge process worker behaviours in information foraging and processing phases. This exploration paves the way for offering personalized support mechanisms to process knowledge workers through a deeper understanding of sensemaking practices in various settings and improved development of new process-oriented tools and systems.
In the following sections, we first review the research background of sensemaking and cognition as a lens to study process model understanding. Section 3 introduces our study design and the data analysis methods. Section 4 presents the results and discussion, and finally Section 5 summarizes the contribution of the paper, limitations of the study, and future extensions of this work.
# 2 Literature Review
# 2.1 Sensemaking and Cognition
Over the decades, sensemaking has been an active area of study in diverse disciplinary backgrounds, from collective organizational contexts (e.g., [45,22]) to individual settings (e.g., [27,35]). More recently, there has been an increased focus on understanding how sensemaking operates in the era of increasingly complex information artefacts (e.g., [9,31,46]). Despite the differences between the proposed models from the literature, all attempts describe the iterative process of individual or collective construction of knowledge. A number of models have been proposed to capture sensemaking as multiple loops [31,46], which consider a fundamental pattern between the interactions of information foraging and processing to schematize the knowledge into a mental model. For example, the Representation Construction Model [31] has two major loops of sensemaking: (1) the information foraging loop, which includes seeking, filtering, reading, and extracting information processes, and (2) the information processing loop, which includes iterative development of representational schemas to provide a basis for understanding and performance.
The individual settings of sensemaking are more relevant to our work, where the focus is on cognitive mechanisms that underpin individual sensemaking. Cognitive constructs of attention and memory have a natural and strong affinity to the two phases in sensemaking models, and cognitive load theory [39] provides proven mechanisms through which these constructs can be operationalized [8,39]. For example, attention and search behaviour have been measured through eye-tracking devices, which can capture data on visual scanning (eye movement) and attention (eye fixations) [12]. This data, in turn, can be used for various behavioural measurements, such as cognitive load, visual association, visual cognition efficiency, and intensity [6,32].
To the best of our knowledge, existing sensemaking studies are focused on qualitative or perceptive measures with limited use of behavioural and performance measures. Additionally, prior work that used quantitative analysis for studying sensemaking processes was limited to novice workers using university students as proxy [10,11,44]. The study of sensemaking practices of novices only provides limited understanding and is not fully reflective of the settings in which these sensemaking processes are undertaken. Hence, we make use of quantitative (eye-tracking devices as observation tool) methods to guide the exploration of qualitative (Cued Retrospective Think-Aloud (CRTA) interviews of expert users [41]) in a controlled laboratory study. This combination of methods provides novel and objective means to capture and expose sensemaking behaviours and explore the interactive process of how expert knowledge workers forage and process information in various modelling integration approaches.
# 2.2 Integrated Modeling of Business Processes and Business Rules
Our study considers the specific context of business process and business rule modeling – two complementary approaches for modeling business activities, which have multiple integration methods [21] to improve their individual representational capacity. The integration methods can be categorized into three approaches with distinct format and construction, namely: text annotation, diagrammatic integration, and link integration [11]. Text annotation and link integration both use a textual expression to describe the business rules and connect them with the corresponding section of the process model. Text annotation integration is a way of representing business rules in business process models by adding textual descriptions of rules – e.g. in BPMN, using the BPMN text annotation construct. With link integration, visual links can explicitly connect corresponding rules with the relevant process section. Diagrammatic integration relies on graphical process model construction, such as sequence flows and gateways, to represent business rules in the process model. Each of these methods has strengths and weaknesses, and thus a potential impact on a knowledge worker’s understanding of a process [44]. Despite the use of rigorous quantitative analysis in related works [10,11,44], we found that quantitative analysis alone was not sufficient to fully capture the nuanced behaviors involved in sensemaking. This observation underscores the necessity for rich qualitative insights to thoroughly understand sensemaking behaviors.
# 2.3 Process Model Understanding
Prior research has focused on a variety of factors that affect the understanding of a process, including process model factors [14] and human factors [26]. Process model factors relate to the metrics of the process models, such as modularization [33], block structuredness [48], and complexity. Studying the impacts of these involves investigation of the number of arcs and nodes [26], number of gateways [33], number of events [34], number of loops [14], and number of concurrencies [25], length of the longest path [25], depth of nesting [17], and gateway heterogeneity [25]. Human factors relate to the factors of process model users, such as individual’s domain knowledge [40], modeling knowledge [14] and modeling experience [26].
To evaluate cognitive engagement and improve comprehension of process models, the think-aloud approach has served as a pivotal means to understand user interactions and cognitive processes during task completion [5,18,19,49]. The approach has several methods, including Concurrent Think-Aloud (CTA), Retrospective Think-Aloud (RTA), and Cued Retrospective Think-Aloud (CRTA) [41]. The CRTA approach integrates elements of RTA and CTA, and mitigates the memory-related limitation in RTA and potential disturbance on task completion of CTA. By ensuring participants’ natural interaction patterns are preserved during completing the tasks as well as providing them with concrete and taskspecific stimuli cues (e.g. screen recording of completing the task), CRTA can facilitate participants to recall their thought process more accurately. In addition, cognitive load and visual cognition have been used as measures of process model understanding [26], with the use of eye tracking technology to capture eye movement and gaze patterns [6,29,30,37]. For example, researchers used eye tracking to investigate the visual cues of colouring and layout with performance in process model understanding [30], and the impact of the task type (local or global) on process model comprehension during information search and inference phases [37], as well as with the use of RTA to explore reading patterns and the strategies in DCR-HR [2].
Upon reviewing the literature, sensemaking emerges as a promising new perspective that has yet to be fully explored in the context of process model understanding. In light of these considerations, our study leveraged eye-tracking data as an observational tool and a cue during interviews, guiding participants to reflect on their recorded gaze behaviors, thus enhancing the recall and depth of their thought processes. This integration facilitates and advances a deeper qualitative exploration into the complex sensemaking behaviors of participants, offering enriched insights into their cognitive engagement with integrated process models, as “the best cues will likely come from the participants themselves” [7].
# 3 Study Design
In this study, we used a laboratory study method [3] and a between-subject design with purpose-built platforms. To capture the insights of sensemaking behaviors, we first collected eye-tracking data while participants performed the laboratory study. We did this to develop a “cue” to be used in the main method reported on in this paper — the cued retrospective think-aloud method (see Section 2.1), which involved using a semi-structured protocol for each expert. Cued with eye gaze movement recordings, CRTA enabled the participants to verbalize and explain their sensemaking practices and strategies during information foraging and processing. This approach facilitated an in-depth qualitative analysis of the participants’ information needs and intentions, as well as the challenges and difficulties they encountered, providing valuable insights into their cognitive processes.
# 3.1 Participants
Our study is specifically focused on experts. All participants in our study were academic researchers with prior experience as practitioners in business process management. They were from the information systems and computer science disciplines in two universities and a research institute. They were required to have both prior work experience as practitioners and research experience using BPMN of over four years. In line with acceptable numbers of participants in qualitative studies [19,24,28], 15 experts participated in this study, with 5 participants engaging in each integration approach.
# 3.2 Laboratory Study Materials and Procedure
While the main focus of this paper is the rich qualitative data offered by CRTA, the laboratory study data also consists of a pre-study questionnaire, eye tracking data, task performance and log data, and a post-study questionnaire using the NASA-TLX [20] to collect perceived task load. Our business process modeling language of choice was BPMN 2.0, due to its wide adoption and its standing as an international standard. The scenarios of the model and rules originated from a car insurance diagram included in OMG’s BPMN 2.0 documentation5. For expert groups, we used the three integration approaches (one for each treatment group). We ensured, through multiple revisions, that we created informationally equivalent models for all three integration approaches. Due to space limitations, the models cannot be included in the paper, but the complete laboratory study instruments are available for download6. We ensured all confounding factors were constant. In particular, the model was adjusted to ensure consistency of format for each of the integration approaches (i.e., using text annotation, diagrammatic, and linked rules). In total, there are three questions in the laboratory study. The questions differed in terms of the modeling constructs a participant will have to review to answer them. The model constructs for Q1 and Q2 include sequence and AND gateways, while Q3 include sequence, AND gateways, and XOR gateways. This diversity allowed us to gain further insights into the relationship between integration approaches and task complexity (reflected by the coverage of the model required to answer a particular question). Moreover, questions differed in terms of their span and each question is related to different process areas and business rules (a participant may have to navigate only a specific section of the process model to answer the question for Q1 and Q2 (local question), or the whole process for Q3 (global question)) (see Section 3.3 on details of process areas related to each question answer).
The complete procedure consists of six steps, described as follows. (Step 1) We first used a screening form to recruit potential participants. Then we provided a pre-study questionnaire to eligible participants. To ensure group balance, we used a pre-study questionnaire to capture participants’ prior knowledge and basic demographics, which we used to distribute participants across groups to avoid accidental homogeneity. (Step 2) We set up the laboratory study environment with eye tracking device. For each participant, we provided training on the instructions of the laboratory study, eye tracker device and calibration. (Step 3) After calibration, participants were first provided with a BPMN tutorial and were then offered a model using one of the three rule integration approaches.
We encouraged each participant to ask questions during the tutorial session, to ensure their readiness for the laboratory study. (Step 4) In the laboratory study, all participants had to answer 3 questions. We did not set a limit on the laboratory study duration nor a word count limit on participants’ answers. We recorded the gaze movements while experts were working on the tasks. (Step 5) Upon task completion, participants were provided with a post-study questionnaire using the NASA-TLX [20] to collect perceived task load. (Step 6) For each participant, we replayed the recording of the task completion process with their gaze movements, and we asked the following semi-structured questions to understand their behaviours in line with our objectives of the study. (1) Would you summarize the process for answering the questions? And explain why you worked in this way. (2) Please watch the replay video, can you identify the point in the video when you knew the answer (indicate the point in time when identifying the answer)? Explain what made you realize the answer, giving specific details (e.g., which activity/rules make you feel you know the answer?) and how confident are you in your answer to this question? (scale 1-5) (3) Regarding the difficulty level of these three questions, did you find any questions that are easier or harder? (4) Overall, can you comment on difficulties or challenges you experience with regard to understanding the model and rules when answering the questions?
# 3.3 Setting
The eye tracking data was collected through a Tobii Pro TX300 eye tracker $^ 7$ , which captures data on fixations, gazes, and saccades with timestamps. The laboratory study was conducted in full-screen mode and complete models were displayed without the function of zooming in or scrolling8. The visibility of the text and diagrams was examined carefully, with all text and diagrams being clear from a distance of 1.2 meters. All laboratory studies were conducted in the same lab with the same eye tracker.
# 3.4 Analysis Approach
To uncover the significant differences in knowledge workers’ behaviour between the three representations, we conducted the analyses on verbal protocol collected in cued-retrospective think-aloud interviews and complemented them with insights derived from the eye-tracking data. In line with sensemaking foundations (please see Section 2.1), we segment the laboratory study into two phases, namely the information foraging phase and the task-specific information processing and answering phase. The information foraging phase for a particular task commences when the participant first fixates on the laboratory study screen, and the information processing phase commences when the participant starts to type the answer in the question area for the first time.
Verbal Protocol Analysis. To analyse the user insights collected from the cued-retrospective think-aloud interviews, we used NVIVO $1 2 ^ { 9 }$ for verbal protocol analysis and we followed the procedure outlined by Gioia et al [16], to show the “dynamic relationships among the emergent concepts that describe or explain the phenomenon of interest and one that makes clear all relevant datato-theory connections” [16]. The verbal protocols were transcribed and provided to two independent coders for analysis to reduce bias in our analysis. We started the analysis following an inductive approach and then transitioned into a more abductive approach, to ensure “data and existing theory are now considered in tandem” [16]. The analysis process included four steps. In the first step, we aimed to “let the data do the talking”, so we inductively analysed the verbal protocols to identify key processes and strategies on how participants foraged and processed information to solve each task. Then we conducted open coding [38] based on the key themes and foci that emerged from the first step, and then the authors discussed the codes with the independent coder iteratively until an agreement on the first-order coding was reached. In the third step, we aimed to answer “whether the emerging themes suggest concepts that might help us describe and explain the phenomena we are observing” [16]. Using existing theories of sensemaking and cognition combined as a theoretical frame of reference, we reflected on the way in which the second-order concepts represented or related to knowledge workers’ sensemaking behaviours in information foraging and information processing. We reviewed the existing literature and theories to analyse and develop concepts that explain the data and we did not allow prior theoretical concepts and assumptions to restrict our interpretations.
Eye Tracking Analysis. In this study, the eye tracking data served as a preliminary observation tool that guided deeper qualitative exploration in CRTA interviews into the sensemaking behaviors of participants, where participants could reflect on their recorded gaze behaviors, enhancing the richness of the qualitative data by connecting it to observable actions. To reveal the insights on nuanced differences in how experts allocated attention across three types of integration approaches during tasks of varying complexity, we conducted complementary analyses of insights from eye tracking metrics.
We analyzed the efficiency of locating the answer during the information foraging phase on relevant areas (a measure of visual association of Areas of Interest (AOIs) [6]). The timestamp of locating the answers is indicated by each expert participant during the cued-retrospective think-aloud session. The duration of locating the answer begins when the participant starts each question until the timestamp when they indicate they found the relevant information on the model area. The AOIs were used for analysis and were invisible to participants. As shown in Fig. 1, for models featuring text annotation and diagrammatic integration, the screen was divided into 8 areas: seven different process model areas and a question area (which showed one question at a time). For models featuring link integration, there was an additional ninth area for rules, which displayed the corresponding business rules when participants clicked on each “R” icon in the model. Each question answer is related to different process areas. For local questions Q1 and Q2, the answer is related to area 6 and area 2, respectively. For Q3 (global question), the answer is related to areas 1, 5 and 7.
Fig. 1. Visual design in laboratory study – text annotation integration approach
# 4 Results and Discussion
As discussed in Section 3.4, we followed the analysis procedure outlined by Gioia et al [16]. Our findings shown in the data structure (please view Fig. 2) revealed that expert knowledge workers have distinct sensemaking loops and strategies during exploratory information foraging, focused information foraging and tasksspecific information processing and answering processes. To suit different information needs, they optimized and switched strategies during these processes. We also found these sensemaking practices were not mutually exclusive, and usually were interrelated and embedded with each other and could be used concurrently based on the different information needs. In addition, all participants expressed that using their gaze recordings was an effective approach for them to remind and explain their attention.
Exploratory information foraging. All the experts (15/15) indicated that at the beginning, they would quickly start scanning either from the business process model (structure-oriented) or the task (task-oriented) to understand the meta-information about the context and structure of the business process model and question. E.g., “I think, before I even like really worked on the first question. I’ve had the whole process to get an overview. And then I started going module by module (each activity group).” (P15). Participants explained that they wanted to understand the big picture from the meta-information before zooming into details. This insight is also supported by all experts presenting similar attention on the model area during the first information foraging process in Q1 irrespective of the integration group10.
Fig. 2. Data structure of expert process knowledge workers’ sensemaking practices
Focused information foraging. All experts (15/15) indicated that after they understood the meta information they would focus on the details, and we observed that they used various strategies during interpretation, analysis, selection, and evaluation to forage the information on the process model and business rules.
Interpretation and analysis. The findings revealed several main strategies for gathering and interpreting information, encompassing task-driven, modeldriven, and structure-relations driven approaches during the process of foraging information. In the first task (Q1, local question), the majority of the participants (11/15) indicated that they first read through the task and abstracted the key information from the question (task-driven strategy). E.g., “So, in general, I need to look at the question first to locate what kind of information $I$ need to process” (P12). They motivated this preference for a task-specific strategy and for locating keywords by the complexity of the process model and the capacity of their memory. They argued that they would forget the details of the process model if reading the model first. E.g., “So, I mean, the diagram is relatively complex and a lot of information is out there. If we look at the diagram first, it is possible that we read it from left to right, after we read it, we might even forget what we have read on the left part. So there will be a messy and catastrophes so that is why I choose to identify the key words. Yeah, the verbs are usually very important in phrases. So I believe that we can work out the question successfully.” (P6). While going through the details of the process model, the participants following a task-specific strategy mentioned that they considered the relevance between the task, process model, and business rules (structure relations-driven) (e.g., “I just looked down on the question, see whether there is a relation or not and then I will continue” (P5). In addition, they revisited the tasks after they read through the whole process to remind them of the tasks.
We also observed the model-driven strategy in Q1 during focused information foraging, i.e., four participants mentioned they would read through the model first and then read the task. They explained that they wanted to get familiar with all the details in the process model and then focus on the question. “I think is like that like a habit. Or do things like this, it’s quite common like you don’t familiar with something, you need to get through the total thing, and you need to get familiar with all the diagram, and then you need to focus on the question, what the questions is asking about.” (P8). All of them mentioned after they read the task, they considered the relevance between the model and the tasks (structure relations-driven).
Despite the differences in strategies (either tasks-specific or model-driven) during the interpretation and analysis of focused foraging information, all experts (15/15) reported that they read through all details when understanding the model and business rules in Q1. E.g. “I’m going everywhere right now because it’s the first time that I’m looking at this. And this is the first question. So I feel like I have to go through everything almost at this point and understand what is happening.” (P5). Meanwhile, while comprehending the model, all experts (15/15) indicated that they would both abstract the key part of the information from the model and remember its logic and structure (structure-relations driven strategy). E.g., “I just remember the key parts of the model, so for example, this is not a key part, here, so I know that the key points are these three, so most of the work is done here.” (P5). The insights align with the findings of eye tracking data, where all the experts had similar efficiency in locating the answer for the local question (Q1)11.
Selection and evaluation. To select and evaluate the relevant information in different tasks, all experts (15/15) indicated they used different strategies on each task based on their information needs and work habits. These strategies include keyword location, the process of elimination, reverse tracking, and comparison of information and structure between model and task, and within the process model. They expressed that using process elimination, identifying the end of the relevant area, and reverse tracking information can facilitate them downsizing the process to locate the relevant information. Determining the end of the relevant area instead of reading through the whole model again is the preference in answering local questions. They also would travel reversely and track the required sequence flows, activities and rules (e.g. “I will go back to graph reversely to see what kind of path I need to go through to get that result.” (P12)). They explained that these strategies enabled them to better locate relevant information areas and eliminate irrelevant parts, e.g., “I think I like work with the process of elimination... So that helps you look at smaller number of things, rather than going back again, and again, on the full chart. Every time if I go through every step, it will take a lot more time.” (P13).
In Q2 and Q3, we noticed they started to optimize their strategies in foraging and processing information based on different information needs for different integration tasks. All experts (15/15) used a task-specific strategy when working on Q2 and Q3. They expressed that they read questions first and adopted different strategies based on the evaluation of the tasks. E.g., “it’s a different question. It’s a different question requires a different strategy.”(P3). After interpreting and analysing task types, all experts expressed they used a different strategy for local and global questions to select and evaluate the relevant information based on their prior experience in practice. In local questions (Q1 and Q2), the dominant strategy is from end to start (reverse tracking), but in global question Q3, all participants expressed they worked from start to end. E.g., “actually, yes, question one and two are similar. Question three is a different approach. Question one and two, I work from end to start, but question three I work from start to finish.” (P3).
All of the experts (15/15) expressed that, since they understood the process model in Q1, they directly used the strategy of targeting the keyword to locate the relevant information on the model and rules from their memory in Q2 and Q3. They further stated that they read the model and business rules more efficiently and selectively to evaluate and target the information based on the specific information needs required from the question, instead of foraging for all details.
Based on expert participants’ insights, we assume that the initial focused information foraging during the process model and rule understanding played a vital role in building a mental model in their working memory, which will directly influence their attention when completing the following tasks. E.g.,“for this one, I already knew the process a little bit. And so I could directly search for keywords, determine eligibility for the car, here eligibility, it’s quite in the centre of the screen.” (P15). All participants expressed that while locating the relevant area to answer questions, they assessed the information in multiple rounds of comparison and evaluation to ensure their answers were relevant. For global question Q3, all participants (15/15) mentioned that evaluating the task made them realise the answer requires more areas and rules when compared to the other two local questions, so they went through the whole process model and rules. E.g., “Because here the question was not concerned with one specific outcome like determine eligibility, but it was concerned with overall the whole process, what is the minimum, so I had to go through the whole process.” (P15).
As task complexity increases, all experts presented similar efficiency in locating the answer in the global question (Q3)12.
Task-specific information processing and answering. All experts (15/15 indicated that once they located and confirmed the relevant information in each question, they started to type the answers. We observed during this process of answering and assessment, all experts focused on the identified relevant area to synthesize and evaluate relevant information and integrated the information with any prior knowledge or external knowledge. All participants expressed that when they evaluated their answer, they would only target some relevant areas directly based on their memory, but they would not read through all models or rules to confirm again, and they only read it based on the information needed to verify and ensure that they did not overlook anything (e.g., “for the minimum number, you kind of have to go through the whole process...it’s based on the situation so $I$ looked into each rule again quickly, so to make sure that I don’t overlook anything and then I will short case.” (P15). | A range of integrated modeling approaches have been developed to enable a
holistic representation of business process logic together with all relevant
business rules. These approaches address inherent problems with separate
documentation of business process models and business rules. In this study, we
explore how expert process workers make sense of the information provided
through such integrated modeling approaches. To do so, we complement verbal
protocol analysis with eye-tracking metrics to reveal nuanced user behaviours
involved in the main phases of sensemaking, namely information foraging and
information processing. By studying expert process workers engaged in tasks
based on integrated modeling of business processes and rules, we provide
insights that pave the way for a better understanding of sensemaking practices
and improved development of business process and business rule integration
approaches. Our research underscores the importance of offering personalized
support mechanisms that increase the efficacy and efficiency of sensemaking
practices for process knowledge workers. | [
"cs.IR",
"cs.HC",
"cs.SE"
] |
# 1 Introduction
Transformer-based Large Language Models (LLMs) have revolutionized natural language processing, excelling at tasks ranging from text generation and translation to question answering and summarization. Despite these advances, a fundamental understanding of how these models store and recall information, particularly factual or structured knowledge, remains limited. Clarifying these mechanisms is crucial for optimizing model performance and enabling efficient, real-world deployment. One impactful example is healthcare, where transformer-based models could assist clinicians through wearable devices such as smart glasses or watches (Gupta et al., 2024; Wu et al., 2024; Balloccu et al., 2024). Due to privacy and reliability, the preferred system would be a local on-edge, requiring minimal computation but with the capacity to memorize all relevant facts in the specific healthcare area.
Recent theoretical and empirical studies have sought to quantify the memorization capacity of transformers. Kim et al. (2023) introduced mathematical bounds for memory capacity, demonstrating that transformers could memorize $O ( d + n +$ $\sqrt { n N } )$ parameters, where $d , n , N$ correspond to embedding dimensions, dataset size, and model size, respectively. Additionally, Kajitsuka and Sato (2024) proved, that $\tilde { O } ( \sqrt { n N } )$ parameters are not only sufficient, but also necessary for some types of transformers. Mahdavi et al. (2024) extended this work by analyzing the effects of multi-head attention on memorization, revealing the interplay between architectural components and the model’s ability to store and recall information. The experiments in Härmä et al. (2024) used randomly generated sequences of numbers to evaluate the memorization capabilities of the transformer models on unstructured data. Most capacity studies use synthetic datasets because accurate capacity measurement becomes very difficult in the case of uncontrolled free text content.
The experiments reported in the current paper use sequential data generated from the knowledge graph, which, while controlled, has some of the hierarchical and relational complexity of realworld text content. More specifically, small-scale decoder-only transformer models (Brown et al., 2020) were trained to memorize structured sentences derived from the Systematized Nomenclature of Medicine (SNOMED) knowledge graph (KG) (El-Sappagh et al., 2018), a comprehensive medical ontology, which encodes semantic relationships between medical concepts, offering a rich dataset to explore memory mechanisms under realistic conditions. Exact memorization of selected relations would be critical, for example, in the healthcare use cases described above. Our aim is not to generalize to all LLMs or domains, but rather to offer a practical, reproducible framework for measuring memorization on realistic KG data. The relative task simplicity is by design: more complex or less-controlled tasks would conflate memorization with generalization, making it difficult to draw clear, interpretable conclusions about model capacity.
To measure the memorization of the transformer models, the Maximum Attainable Capacity (MAC) method was used. It evaluates the practical limit of samples a model can retain when trained on a large dataset. Our approach leverages structured datasets consisting of static triplets and longer sequences simulating graph traversal paths, capturing relationship patterns between concepts. These datasets allowed us to empirically analyze how model architecture, training configurations, dataset size, and complexity influence training dynamics and final memorization performance.
This work serves as a proof-of-concept, showing that structured data in the real world can evaluate memorization in practice. Firstly, we introduce a reproducible pipeline for converting large ontologies into tokenized datasets suitable for memorization studies. Secondly, we evaluate how transformers’ architecture influences capacity, building on prior theoretical insights. Lastly, we highlight cases where models fail to memorize all samples despite sufficient capacity, motivating future studies into training dynamics and error patterns.
Our findings do not aim to establish universal scaling laws or generalization behavior but to provide a reproducible framework for studying memory-limited models under realistic constraints.
# 2 Methods
# 2.1 Data
# 2.1.1 Data Source and Preprocessing
To evaluate transformer memorization and retrieval capabilities, we used SNOMED KG, which encodes medical concepts and their relationships as nodes and edges of a graph. It was accessed using the owlready2 library (Lamy, 2017), filtering out non-informative or overly specific properties to ensure meaningful relationships. Unlike graph transformers that use GNNs (Shehzad et al., 2024), we focus on a universal architecture, transforming the graph into (1) triplets (concept-property relationships, see 2.1.2), and (2) sequences, simulating graph traversal paths (see 2.1.3).
# 2.1.2 Triplets Generation
A dataset of the form (Concept, Property, Related Concept) was created, capturing semantic relationships in the SNOMED KG (see Figure 1A). It involves graph initialization and the exclusion of non-informative properties, followed by the triplets extraction: for each concept in the KG, all allowed properties and their associated related concepts are retrieved. If multiple related concepts existed for a (Concept, Property) pair, one was randomly chosen to ensure uniqueness.
# 2.1.3 Sequences Generation
The sequence generation simulated graph traversal to encode both local and global structures (Figure 1B). The extended graph excluded banned properties and added reverse edges for bidirectional traversal; labels were standardized. Sequences of the form $( \mathsf { n o d e } _ { 1 }$ , $\mathsf { e d g e } _ { 1 }$ , $\mathsf { n o d e } _ { 2 }$ , . . . , $\mathsf { n o d e } _ { n - 1 }$ , $\mathsf { e d g e } _ { n - 1 }$ , $\mathsf { n o d e } _ { n }$ ) were generated by selecting a random starting node, creating a subgraph by breadth-first search (BFS) with a set depth and randomly traversing unique edges. Every time, check that the same (node, edge) pair is not already visited before. The traversal stopped once it reached a pre-defined random edge limit or when no valid neighbors remained. This process was repeated for the desired number of sequences.
# 2.2 Transformers training
Decoder-only transformers with variations in architecture were implemented. Each unique element (node or edge) was assigned a unique integer (ensuring that repeated elements were consistently tokenized), followed by learned positional encoding. The architecture included an embedding layer to map tokenized inputs into continuous vector representations, transformer decoder layers with multihead attention mechanisms, and a linear output for token prediction.
For all experiments, the task was to predict a concept based on the previous concepts and relations. The accuracy was evaluated as: #correc_t_predictions – the proportion of correctly predicted related concepts to the total number of predictions. Additionally, Maximum Attainable Capacity (MAC) was used as a more suitable metric to measure the capacity of the model. MAC is a computationally efficient alternative to the Maximum Library Size (MLS) method. While MLS involves iteratively training models on progressively larger datasets to determine the largest library size that can be
# (A)1. Graph and exclusion list initialization
# 2. Go through every triplet, check conditions
1-a-3 1-a-2 }\`1-a\` occurs 2 times, choose randomly
1-c-4 } exclude, since \`c\` is banned
1-b-5 } include, this triplet is okay
35-d-6 } these 2 are fine: <node>+<prop> are unique
3. Final list
1-a-2
1-b-5
3-d-6
5-d-6
(B)1. Initial graph: banned properties deleted
2. Extended graph: reversed properties added
3. Select subgraph, using BFS: - from randomly selected node (here - \`1\`), - depth (amount of hops) here = 2
# 4. Create sequence by traversing subgraph:
remember to check the uniqueness of <node $^ +$ edge>
Stop when:
- # edges in sequence $>$ predefined limit
- no valid neighbors
Figure 1: Algorithms of triplets (A) and sequences (B) data generation.
do not include: limit of length 5. Final sequence: \`1\` - \`a\` - \`2 - \`reversed d\` - \`3\` - \`reversed d\` - \`4\` fully memorized, MAC is measuring the maximum number of samples that a model can memorize, provided with a large library. Previous research has shown a strong correlation between MLS and MAC (Härmä et al., 2024), making MAC an effective and time-efficient choice for this study.
To minimize the effect of randomness, each experiment was repeated 10 times for the first two setups and 3 times for the third and fourth setups, reporting the mean and double standard deviation. Training accuracy was evaluated at every other epoch for all configurations.
Models were implemented in PyTorch v1.13.1+cu117 (Paszke et al., 2017) and Transformers v4.30.2 (Wolf et al., 2019), trained with cross-entropy loss and Adam optimizer (learning rate 0.001) (Kingma and Ba, 2017). All other were default unless specified. In total, 546 models were trainded on NVIDIA A100 GPU with 16GB memory, totaling approximately 3, 100 hours of training time. Model sizes ranged from 2.9 to 44.5 million parameters, primarily varying with embedding size and layer count, but also influenced by vocabulary size.
# 2.3 Code availability
All code pertinent to the methods and results presented in this work is available at: https: //github.com/um-dacs-nlp/capacity/.
# 2.3.1 Triplets memorization
Three experimental setups were designed for the triplets dataset. In all cases, the prediction of a related concept was based on a unique conceptrelation pair, making correctness unambiguous.
In the first setup, dataset sizes ranged 50,000 to 100,000 samples. The model architecture consisted of a single transformer layer (embedding size 128, 4 attention heads, Rectified Linear Unit (ReLU) activation function (Agarap, 2019), batch size 64, 500 epochs). This setup focused on evaluating memorization performance under a fixed architecture while varying dataset sizes.
The second setup varied both architecture and activations: transformer layers (1, 2, or 4), and activation functions (ReLU, Gaussian Error Linear Unit (GELU) (Hendrycks and Gimpel, 2023), Randomized Leaky Rectified Linear Unit (RReLU) (Xu et al., 2015), and Softmax (Boltzmann, 1868)), with dataset sizes of 50,000, 70,000, or 100,000. To ensure fair comparisons, the total number of model parameters was kept constant across configurations by adjusting the embedding size (d_model parameter in PyTorch implementation of Transformers) proportionally to the number of
layers, using the formula: embedding_size $\ b =$ base_number_of_parameters with a base n_layers
number of parameters of 128. This approach en
sured that variations in performance could be at
tributed solely to architectural differences rather
than changes in the total parameter count. For this
setup, however, the batch size was increased to 128
and models were trained for 1000 epochs, since it
was required for achieving a plateau.
The third setup examined the interplay between model depth and embedding size, while keeping other hyperparameters the same: number of layers was set to 1 or 2 and base numbers of parameters for embedding sizes varied in $\{ 1 6 ; 3 2 ; 6 4 ; 1 2 8 \}$ (calculated as in the second experiment), with dataset sizes of 1,000, 10,000, 50,000, and 100,000. Only the Softmax activation function and 4 attention heads were used. To ensure fair comparisons, the configurations were designed to evaluate the impact of increasing the embedding sizes and depth of the model on the performance of the memory. The total parameter count was recalculated for each configuration using the same formula as in the second experiment. For this setup, the batch size was 128 and the training lasted 500 epochs.
# 2.3.2 Sequences memorization
The sequence memorization dataset used the same tokenization process as triplets, with additional steps for standardization: zero-padding at the end to a uniform length served both as a filler and a marker for sequence termination. A node mask was applied to distinguish the node from edge tokens for metric computation. Notably, each node was predicted based on all preceding tokens in the sequence, meaning the last node in a sequence benefited from the most context. This setup provided deeper insights into the transformer model’s ability to handle more structured data and its patterns.
The experimental setup was consistent with the triplet setups: embedding size 64, 4 attention heads, batch size 128, and 400 training epochs. Models with 1, 2, or 4 layers were tested, using RReLU and Softmax activations. Dataset sizes were 20,000, 50,000, and 100,000 sequences, each containing 4–6 nodes (3–5 edges), built from subgraphs extracted via BFS with a depth of 5 hops.
For this experiment, accuracy and capacity were measured similarly to the triplet-based experiments, with slight adaptations to account for the sequential structure of the data. Accuracy was defined as the proportion of correctly predicted tokens at node positions to the total number of node predictions in the dataset and is equal to all nodes across all sequences, excluding starting points. The total correct predictions also represent the MAC.
# 3 Results
# 3.1 Dataset Size Influence
Figure 2 illustrates capacity and accuracy trends across dataset sizes in the first setup. Smaller datasets learn quickly, with both metrics rising rapidly in the first 5–6 epochs and reaching maximum capacity by epoch 20. Larger datasets improve little in the first 15 epochs but later reach higher final accuracy and capacity. This suggests a threshold existence ( $\left( \sim 7 0 { , } 0 0 0 \right)$ rows for this case), beyond which the training process changes and a lot more epochs are required for full memorization.
The final accuracy and capacity (Table 1) indicate that although smaller datasets initially achieve higher accuracy, their capacity remains well below the size of the dataset (e.g., 50,000 rows yield only 46,811 samples). In contrast, larger datasets, such as 100,000 rows, significantly improve memorization (86,776 samples), highlighting the model’s ability to use more data. The progressive increase in capacity suggests that the size of the dataset plays a crucial role in optimizing memorization; however, the reasons behind the unlearned data, despite the available capacity, remain unclear.
Table 1: Final results after the full training process for the first setup (data sizes, for triplets dataset).
# 3.2 Architectural Variations Influences
In the second setup, the batch size was increased from 64 to 128, Since larger batch sizes seem to reduce gradient noise and improve memorization. As a result, one-layer models converged faster and reached higher capacity than in the first setup.
Softmax consistently outperformed other activation functions, yielding the highest average capacity, fewer outliers, and more stable training. Notably, four-layer models with Softmax achieved
Figure 2: Trends in training accuracy (upper) and capacity (lower) for the first setup (different data sizes, for triplets dataset). Left: first 30 epochs; right: full training process of 500 epochs.
Figure 3: Trends in training capacity for the second setup (different data sizes, activation functions, and numbers of layers for triplets dataset). Left: first 30 epochs; right: full training process of 1000 epochs.
Capacity during training process for different data sizes, activation functions, and numbers of layers 100 100 2030405060708090Capacity (in thousands) 890 nuEmxbper omf leanyte2r:s; 70 data size (in thousands) 60 1; 570 2; 570 4; 570 50 1; 100 2; 100 4; 100 40 activation function 30 ReLU 20 GELU RReLU 10 10 softmax 0 0 0 5 10 15 20 25 30 0 200 400 600 800 1000 Epochs Epochs
capacities comparable to one- or two-layer models without sacrificing convergence speed (Figure 3), suggesting its scalability with depth.
In contrast, ReLU and RReLU showed moderate performance, but suffered from increased variability and decreased capacity as the layers increased, aligning with the findings of Paik and Choi (2023) and Chen and Ge (2024). These activations exhibited inconsistent learning patterns, with unexpected slowdowns in capacity improvements (Fu et al., 2024). GELU followed a similar trend, though it performed better in the early training stages with larger datasets.
As previously, the size of the dataset significantly affected training: larger sets required longer warmup phases, initially achieving lower capacities than smaller datasets under the same conditions. This suggests the existence of distinct learning phases where improvements depend on architectural depth, dataset size, and activation function.
Furthermore, adding more layers did not improve performance; instead, it slowed training and reduced final capacity, likely due to the simplicity of the dataset, where additional layers do not provide any advantage in capturing patterns. Although deeper architectures benefit more complex datasets (He et al., 2024), their impact can be reduced for data with simple relationships.
# 3.3 Number of Parameters Influence
The third experiment further confirmed that, for simple datasets, learning dynamics depend on embedding size, not the number of layers. Models with the same embedding size but different layer counts exhibited nearly identical accuracy improvement. For instance, as shown in Figure 4, a onelayer model with 16 parameters (embedding size is 16, light green) converged at almost the same rate as a two-layer transformer with 32 parameters (embedding size is 16 per layer, dark blue). Similar trends were observed for models with embedding sizes of 32 and 64, regardless of layer count.
Figure 4: Trends in training accuracy (upper) and capacity (lower) for the third setup (different data sizes, numbers of parameters, and numbers of layers for triplets dataset). Left: first 50 epochs; right: full training process of 500 epochs. Light color corresponds to 1 layer, dark – to 2; number of parameters is a total number for all layers: green – 16, blue – 32, violet – 64, red – 128; embedding size can be computed by dividing it by layer count.
These results highlight that embedding size is the key factor influencing learning speed, while adding layers without increasing embedding size neither accelerates convergence nor improves final capacity. In fact, additional layers often slow the training, as evidenced by the faster growth of accuracy of one-layer models (Figure 4). Smaller embedding sizes further reduced the learning speed, consistent with previous experiments. However, all configurations ultimately reached similar accuracy, highlighting that the simplicity of the dataset allows embedding size to dominate training dynamics.
The final capacity values remained nearly identical across configurations, regardless of embedding size or layer count: with a dataset size of 1,000 samples, the capacities for the one- and two-layer models were nearly accurate. Similarly, at 10,000 and 50,000 samples, one-layer models achieved $^ { 9 , 8 7 4 \pm 1 1 }$ and $4 6 , 9 3 9 \pm 1 0 5$ , while two-layer models reached $9 , 8 7 5 \pm 7$ and $4 6 { , } 9 1 1 \pm 1 1 7$ , respectively. However, at 100,000 samples, a capacity "barrier" emerged. Two-layer transformers with an embedding size of 8 (16 total parameters) showed the capacity drop to $8 5 , 9 3 5 \pm 1 5 3$ , compared to $\sim 8 8 , 2 0 0$ for other configurations, while one-layer models maintained a higher capacity of $^ { 8 8 , 2 4 0 \pm 6 2 }$ This suggests that larger datasets, smaller embeddings, and deeper architectures may introduce limitations due to slower convergence or suboptimal capacity utilization.
# 3.4 Insights from Sequence Datasets
In the fourth setup, model capacity was evaluated by testing its ability to memorize each node in a sequence using the full preceding sequence of nodes and edges (instead of triplets), involving 34,908, 85,972, and 167,965 predictions for datasets of 20, 50, and 100 thousand sequences, respectively.
Compared to triplet datasets, models trained on sequences achieved near-perfect memorization in significantly fewer epochs, plateauing within 150 epochs (Figure 5). The sequential structure likely
168 168 206010012014050701101301508635Capacity (in thousands) 10012014011013015086 Experiment 4: number of layers; data size (in thousands) 1; 20 2; 20 4; 20 1; 5100 2; 5100 4; 5100 365750 ReLU softmax 210 0 5 10 15 20 25 30 0 50 100 150 200 250 300 350 400 Epochs Epochs
sped up learning but increased training time because of more information per sequence. Training showed greater capacity fluctuations over epochs, probably reflecting the increased complexity of the dataset, as sequences encode more intricate patterns than triplets. Nonetheless, models demonstrated exceptional memorization, achieving $1 0 0 \%$ capacity for the 20 thousand sequence dataset and over $9 9 . 5 \%$ for 50 and 100 thousand sequences.
As before, RReLU converged more slowly than Softmax, however, the final capacities were nearly identical for one- and two-layer models: with 100 thousand sequences, RReLU achieved $1 6 6 , 9 3 4 \pm 2 4 3$ (one layer) and $1 6 6 , 9 9 5 \pm 1 1 8$ (two layers), while Softmax reached $^ { 1 6 6 , 9 9 2 } \pm 1 1 0$ and $^ { 1 6 6 , 9 8 5 \pm 9 0 4 }$ , respectively. In deeper models (4 layers), RReLU showed lower final capacities and greater fluctuations $( 1 6 5 , 2 7 1 \pm 1 , 0 6 8$ vs. $1 6 6 , 8 2 5 \pm 3 1 9$ for Softmax). This contrasts with previous findings (Shen et al., 2023), which reported that ReLU outperformed Softmax. The discrepancy may suggest that the relative effectiveness of activation functions depends on the dataset structure and task, warranting further investigation. Nonetheless, even with increased sequence complexity, all models demonstrated rapid adaptation and strong memorization.
# 4 Discussion
This study examined how decoder-only transformer models memorize structured data derived from a real-world medical ontology. Our focus was not on generalization, but on a controlled analysis of memorization, presenting a proof-of-concept framework that bridges theoretical insights and practical evaluation. The complete SNOMED KG contains more than a million relations, integrating diverse fields of medicine (e.g., substances, diseases, and anatomical structures). However, in mobile applications, e.g. small transformers in smart glasses or smartwatches, models must efficiently retain only targeted subsets of information. For example, smart glasses for a cardiac surgeon or a smartwatch with a personal dietary coach might require a domainspecific LLM that memorizes about 10 to 100,000 items. As discussed in Kajitsuka and Sato (2024); Härmä et al. (2024), isolating memorization is a valid objective that reveals how much a transformer can reliably store under different architectural configurations. Our methodology reflects this: we analyze how dataset characteristics and architectural choices affect convergence and memorization, independent of generalization ability or test-time reasoning.
To ensure clear capacity measurement, we deliberately focused on tasks where ground-truth memorization can be unambiguously defined. Increasing complexity would blur the line between memorization and generalization, making interpretation less fair and direct.
# 4.1 Effect of Dataset Structure
Smaller datasets led to faster convergence but lower capacity, whereas larger datasets required longer warm-up but achieved higher memorization. Beyond a certain size, the training slowed significantly, indicating optimization bottlenecks. The fact that some samples remain unlearned even with sufficient capacity points to possible optimization barriers or local minima (see Limitations).
Sequence-based datasets outperformed triplets, achieving near-perfect memorization with fewer epochs. Sequences improved learning by capturing relationships and patterns in the data, though they also led to increased training fluctuations, aligning with Ju et al. (2021). This suggests that longer traversal sequences could further improve memorization in domain-specific medical applications.
The complexity of the sequence datasets was controlled through BFS depth and edge count, allowing capture of both local and global structures from the SNOMED graph (e.g., transitions between anatomical concepts and related procedures), while avoiding trivially linear or purely synthetic patterns. Randomness was balanced with structural constraints such as bidirectional edges and node uniqueness, reflecting how medical knowledge is typically reasoned over in practice (e.g., from symptom to diagnosis to treatment).
# 4.2 Architectural Influence
Embedding size was the main factor in learning speed and capacity, and adding layers often reduced performance, probably due to the data simplicity. This supports the findings that many transformer layers are redundant and can be pruned without loss He et al. (2024). Although we did not directly analyze redundancy, our results suggest that pruning could further optimize capacity.
For larger datasets, smaller embeddings struggled to reach full capacity, particularly in deeper architectures, suggesting that increasing embedding size is more beneficial than adding depth, at least for structured domain-specific memorization.
Softmax led to greater stability and capacity, while ReLU-based activations showed higher variability and performance drops in deeper models, which is consistent with, e.g., Paik and Choi (2023); Chen and Ge (2024). However, this contrasts with Shen et al. (2023), who found ReLU advantageous, emphasizing that activation effectiveness may be highly dependent on the structure of the dataset, the initialization of the model, or the formulation of the task.
For deployment in limited edge devices, our results suggest favoring shallow architectures (1 to 2 layers) with wider embeddings, which consistently demonstrated better memorization per parameter. This configuration offers a practical trade-off for applications where total parameter count and energy use are constrained, such as wearables or lowpower clinical decision support tools. | This paper studies how the model architecture and data configurations
influence the empirical memorization capacity of generative transformers. The
models are trained using synthetic text datasets derived from the Systematized
Nomenclature of Medicine (SNOMED) knowledge graph: triplets, representing
static connections, and sequences, simulating complex relation patterns. The
results show that embedding size is the primary determinant of learning speed
and capacity, while additional layers provide limited benefits and may hinder
performance on simpler datasets. Activation functions play a crucial role, and
Softmax demonstrates greater stability and capacity. Furthermore, increasing
the complexity of the data set seems to improve the final memorization. These
insights improve our understanding of transformer memory mechanisms and provide
a framework for optimizing model design with structured real-world data. | [
"cs.CL"
] |
# Introduction
Recent advances in large language models (LLMs) have led to significant breakthroughs in Text-to-SQL — the task of translating natural language questions into SQL queries [1, 2, 3, 4]. Such advancements have the potential to democratize advanced usage of relational databases by empowering non-expert users to query data intuitively, without manually crafting complex SQL statements. However, deploying agentic LLM-based Text-to-SQL systems in production environments requires more than leveraging state-of-the-art LLMs [5, 6, 7, 8, 9] with advanced agentic inference-time scaling algorithms [10, 11] — it also necessitates an efficient inference serving infrastructure. Specifically, such infrastructure must manage workflows consisting of multiple interdependent LLM inference requests with stringent latency and throughput demands, particularly when deployed on enterprise GPU clusters exhibiting some computational heterogeneity. In this paper, we address the challenge of efficiently scheduling and executing agentic multi-stage LLM-based Text-to-SQL workloads within computational heterogeneous, multi-tenant serving environments.
An efficient serving infrastructure for agentic LLM-based Text-to-SQL workflows is crucial for realworld deployments, particularly in enterprise production environments characterized by stringent servicelevel objectives (SLOs) for query response times. Meeting these SLOs is challenging due to the inherently multi-stage nature of agentic LLM-based Text-to-SQL paradigms, where each user-issued Text-toSQL query triggers multiple interdependent LLM inference requests and subsequent database interactions. For instance, a single natural language query may generate several candidate SQL statements concurrently, each produced via distinct LLM prompts. If execution errors occur, the system iteratively invokes the LLM to refine the query, potentially requiring multiple rounds of corrections — sometimes up to ten iterations. While this multi-stage pipeline is essential to achieve high accuracy, it significantly increases computational complexity, latency sensitivity, and scheduling complexity.
Consequently, a production-level system serving numerous concurrent end-to-end Text-to-SQL queries must effectively schedule each LLM inference request from different stages onto heterogeneous GPU resources, which comprise multiple LLM model instances with potentially varying capacities for processing LLM requests. Such a scheduler must judiciously assign tasks, determining both the allocation of LLM inference requests to appropriate GPU instances and their execution order inside each model instance, to ensure adherence to per-query deadlines while maximizing overall system throughput. Addressing this scheduling challenge is vital: poor scheduling decisions can dramatically degrade response times, negatively impacting user experience and undermining the practical utility of natural-language-driven database interfaces. The scheduling problem is inherently non-trivial, as simplistic scheduling approaches suitable for less complex workloads fail to manage the dynamic dependencies, latency variability, and resource heterogeneity inherent in Text-to-SQL serving.
Optimizing LLM inference request scheduling for agentic Text-to-SQL workloads is particularly challenging due to several interrelated complexities, as we enumerated below:
• LLM inference requests dependencies: The state-of-the-art Text-to-SQL agents inherently involve multiple interdependent stages, each possessing distinct urgency levels. Later-stage tasks, such as the final SQL validation, cannot commence until the preceding stages are completed. Consequently, delays in early inference stages diminish the available slack for subsequent tasks, increasing the risk of end-to-end deadline violations.
• Heterogeneity of LLM inference requests: The LLM inference requests in the Text-to-SQL workflow exhibit substantial variability in different stages, driven by differences in the length of the query prompt and the number of output tokens generated. Such heterogeneity makes execution latencies unpredictable, complicating effective scheduling.
• Heterogeneity of model instance serving capacity: Enterprise production environments commonly leverage heterogeneous GPUs with different computational capabilities1. Consequently, the throughput and latency for an LLM inference request can vary significantly depending on the GPU specifications on which the model instance(s) execute.
• SLO constraints in multi-tenant scenario: Production deployments must handle continuous streams of concurrent end-to-end Text-to-SQL queries from multiple users, each with a corresponding different SLO. Thus, the scheduling strategy must be able to accommodate various priorities in a production environment.
Due to these intertwined factors, simple scheduling approaches, such as round-robin dispatching or firstcome-first-served (FCFS) queues, implemented by popular LLM serving systems, would be ineffective in practice. These naive methods overlook critical task dependencies, latency variability, and GPU heterogeneity, leading to suboptimal resource utilization, frequent SLO violations, and degraded system responsiveness under realistic workloads.
Notice that existing LLM serving frameworks primarily target independent LLM inference tasks, neglecting complex end-to-end workflows with multiple stages and their inherent dependencies. Generalpurpose schedulers typically treat requests in isolation, lacking effective coordination of dependent subtasks or enforcement of end-to-end deadlines. Recent work exploring adaptive batching [16], priority-aware request allocation [17, 18], and GPU load balancing [19] often assumes independence among LLM inference tasks and inadequately addresses heterogeneous workflows. Consequently, scenarios that require task preemption to avoid deadline violations remain unaddressed. The unique combination of multi-stage pipeline dependencies and GPU resource heterogeneity in agentic LLM-based Text-to-SQL serving is still largely unexplored, highlighting a critical gap in current serving infrastructures. To address these challenges, we introduce HEXGEN-TEXT2SQL, a novel framework designed to efficiently schedule and execute agentic Text-to-SQL workloads in heterogeneous GPU-serving environments, supporting multi-tenant queries. Our contributions can be highlighted with the following summarization:
Contribution 1. We propose HEXGEN-TEXT2SQL, a novel framework for agentic LLM-based Text-toSQL serving, guided by a careful analysis of the workflows. The design explicitly accommodates the multi-stage nature of Text-to-SQL pipelines, managing inter-stage dependencies while exploiting parallelism across independent subtasks. By structuring HEXGEN-TEXT2SQL to support the agentic LLM reasoning loop, we enable efficient progression of sequential and parallel inference tasks, reducing idle times between dependent stages. The architecture incorporates a global coordination layer and per-instance execution management to seamlessly orchestrate the flow of tasks across heterogeneous GPU resources. This analysis-driven design establishes a robust foundation for meeting stringent SLOs in multi-tenant environments.
Contribution 2. We design a novel two-level scheduling algorithms that efficiently coordinate LLM inference requests across a pool of model replicas that comprise multiple LLM model serving instances with diverse LLM request processing capabilities. At the global level, a workload-balanced dispatcher assigns each incoming LLM inference task to the most suitable model instance, accounting for processing capabilities and current load on that model instance. The local priority queue in each model instance employs an adaptive urgency-guided queueing policy that dynamically prioritizes tasks based on their remaining deadline slack and estimated execution time. This hierarchical scheduling strategy ensures that urgent inference stages can preempt less critical tasks when necessary, allowing HEXGEN-TEXT2SQL to meet strict per-query SLOs even under heavy multi-tenant workloads. We further employ a simulator-driven approach to determine some key hyperparameters, which makes the algorithms robust across diverse workload patterns. Together, these scheduling innovations enable HEXGEN-TEXT2SQL to fully harness heterogeneous hardware parallelism while achieving consistently low latency and high throughput.
Contribution 3. We conduct a comprehensive experimental evaluation of HEXGEN-TEXT2SQL to demonstrate its performance on realistic agentic LLM-based Text-to-SQL workloads. Our experiments deploy HEXGEN-TEXT2SQL on a heterogeneous GPU cluster and compare it against state-of-the-art LLM serving systems. The results show that HEXGEN-TEXT2SQL consistently meets strict service-level objectives, significantly reducing query response times and improving throughput compared to existing solutions. Specifically, HEXGEN-TEXT2SQL reduces latency deadlines by up to $1 . 6 7 \times$ (average: $1 . 4 1 \times )$ and improves system throughput by up to $1 . 7 5 \times$ (average: $1 . 6 5 \times )$ compared to vLLM under diverse, realistic workload conditions. We also observe that HEXGEN-TEXT2SQL ’s scheduling strategies ensure efficient resource utilization and robust performance in multi-tenant scenarios with diverse query complexities. Overall, the study confirms that HEXGEN-TEXT2SQL ’s architecture and algorithms translate into substantial improvements in end-to-end Text-to-SQL serving performance.
Figure 1: Text-to-SQL workflow and HEXGEN-TEXT2SQL system architecture. The Text-to-SQL workflow provides the inter-stage dependency. Incoming LLM inference requests are dispatched by a global coordinator to model instances based on workload balance and task suitability. Each model instance manages its queue using an urgency-guided priority mechanism.
# 2 Preliminaries
State-of-the-art Text-to-SQL serving systems face unique challenges in serving end-to-end latency-sensitive queries, requiring coordinated execution across multiple stages with interdependent LLM inference requests. In this section, we first formalize the key concepts underlying our serving system design: Section 2.1 decomposes the Text-to-SQL workflow into its constituent stages, highlighting the sequential dependencies and multi-stage pattern that necessitate specialized scheduling; Section 2.2 analyzes limitations of the current scheduling and queuing policy in existing LLM serving systems, demonstrating why general-purpose schedulers fail to meet the end-to-end latency requirements of the agentic LLM-based Text-to-SQL workflow.
# 2.1 Agentic LLM-based Text-to-SQL Workflow
The agentic LLM-based Text-to-SQL workflow involves several key stages to transform a natural language query into an executable SQL statement. As shown in Figure 1, we summarize the key stages in the stateof-the-art agentic Text-to-SQL paradigm (mainly following Chess [20]) as below:
• Schema linking: Given metadata and detailed column descriptions for each table, the LLM identifies and aligns entities mentioned in the user’s natural language query to relevant tables and columns within the database schema. This step is essential to accurately ground the user’s query in the underlying database structure. • SQL candidates generation: Utilizing schema alignments from the previous step, the LLM generates candidate SQL queries from the natural language query. Multiple LLM inference requests are executed concurrently, employing different prompts and illustrative examples, to produce a diverse set of candidate queries. This parallel approach aims to capture multiple plausible interpretations of the user’s intent. • Self-correction: Candidate SQL queries are executed against the database, and any resulting execution errors prompt iterative refinement throughout subsequent LLM invocations. The system can iteratively refine queries up to a predefined limit (e.g., 10 iterations), systematically improving their accuracy and correctness.
• Evaluation: After self-correction ensures syntactic correctness, the LLM generates multiple unit tests based on natural language derived from the original query. The finalized SQL candidates are evaluated against these tests, selecting the query that successfully passes the largest number of cases. These tests verify both the semantic accuracy and functional correctness of the SQL candidates, ensuring alignment with the user’s original intent.
# 2.2 LLM Serving Queue Management
Current popular LLM serving frameworks, such as vLLM [21], Text Generation Inference (TGI) from Hugging Face [22], and TensorRT-LLM from NVIDIA [23], employ various scheduling and queueing strategies optimized for general-purpose LLM serving. For example, vLLM utilizes continuous batching [24] with a first-come-first-served (FCFS) policy, allowing new LLM inference requests to join ongoing batches during decoding. TGI groups incoming LLM inference requests based on prompt length and generation parameters to optimize batch formation. TensorRT-LLM implements an ”in-flight” batching strategy, managing concurrent LLM inference requests through the scheduling policies implemented in Triton [25], including FIFO and priority-based queuing. While effective for standard LLM tasks, these frameworks are not inherently designed to handle the complexities of multi-stage, dependency-aware agentic workflows like Text-to-SQL.
Advanced queuing methods, such as Queue Management for LLM serving (QLM) [26] and the Virtual Token Counter (VTC) [27], have been proposed to address specific performance and fairness requirements QLM introduces priority scheduling to meet Service Level Objectives (SLOs) by prioritizing urgent LLM inference requests and employing techniques like preemption and state swapping; VTC ensures equitable resource allocation among users by tracking the number of tokens served and prioritizing those with lower consumption. However, these mechanisms still can not inherently account for the nuances of the Text-toSQL pipeline, such as varying computational costs across different stages or the impact of query latency on downstream database performance.
Concretely, we summarize the limitations that the existing queueing strategies face when applied to agentic Text-to-SQL workflows:
• Lack of LLM inference dependency awareness: Current queuing systems do not manage dependencies between sequential stages (e.g., schema linking before SQL generation), leading to potential inefficiencies in processing multi-stage queries.
• Request scheduling ignores heterogeneity: existing queueing policies treat all LLM inference requests uniformly, failing to account for the diverse resource requirements of different Text-to-SQL queries and LLM requests capacity among model instances with varying computation power, which can result in suboptimal resource utilization and difficulty meeting SLOs.
• Inadequate SLO management in multi-tenant environments: General-purpose LLM queues often lack the fine-grained control needed to prioritize heterogeneous LLM inference requests within the Text-toSQL workflow, making it challenging to meet per-Text-to-SQL-query SLOs in multi-tenant settings.
To address these challenges, a novel queueing method tailored to the multi-stage and dependency-aware nature of Text-to-SQL is necessary. Such a system should optimize end-to-end performance and ensure stringent SLO adherence in multi-tenant environments.
# 3 HEXGEN-TEXT2SQL
In this section, we first introduce the design principle of Text-to-SQL LLM agentic workflow, then demonstrate the framework design.
# 3.1 Design Principle of Text-to-SQL Serving
Given the analysis of the existing limitations of the current LLM serving system in Section 2.2, we discuss the design principles of an agentic LLM-based Text-to-SQL system, each addressing critical challenges inherent in agentic Text-to-SQL workflows.
Principle 1. Explicit multi-stage dependency management. The agentic Text-to-SQL workflow decomposes each end-to-end user query into sequential and parallel stages, including schema linking, candidate generation, self-correction, and evaluation, each of which triggers single or multiple LLM inference requests. An effective orchestration of these requests in different stages is essential to ensure correctness and optimize performance. An efficient system should explicitly model inter-stage dependencies, enforcing the completion of prerequisite stages before initiating dependent ones. For parallelizable tasks, such a system should be able to dispatch subtasks concurrently and efficiently manage their completion, thereby minimizing idle time and enhancing throughput. This structured approach should be able to mitigate errors from out-of-order execution and address the complexity of orchestrating multi-stage workflows.
Principle 2. Heterogeneity-aware LLM inference request allocation. In production environments, hardware heterogeneity is commonplace — deployments often encompass a mix of GPUs with varying computational capabilities, memory capacities, and performance characteristics. This diversity could arise from incremental hardware upgrades or cost considerations. Consequently, efficient resource utilization requires a scheduling strategy that is aware of these differences between multiple LLM serving model instances. Noted that we assume that the allocation of multiple model instances could be effectively determined by existing systems [28, 29, 30], where the allocation can optimize the end-to-end SLO or throughput over a set of heterogeneous GPUs while each model instance could exhibit different serving capacities. An efficient system could address this by decoupling global task assignment from local execution management. For example, a global coordinator can evaluate the computational requirements of each task, such as memory footprint, expected execution time, and parallelism potential, and assign them to the most suitable hardware resources. This global perspective ensures optimal load balance and prevents resource contention. At the local level, each model instance can manage task prioritization and execution, adapting to real-time workload fluctuations. This scheduling ensures that both high-capacity and lower-capacity hardware are utilized effectively, maximizing overall system throughput and performance.
Principle 3. SLO guarantees. In production-level Text-to-SQL serving systems, adhering to each end-toend SLOs is paramount to ensure consistent and predictable performance, especially in multi-tenant environments where diverse workloads coexist. As we state in Section 2.1, each stage in the agentic Text-to-SQL workflow contributes to the overall latency experienced by the user. Therefore, it’s crucial to manage and schedule these stages with an awareness of their individual and collective impact on the SLOs. An ideal LLM inference request scheduling should be able to prioritize based on the remaining time budgets and estimated execution durations for each end-to-end Text-to-SQL query. Moreover, in multi-tenant scenarios, different users or applications may have varying SLO requirements. An ideal serving system should account for this by maintaining per-SLO tracking and adjusting scheduling policies to meet these differentiated objectives. This fine-grained control should be able to prevent scenarios where the performance of one tenant adversely affects others, thereby maintaining fairness and predictability across the system.
Collectively, we believe these principles enable us to effectively manage complex workflows, optimize resource utilization, and meet performance guarantees, thereby addressing key limitations in existing Text
to-SQL serving systems.
# 3.2 Framework Design
Following the design principles introduced earlier, HEXGEN-TEXT2SQL is a distributed system designed to efficiently serve multi-stage, LLM-based Text-to-SQL inference workloads in heterogeneous GPU clusters. Figure 1 provides an overview of the proposed system. HEXGEN-TEXT2SQL ’s serving architecture is built around a centralized global coordinator and multiple GPU-backed LLM model instances, reflecting the system’s three guiding principles. As a distributed Text-to-SQL serving system, HEXGEN-TEXT2SQL is explicitly designed to handle multi-stage inference workflows under heterogeneous GPU deployments while meeting strict per-query SLO deadlines. To achieve these goals, HEXGEN-TEXT2SQL intelligently dispatches and prioritizes LLM inference tasks in a two-level scheduling design. First, a global coordinator assigns each incoming LLM inference request to an appropriate model serving instance, accounting for the computation estimation of the LLM request and the availability (i.e., queuing status) of each model instance’s variability. Second, each model instance manages its own priority queue of tasks, dynamically ordering pending inference steps by urgency.
We next describe how each principle is embodied in HEXGEN-TEXT2SQL ’s design and why it is crucial for correct and efficient Text-to-SQL execution in depth.
Multi-stage dependency management. End-to-end Text-to-SQL queries execute as agentic workflows consisting of multiple dependent stages (e.g., schema linking, SQL generation, error correction, validation) that must occur in following their inherited dependence — Later stages cannot begin until earlier ones complete, and any delay in an early stage eats into the time budget of subsequent stages. To handle these strict inter-stage dependencies, HEXGEN-TEXT2SQL treats each incoming end-to-end Text-to-SQL query as a workflow of dependent LLM inference requests rather than as an isolated one. The global coordinator maintains an explicit representation of each query’s pipeline status, and only dispatches an LLM inference request for execution when its predecessors have finished, ensuring the correct order of execution. For instance, once the schema linking step of a query completes, the coordinator immediately dispatches the next stage (i.e., generating SQL candidates); if a stage involves parallelizable subtasks, it dispatches all of them concurrently across available model serving instances to accelerate that stage’s completion. This explicit dependency tracking guarantees correctness (each LLM step sees the proper inputs from prior steps) and prevents resource waste on tasks that would ultimately be invalidated by unmet prerequisites. The HEXGEN-TEXT2SQL monitors each end-to-end Text-to-SQL query’s progress and updates scheduling parameters whenever a stage completes. In particular, the remaining end-to-end deadline for a query is propagated to its pending stages — shrinking their allowed execution windows in the local priority queue in the assigned model instances and thereby increasing their priority in the system. By dynamically adapting to the workflow’s state in this way, HEXGEN-TEXT2SQL minimizes idle gaps between stages and ensures that downstream tasks do not miss deadlines due to upstream delays. This principled management of multi-stage dependencies ultimately improves both performance and reliability: queries complete faster by avoiding needless waiting, and the risk of cascading deadline violations is sharply reduced.
LLM inference request allocation. Production GPU clusters for LLM serving are often heterogeneous where a standard scheduling algorithm [29, 30, 28] should be able to organize multiple model instances, which leads to optimal global SLOs or throughput while each model instance may exhibit different LLM serving capacities. On the other hand, the LLM inference requests in agentic Text-to-SQL workflows are also naturally heterogeneous in resource demand: depending on the input and output length, LLM inference requests launched from different stages can have widely varying execution times. HEXGEN-TEXT2SQL addresses this variability through a heterogeneity-aware scheduling that judiciously allocates each LLM inference request to the best-suited model instance. Instead of naive round-robin or FIFO assignment (which would ignore critical differences in hardware speed or current load and lead to SLO violation), the centralized coordinator in HEXGEN-TEXT2SQL employs a workload-balanced dispatching policy. For each incoming LLM inference request (which may correspond to a stage of some end-to-end Text-to-SQL query), the coordinator evaluates all available model instances and selects a target instance based on two factors: (i) the expected execution time of this task on that model instance given its request processing capabilities, and (ii) the current workload at that instance (e.g., queue length or utilization). Concretely, HEXGENTEXT2SQL ’s scheduler maintains an empirical performance model for each GPU type and Text-to-SQL inference step. HEXGEN-TEXT2SQL estimates how quickly a given LLM inference request (e.g., a prompt of a certain length and prediction of the output length) would run on each candidate model instance, and it is aware of each device’s backlog of work. Using this information, the coordinator computes a composite suitability score for each model instance, balancing the desire to send the task to the fastest possible model instance versus the need to avoid overloading any single model instance. The task is dispatched to the model instance with the highest score, i.e., the one offering the best trade-off between low expected latency and light current load. By dynamically routing LLM inference requests in this heterogeneity-aware manner, HEXGEN-TEXT2SQL achieves far better resource utilization and tail latency control than static or load-agnostic schemes. Heavier or latency-sensitive queries tend to run on more powerful model instances, while lighter tasks can fill in capacity on slower or busier devices, resulting in a balanced cluster workload. This global coordination not only improves overall throughput but also contributes to SLO compliance — it prevents situations where a slow model instance becomes a bottleneck or a fast model instance sits idle, thereby reducing the likelihood of queries missing their deadlines due to suboptimal placement. In summary, heterogeneity-aware dispatching allows HEXGEN-TEXT2SQL to capitalize on available hardware diversity for performance, while ensuring no end-to-end query’s latency SLO is jeopardized by an inappropriate assignment.
Adaptive multi-tenant priority scheduling. To meet strict SLOs in a multi-tenant environment, HEXGENTEXT2SQL couples its intelligent dispatching with per-query urgency-aware scheduling. Each LLM serving model instance runs an adaptive priority queue that continually re-prioritizes pending LLM inference requests according to their urgency. Rather than processing LLM requests strictly in arrival order, each model instance always executes the task that is most urgent — defined in terms of the end-to-end Text-to-SQL query’s deadline and remaining execution time. This design directly enforces per-end-to-end Text-to-SQL query SLO guarantees: it ensures that when the system is under load, those LLM inference requests that are closest to violating their end-to-end time budgets are serviced first, minimizing deadline misses. Concretely, when a new Text-to-SQL query enters the system, HEXGEN-TEXT2SQL assigns a target deadline to it based on its SLO. This total deadline is then apportioned across the query’s multiple stages to derive an individual time budget for each LLM inference request. The budget allocation considers the average expected duration of each step so that, for example, a computationally expensive stage is given a larger share of the total time. The prioritized queue at the model instance uses this to calculate an urgency for the task, which grows as the task’s waiting time increases or its deadline draws near. End-to-end Text-toSQL queries with very little slack time and non-trivial execution length will thus have the highest urgency values. The priority queues continuously update each task’s urgency in real time: while a LLM inference request waits in queue, its slack diminishes (increasing SLO pressure), and whenever a preceding step of the same query finishes, the remaining sub-deadlines for later steps are recomputed to account for lost time. Given this adaptive strategy, the most time-critical LLM inference request is always at the front of the queue when the model instances can take new LLM inference request. This urgency-driven scheduling mechanism is vital for meeting latency targets under heavy loads and unpredictable conditions in a multi-tenant scenario. As a result, HEXGEN-TEXT2SQL can guarantee a high SLO attainment rate — the vast majority of queries finish before their deadlines — while still keeping the model instance busy with as many requests as possible. The combination of global heterogeneity-aware dispatch and local urgency-aware execution allows HEXGEN-TEXT2SQL to deliver reliable performance, i.e., meeting per-end-to-end Text-to-SQL query deadlines.
By systematically embedding these principles into its design, HEXGEN-TEXT2SQL provides reliable, efficient, and scalable Text-to-SQL inference serving, directly addressing the unique demands of productiongrade LLM-based workflows.
# 4 Scheduling Algorithm
In this section, we provide an in-depth introduction to the formulation and implementation of the global coordinator and local priority queue. We first formulate the scheduling problem as below:
Problem formulation. Formally, consider a sequence of Text-to-SQL queries $\{ Q _ { 1 } , Q _ { 2 } , . . . \}$ that arrives following some distribution $\mathbb { P } _ { Q }$ , i.e., $Q _ { i } \sim \mathbb { P } _ { Q }$ ; each query noted by $Q _ { i }$ includes a set of LLM inference requests represented by $\{ q _ { i , 1 } , q _ { i , 2 } , . . . , q _ { i , j } , . . . q _ { i , n _ { i } } \}$ along with an end-to-end SLO denoted by $T _ { i } ^ { \mathrm { S L O } }$ . Given a set of $N$ model instances, $\mathbf { M } = \{ m _ { 1 } , m _ { 2 } , . . . m _ { N } \}$ , where model instance $m$ can process an LLM request $q _ { i , j }$ with different processing time $t _ { i , j } ^ { m }$ (including both queuing and computation time at the assigned model instance $m$ ), the goal of our scheduling problem is to find some LLM request allocation $\phi$ , where LLM inference request $q _ { i , j }$ is executed by model instances $m _ { i , j } \in \mathbf { M }$ , where $\phi \left( q _ { i , j } \right) = m _ { i , j }$ , which maximize the probability that each end-to-end Text-to-SQL query $Q _ { i }$ arrives from the distribution $\mathbb { P } _ { Q }$ can be processed before its SLO:
$$
\arg \operatorname* { m a x } _ { \boldsymbol { \phi } } \mathbb { P } \left( \sum _ { t _ { i , j } } t _ { i , j } ^ { \boldsymbol { \phi } ( \boldsymbol { q } _ { i , j } ) } \leq T _ { i } ^ { \mathrm { S L O } } \mid Q _ { i } \sim \mathbb { P } _ { Q } \right)
$$
Unfortunately, determining an optimal queuing policy for such scheduling problems with unknown arrival and service time distributions is usually computationally intractable — The inherent uncertainty in job characteristics necessitates dynamic decision-making without complete information, rendering the scheduling problem NP-hard [31]. Thus, to achieve robust and efficient Text-to-SQL serving in heterogeneous GPU clusters and multi-tenant environments, we propose a heuristic-based solution, where we introduce a hierarchical, two-tiered scheduling algorithm that integrates global task dispatching with local deadline-driven prioritization. Concretely, our design includes the following components:
• At the global coordination level (illustrated in Section 4.1), we introduce a workload-balanced dispatcher that dynamically assigns each incoming LLM inference request to the most suitable model instance. This dispatcher jointly considers: (i) LLM request processing capability and (ii) the current workload assignment and queuing status of the model instances, effectively balancing computational loads and maximizing GPU resource utilization.
• At the local priority queue level at each LLM serving model instances (enumerated in Section 4.2), we leverage an advanced adaptive priority queueing method that continuously reorders tasks according to a deadline-aware urgency metric. This prioritization enables high-urgency tasks—those approaching their end-to-end deadlines—to preempt less critical tasks, significantly reducing the risk of SLO violations.
• Additionally, we incorporate a simulator-driven tuning mechanism (discussed in Section 4.3) that periodically adjusts the global dispatching coordination’s hyper-parameters, balancing hardware-task alignment against workload distribution under varying runtime conditions.
Together, these design elements ensure (i) optimal exploitation of heterogeneous model instance serving capabilities, and (ii) reliable compliance with strict per-query latency objectives in dynamic, multi-tenant deployments.
# 4.1 Workload-Balanced Dispatching Policy
To efficiently serve Text-to-SQL workloads over heterogeneous GPU clusters, HEXGEN-TEXT2SQL adopts a workload-balanced dispatching policy that assigns each LLM inference request to the most appropriate LLM serving model instance. This policy jointly considers (i) the execution efficiency of each instance for the incoming LLM inference request and (ii) the current queued inference workloads on each model instance. Additionally, a tunable hyperparameter $\alpha \in [ 0 , 1 ]$ is introduced to dynamically balance the tradeoff between these two factors. We discuss how to tune $\alpha$ in Section 4.3.
Formulate the inference computation cost. For each incoming LLM inference request $q _ { i , j }$ , we first estimate its output length via a function $\hat { L } _ { \mathrm { o u t } } ( q _ { i , j } )$ derived from its input length — our implementation is based on the prediction method introduced by Zheng et al. [32]. Based on the estimation, we get the predicted computational execution cost of $q _ { i , j }$ on model instance $m$ , denoted $\displaystyle t _ { \mathrm { c o m p } _ { i , j } ^ { m } }$ , by the following equation:
$$
{ t _ { \mathrm { c o m p } } } _ { i , j } ^ { m } = { t _ { \mathrm { p r e f i l } } } ^ { m } \left( L ( q _ { i , j } ) \right) + { t _ { \mathrm { d e c o d e } } } ^ { m } \left( \hat { L } _ { \mathrm { o u t } } ( q _ { i , j } ) \right)
$$
where ${ t _ { \mathrm { p r e f i l } } } ^ { m } \left( L ( q _ { i , j } ) \right)$ and ${ t _ { \mathrm { d e c o d e } } } ^ { m } \left( \hat { L } _ { \mathrm { o u t } } ( q _ { i , j } ) \right)$ denote the estimated execution time of the prefill and decoding phase on model instance $m$ based on number of input tokens and the estimated output tokens respectively.
Formulate the (maximal) queueing cost. The expected queuing time cost of $q _ { i , j }$ on instance $m$ , denoted by $t _ { \mathrm { q u e u e } _ { i , j } ^ { m } }$ , is estimated by as the sum of the execution costs of all tasks currently in model instance $m$ ’s queue Θm:
$$
t _ { \mathrm { q u e u e } _ { i , j } ^ { m } } = \sum _ { q _ { i ^ { \prime } , j ^ { \prime } } \in \Theta ^ { m } } t _ { \mathrm { c o m p } _ { i ^ { \prime } , j ^ { \prime } } ^ { m } }
$$
Note that $t _ { \mathrm { q u e u e } _ { i , j } ^ { m } }$ captures the potentially longest time task $q _ { i , j }$ could wait before execution begins if it is dispatched to model instance $m$ .
Select the serving model instance. Given the estimation of inference computation time and queuing time, an ideal instance has a low estimation for both of them. However, a linear combination of these two factors is problematic — the execution time is relatively predictable given the LLM inference query, while the queuing time can be aggressively adjusted based on the urgency we implement within each local priority queue at each model instance. Thus, we define the following non-linear combination as the heuristic score:
$$
\operatorname { S c o r e } \left( q _ { i , j } , m \right) = \left( 1 - \alpha \right) \cdot \frac { \beta } { t _ { \mathrm { q u e u e } } _ { i , j } ^ { m } } - \alpha \cdot t _ { \mathrm { c o m p } _ { i , j } ^ { m } }
$$
For the LLM inference request $q _ { i , j }$ and model instance $m - q _ { i , j }$ is going to be dispatched to the instance with the highest score. Notice that there are two hyperparameters in this heuristic score ( $\overset { \cdot } { \alpha }$ and $\beta$ ); in our deployment, we fix the value of $\beta$ by some trials while dynamically online tuning $\alpha$ during the serving. The weighting factor $\alpha$ determines the degree to which dispatching favors fast execution versus load balancing. When $\alpha = 1$ , only execution speed is considered; when $\alpha = 0$ , only queue depth matters. We empirically determine an optimal $\alpha$ via simulation, evaluating system-level SLO attainment across varying loads and configurations (see Section 4.3).
# 4.2 Local Priority Queue
After LLM inference requests are assigned to model instances by the workload-balanced dispatching policy, each instance manages its requests using a local priority queueing policy. This priority queue dynamically re-ranks LLM inference requests based on the urgency of their corresponding end-to-end Text-to-SQL query
and real-time queueing conditions, enabling timely progression of multi-stage workflows. Formally, to determine the priority of each LLM inference request in the local queue, we allocate a per-request SLO budget $t _ { i , j } ^ { \mathrm { S L O } }$ for $q _ { i , j }$ based on both execution cost and the remaining end-to-end deadline:
$$
t _ { i , j } ^ { \mathrm { S L O } } = \left( T _ { i } ^ { \mathrm { S L O } } - \tau _ { \mathrm { e l a p s e d } } ^ { i } \right) \cdot \frac { \bar { t } _ { \mathrm { c o m p } _ { i , j } } } { \sum _ { k = j } ^ { n _ { i } } \bar { t } _ { \mathrm { c o m p } _ { i , k } } }
$$
where $\tau _ { \mathrm { e l a p s e d } } ^ { i }$ is the time elapsed since the arrival of $Q _ { i }$ at the global coordinator, and $\overline { { t } } _ { \mathrm { c o m p } _ { i , k } }$ is the execution cost of LLM inference $q _ { i , j }$ averaged over all model instances, i.e.,
$$
\bar { t } _ { \mathrm { c o m p } _ { i , j } } = \frac { 1 } { N } \sum _ { m \in \mathbf { M } } { t _ { \mathrm { c o m p } _ { i , j } ^ { m } } } .
$$
This proportional allocation ensures that more time is budgeted for costlier downstream LLM inference requests.
Formulate the urgency metric. Suppose a LLM inference request $q _ { i , j }$ is dispatched to model instance $m$ , we define the urgency $U _ { i , j }$ of $q _ { i , j }$ as the difference between its execution cost and the remaining SLO margin:
$$
U _ { i , j } = { t _ { \mathrm { c o m p } } } _ { i , j } ^ { m } - \left( t _ { i , j } ^ { \mathrm { S L O } } - \tau _ { i , j } \right)
$$
where $t _ { \mathrm { c o m p } _ { i , j } ^ { m } }$ is the estimated execution cost of the task on instance $m$ as we defined in Equation 2, and $\tau _ { i , j }$ denotes the actual queuing delay tracked by the local priority queue since $q _ { i , j }$ entered the local queue. Note that higher urgency indicates a greater risk of SLO violation.
Local priority queue strategy. We implement a dynamic priority adjustment method, where the urgency scores are updated continuously to reflect system dynamics:
• Queue aging: $\tau _ { i , j }$ grows over time as the LLM inference request waits in the queue.
• Workflow progression: When $q _ { i , j - 1 }$ completes, $\tau _ { \mathrm { e l a p s e d } } ^ { i }$ is updated, impacting future SLO allocations.
Thus, the model instance $m$ always selects the LLM inference request with the highest urgency from the local queue:
$$
q ^ { * } = \arg \operatorname* { m a x } _ { q _ { i , j } \in \Theta ^ { m } } U _ { i , j }
$$
where $\Theta ^ { m }$ is the current set of queued LLM inference requests at model instance $m$ . This adaptive strategy prioritizes requests at risk of missing their SLO and accounts for instance-specific execution characteristics.
# 4.3 $\alpha$ -Tuning Process
The hyperparameter $\alpha$ in the dispatching score function (Equation 4) governs the trade-off between execution cost and queuing cost. To adaptively tune $\alpha$ in response to real-time workload conditions, we implement a lightweight online parameter tuning process based on average query latency.
Initialization. At system startup, $\alpha$ is initialized to 0, prioritizing queue length minimization. During the first 100 seconds of operation, HEXGEN-TEXT2SQL uses this policy to serve incoming Text-to-SQL queries. In parallel, the system collects execution traces such as arrival times, queue delays, and stage durations for each LLM inference request. These are then used to simulate various $\alpha$ values on the fly, and the best-performing value $\alpha ^ { * }$ is selected based on average completion time of end-to-end Text-to-SQL queries.
Sliding window-based monitoring and update. After initialization, HEXGEN-TEXT2SQL assumes workload stationarity over short intervals and continues using the current $\alpha ^ { * }$ . The system monitors average latency in a 100-second sliding window. At the end of each sliding window, it computes the mean end-to-end latency $\bar { T } _ { \mathrm { n e w } }$ for all queries served in that period and compares it against the baseline latency ${ \bar { T } } _ { \mathrm { r e f } }$ from the previous window.
To determine whether latency has degraded significantly, we perform a one-sided two-sample $t$ -test:
$$
H _ { 0 } : \bar { T } _ { \mathrm { n e w } } = \bar { T } _ { \mathrm { r e f } } \quad \mathrm { v s . } \quad H _ { 1 } : \bar { T } _ { \mathrm { n e w } } > \bar { T } _ { \mathrm { r e f } }
$$
If the $p$ -value falls below 0.01, the null hypothesis is rejected, indicating statistically significant latency regression. This triggers a re-tuning procedure using the most recent 100-second trace.
Simulation-guided optimization. Re-tuning is carried out using a trace-driven simulator that replays historical Text-to-SQL queries and evaluates performance under different $\alpha$ values. The optimal $\alpha ^ { * }$ is selected by minimizing the average simulated latency:
$$
\alpha ^ { * } = \arg \operatorname* { m i n } _ { \alpha \in [ 0 , 1 ] } \frac { 1 } { N } \sum _ { i = 1 } ^ { N } T _ { i } ( \alpha )
$$
where $T _ { i } ( \alpha )$ denotes the simulated completion time of query $Q _ { i }$ under parameter $\alpha$ . The search follows a coarse-to-fine strategy: an initial sweep over $\alpha \in \{ 0 . 0 , 0 . 2 , \ldots , 1 . 0 \}$ is refined with a finer-grained search in 0.1 increments around the best candidate.
Discussion of tuning overhead. The simulator executes entirely on the CPU and only incurs negligible overhead cost compared to the actual serving. In practice, tuned $\alpha ^ { * }$ values remain stable across adjacent time windows unless there are abrupt changes in workload patterns. This enables robust and low-overhead adaptation to evolving serving conditions, ensuring sustained low-latency performance.
# 5 Evaluation
To evaluate the design of HEXGEN-TEXT2SQL, we ask the following questions to analyze the end-to-end performance of our framework, as well as each component’s contribution towards the overall efficiency improvement:
• What is the end-to-end performance comparison between our Text-to-SQL specialized HEXGEN-TEXT2SQL and a general inference serving system?
• How effective is each component of the scheduling algorithm?
• What are the benefits and cost of $\alpha$ -tuning process?
We state our experiment setup in Section 5.1, and address each question in Section 5.2, Section 5.3, and Section 5.4, respectively.
# 5.1 Experimental Setup
Runtime. Each end-to-end Text-to-SQL query was processed following a multi-agent framework named CHESS that, at the time of this study, represented the state-of-the-art in Text-to-SQL workflows [20]. We perform evaluations in the following setups:
• Hetero-1: This setup consists of two types of GPUs, A100 and A6000, each responsible for serving two model instances. • Hetero-2: This setup consists of three types of GPUs: A100, L40, and A6000. A100 GPUs are responsible for serving two instances, while L40 and A6000 GPUs each serve one instance.
? VLLM ? HEXGEN-FLOWreq_rate=1.0, Hetero-1 req_rate=0.5, Hetero-1 req_rate=1.0, Hetero-1 req_rate=0.5, Hetero-1 req_rate=1.0, Hetero-1100 100 100 100 10080 80 80 8060 60 60 60 6040- 40 40 40 4020 20 20 20 200 0 0 0 07.5 2.5 3.5 4.5 5.5 6.5 7.5 5 9 11 9 11 9 1 11req_rate=1.0, Hetero-2 req_rate=0.5, Hetero-2 req_rate=1.0, Hetero-2 req_rate=0.5, Hetero-2 req_rate=1.0, Hetero-2100 100 100 10080 80 80 8060 60 60 6040 40 40 402 20 20 200 1 0 0 02.5 3.5 4.5 5.5 6.5 3 5 9 11 5 7 9 11 3 5 7 9 11 3 7 9 11SLO Scale SLO Scale SLO Scale SLO Scale SLO Scale SLO ScaleTrace 1 Trace 1 Trace 2 Trace 2 Trace 3 Trace 3
Note that due to the large number of model parameters, we employ a tensor parallelism degree of eight for serving all model instances using vLLM [21], and all model instances adopts continuous batching during inference.
Model and dataset. All LLM inference requests in the CHESS [20] workflow are conducted using the Llama3.1-70B model, which is a representative and popular open-source transformer model. And we follow prior work to generate three workload traces based on the development set from BIRD-bench [33], which is a cross-domain dataset designed specifically for Text-to-SQL evaluation. Our testing traces are subsampled from queries related to financial and formula1 racing databases, incorporating both simple and challenging ones. In particular, Trace 1 subsamples queries purely from the financial database, Trace 2 subsamples queries from the formula1, and Trace 3 contains queries from both databases. Depending on the complexity of the query, the workflow may require between zero and ten rounds of revision to refine the SQL query. To emulate the stochastic arrival pattern of users’ Text-to-SQL queries, we send queries using a Poisson process with arrival rates of 0.5 query per second and 1.0 query per second. This modeling approach captures the inherent randomness in user interactions and aligns with methodologies employed in prior studies [18].
Baseline. To evaluate the end-to-end performance, we compare HEXGEN-TEXT2SQL with the baseline VLLM, a widely adopted inference serving system that uses First Come First Served (FCFS) to manage local queues. We dispatch the LLM inference requests based on the round-robin strategy for the baseline. This naive approach is commonly utilized in existing inference serving systems [34, 35]. For the ablation study, we compare HEXGEN-TEXT2SQL with two baseline systems: (i) $\mathsf { R R } { + } \mathsf { P Q }$ : an intermediate design derived from our framework, implementing round-robin dispatching strategy combined with local priority queue. (ii) $_ { \mathrm { W B + F C F S } }$ : Another intermediate design derived from our HEXGEN-TEXT2SQL architecture, implementing workload-balanced (WB) dispatching with FCFS processing on local queues. We include these two designs as baselines to isolate and assess the impact of our workload-balanced dispatching and local priority queue independently from other enhancements in HEXGEN-TEXT2SQL.
Evaluation metrics. Following the evaluation setup of existing LLM serving frameworks [36, 37], we evaluate system performance based on SLO attainment and system throughput. The SLO is empirically determined based on the single-query processing latency, and we scale it to various multiples (SLO Scale in Figure 2) to assess performance under different levels of operational stringency. We focus on identifying the minimum SLO Scale at which the system achieves $9 5 \%$ and full $( 1 0 0 \% )$ SLO attainment.
Figure 3: End-to-end system throughput comparison.
# 5.2 End-to-End Performance Gain
We evaluate the effectiveness of HEXGEN-TEXT2SQL by comparing its end-to-end performance with a widely adopted inference serving system, VLLM, which employs first-come-first-serve (FCFS) local queueing. The comparison focuses on two critical metrics: SLO attainment and sustained throughput, under diverse deployment traces and query arrival rates.
SLO attainment. Figure 2 presents the SLO attainment curves across multiple traces and heterogeneous configurations, under both 0.5 and 1.0 query per second workloads. Across all test conditions, HEXGENTEXT2SQL achieves up to $1 . 6 7 \times$ and on average $1 . 4 1 \times$ lower latency deadlines over $9 5 \%$ SLO attainment, and achieves up to $1 . 6 0 \times$ and on average $1 . 3 5 \times$ lower latency deadlines over $9 9 \%$ SLO attainment compared with vLLM. For example, in Trace 3 at 1.0 query per second, the minimum latency for HEXGENTEXT2SQL to achieve $9 5 \%$ and $9 9 \%$ SLO attainments are 550 and 600 seconds, whereas vLLM requires 790 and 800 seconds, which are $43 \%$ and $33 \%$ higher.
Throughput comparison. Figure 3 reports the sustained throughput achieved by both systems under a fixed 1.0 query per second arrival rate. HEXGEN-FLOW consistently delivers higher throughput across all traces and hardware configurations. The improvements range from $1 . 5 7 \times$ to $1 . 7 5 \times$ over VLLM, highlighting the effectiveness of our workload-balanced dispatching and urgency-aware local scheduling. For instance, in Trace 3 under Hetero-1, HEXGEN-TEXT2SQL achieves $1 . 7 5 \times$ higher throughput compared to vLLM.
Summary. Together, these results demonstrate that HEXGEN-TEXT2SQL achieves significantly lower latency deadlines over $9 5 \%$ and $9 9 \%$ SLO attainments and nearly doubles throughput relative to a baseline inference serving system. The gains are attributed to the unified design of workload-balanced task dispatching and adaptive urgency-guided queueing, which allows our framework to fully exploit the parallelism and heterogeneity inherent in LLM-powered Text-to-SQL serving.
# 5.3 Ablation Study: Effectiveness of Scheduling
This ablation study quantifies the individual contributions of our two key scheduling innovations: the workload-balanced (WB) dispatching policy and the local priority queue (PQ). Through controlled experiments across two heterogeneous GPU deployments (Hetero-1 and Hetero-2) and varying load conditions (0.5-1.0 query per second), we demonstrate the effectiveness of both components, as well as the impact they made to the order of scheduling.
Performance of the dispatching policy. To examine the contribution from our dispatching policy, We compare the minimum latency required by $\mathrm { W B + P Q }$ and $\mathsf { R R } { + } \mathsf { P Q }$ to achieve $9 5 \%$ and $9 9 \%$ SLO attainments. Figure 4 shows that replacing the round-robin routing strategy with our workload-balanced dispatching policy yields substantial improvements in serving efficiency, assuming both adopt local priority queue to manage their instances’ waiting lists. Under deployments Hetero-1 and Hetero-2, and for both 0.5 query per second and 1.0 query per second arrival rates, the $_ { \mathrm { W B + F C F S } }$ curves consistently shift toward lower SLO scale values compared with the $\mathrm { R R + F C F S }$ curve. Across all test conditions, $\mathrm { W B + P Q }$ achieves up to $1 . 3 8 \times$ and on average $1 . 1 8 \times$ lower latency deadlines over $9 5 \%$ SLO attainment, and achieves up to $1 . 4 \times$ and on average $1 . 2 4 \times$ lower latency deadlines over $9 9 \%$ SLO attainment compared with $\mathsf { R R } { + } \mathsf { P Q }$ . For instance, in Trace 3 Hetero-1 at 0.5 query per second, all queries finish within 500 seconds under $\mathrm { W B + P Q }$ , compared to 700 seconds under $\mathsf { R R } { + } \mathsf { P Q }$ . This corresponds to an approximate $40 \%$ reduction in the aspect of tail latency. These results confirm that our dispatching policy both reduces tail latency and improves GPU utilization in Text-to-SQL workloads.
Figure 4: Ablation study of HEXGEN-TEXT2SQL’s scheduling components, comparing end-to-end SLO attainment rates across: (1) round-robin dispatching $^ +$ local priority queue, (2) workload-balanced dispatching $+$ First Come First Serve, and (3) workload-balanced dispatching $^ +$ local priority queue.
Impact of the dispatching policy. For each arrived LLM inference request, our dispatching policy considers each model instance’s suitability for the job, as well as the workload it carries. We control the trade-off between suitability and workload by adjusting the values of $\alpha$ as mentioned in $\ S 4 . 3$ , and examining the corresponding change in the SLO attainment. We demonstrate the effect of our workload-balanced dispatching policy by benchmarking task distributions before and after applying the policy for Trace 3 under the Hetero-2 configuration. As illustrated in Section 1, before applying the policy, tasks were distributed uniformly across all instances. After applying the policy, the task allocation among different instances changes accordingly. Concretely, A100 GPUs (Instances 1 and 2) handled most of the LLM inference requests from ”SQL Candidates” $( 2 7 . 9 \% )$ , ”Self-Correction” $( 4 7 . 9 \% )$ , and ”Evaluation” $( 2 3 . 3 \% )$ stages. The A6000 GPUs (Instance 3) primarily processed tasks from ”Self-Correction” stage $( 7 1 . 5 \% )$ , while the L40 GPUs (Instance 4) focused on ”Schema Linking” stage $( 3 2 . 3 \% )$ and ”Evaluation” stage $( 5 6 . 3 \% )$ . Consequently, this optimization resulted in a more specialized allocation of tasks among different instances, enabling the dispatching policy to route requests to appropriate model instance while balancing the load across instances, thereby improving overall system performance.
Performance of the local priority queue. To study the ablation effect from local priority queue, we let both scheduling to adopt workload-balanced dispatching, and compare the performance between $_ { \mathrm { W B + F C F S } }$ and $\mathrm { W B + P Q }$ . In figure 4, keeping Workload-Balanced scheduling unchanged, $\mathrm { W B + P Q }$ curves demonstrate advantages over ${ \tt W B + F C F S }$ curves in all three traces, for both query arrival rates. Across all test conditions, $\mathrm { W B + P Q }$ achieves up to $1 . 5 \times$ and on average $1 . 2 \times$ lower latency deadlines over $9 5 \%$ SLO attainment, and achieves up to $1 . 4 \times$ and on average $1 . 2 \times$ lower latency deadlines over $9 9 \%$ SLO attainment compared with $_ { \mathrm { W B + F C F S } }$ . For example, under both deployment settings, when queries arrive in 0.5 query per second, the minimum latency for $\mathrm { W B + P Q }$ strategy in Trace 3 to achieve $9 5 \%$ and $9 9 \%$ SLO attainments are about 400 seconds and 500 seconds, whereas $_ { \mathrm { W B + F C F S } }$ strategy requires 600 seconds and 700 seconds, which are
Table 1: Impact of dispatching policy on task distributions. Stages 1-4 represent the aforementioned Text2SQL stages; I1–I4 represent different GPU instances, where I1 and I2 are A100 instances, I3 is an A6000 instance, and I4 is an L40 instance.
$1 . 5 \times$ and $1 . 4 \times$ higher. Moreover, strategies adopt local priority queue finish $9 5 \%$ of queries around $20 \%$ faster than the FCFS strategies under both deployments settings when query rate is 0.5 query per second. While the extent of improvement from priority queue may vary depending on different heterogenous setting and traces, the superiority of $\mathrm { W B + P Q }$ suggests that dynamically prioritizing inference tasks by their remaining urgency markedly reduces head-of-line blocking and deadline misses, ensuring stable, high-quality service under varying load conditions.
Table 2: Snapshot of a local queue’s state before processing next LLM call. Arrive-at represents the timestamp that the LLM call arrives.
Impact of the local priority queue. To highlight the impact of the priority queue, we present the state of a local priority queue on one of the instances during evaluation. The snapshot, shown in Table 2, is captured immediately before the next LLM inference request to be processed by the instance. From the table, the LLM inference request Request#6 arrive at 64.4, and it has the highest urgency of 26.9, so that it will be processed first by our local priority queue. In contrast, a FCFS strategy will process Request#1 first because it arrives the earlist at 22.4, although it’s urgency score is 14.5, which is smaller than Request#6. According to the profiled execution cost, Request#1 needs 26.7 seconds to finish and Request#6 needs 3.2 seconds to finish. Meanwhile, the SLO allocated for Request#1 is 30.4 seconds, which is larger than its profiled execution cost. On the other hand, Request#6 is allocated 3.3 seconds for SLO, which is more urgent compared to Request#1. This observation is consistent with our urgency score. In conclusion, the local priority queue prioritize the most urgent request first, and maximize the proportion of requests that meet their SLOs to the greatest extent possible.
# 5.4 Empirical Analysis of $\alpha$ -Tuning
Our evaluation demonstrates how dynamic $\alpha$ -tuning adapts to both hardware heterogeneity and workload characteristics to optimize system performance. Through controlled experiments across three distinct workload traces and two heterogeneous GPU deployments, we analyze: (1) how does a well tuned $\alpha$ value improve the $9 5 \%$ SLO attainment rate, and (2) the practical feasibility of simulation-based tuning during live serving. All experiments maintain a constant query arrival rate of 0.5 query per second while varying $\alpha$ from 0 (pure workload balancing) to 0.5 (balanced weighting).
Figure 5: Performance of HEXGEN-TEXT2SQL under various $\alpha$ settings.
Effect of $\alpha$ -tuning. Figure 5 illustrates the changes in performance due to different $\alpha$ while keeping other hyperparameters the same. All experiments conducted on three traces assume the Text-to-SQL query arrives at 0.5 queries per second. The two experiments on Trace 1 show that taking $\alpha = 0 . 1$ and 0.3 gives the best results under Hetero-1 and Hetero-2 settings, respectively. Similarly, experimental results on Trace 2 indicate a better performance of $\alpha = 0 . 2$ under Hetero-1 deployment and $\alpha = 0 . 3$ in the case of Hetero-2. For Trace 3, $\alpha = 0 . 2$ wins over others when there are only 2 types of GPUs. And under the Hetero-2 setting, it’s noticeable that when $\alpha = 0 . 4$ , $9 5 \%$ of queries are finished $14 \%$ faster than the scheduling considering solely on workload balancing factor. From these three sets of results, we conclude that although the influence of task suitability doesn’t outweigh the workload-balancing factor, it still can boost the performance further. Furthermore, the optimal value of $\alpha$ depends not only on the heterogeneous setting but also on the workload of incoming Text-to-SQL queries. Therefore, if the associated overhead is manageable, determining the optimal value of $\alpha$ through simulation is advantageous for enhancing system performance.
Table 3: Overhead of alpha-tuning for different experimental setups and traces.
Overhead of $\alpha$ -tuning. We evaluate the time cost of $\alpha$ -tuning simulations to assess their feasibility during real-time Text-to-SQL inference serving. Each simulation systematically tests different values of $\alpha$ following the $\alpha$ -tuning process mentioned in $\ S 4 . 3$ . As shown in Table 3, across various heterogeneous setups and workload traces, the $\alpha$ -tuning simulation time ranges from 115 to 158 seconds. This overhead is manageable in practice, as it is significantly shorter than the hourly timescale over which real-world workload variations typically occur [37].
# 6 Related Work
In this Section, we provide a brief summary about the recent advances in LLM-based Text-to-SQL approaches in Section 6.1 and then discuss some serving system design and implementation for LLM inference requests in Section 6.2.
# 6.1 LLM Advances for Text-to-SQL
LLMs have emerged as a revolutionary paradigm for the Text-to-SQL task [38, 1], where the core technique lies in effective SQL generation and schema linking.
SQL generation. Some pioneer studies focus on better prompting strategies and LLM adaptations to boost Text-to-SQL performance. Gu et al. [39] propose a structure- and content-based prompt learning method to enhance few-shot Text-to-SQL translation, while Li et al. [40] build an open-source LLM specialized for SQL generation. Other approaches fine-tune or constrain LLM outputs to improve accuracy: Ren et al. [41] propose Purple, which refines a general LLM to make it a more effective SQL writer, and Sun et al. [42] adapt a pretrained model (PaLM) specifically for Text-to-SQL tasks, achieving higher execution accuracy. In addition, Xie et al. [43] propose OpenSearch-SQL, which dynamically retrieves few-shot exemplars from a query archive and enforces consistency checks to align the LLM’s output with the database schema.
Beyond simple LLM adaptations, recent agentic approaches leverage multiple LLM inference requests to collaboratively accomplish the Text-to-SQL tasks. Fan et al. [44] combine a small language model with a large one to handle different sub-tasks of Text-to-SQL, thereby improving zero-shot robustness. Similarly, Pourreza et al. [45] explore task decomposition across models: DTS-SQL breaks the problem into stages handled by smaller LLMs sequentially, and their DIN-SQL approach has an LLM refine its own output through iterative self-correction [46] in the prompt. Another line of research enhances the reasoning process of LLMs to produce correct SQL. One strategy is to incorporate intermediate steps or a reasoning framework during inference. Zhang et al. [47] apply the ReAct paradigm [48] to table question answering, which encourages the LLM to generate and reason with intermediate actions (e.g., decomposition or calculations) before finalizing the SQL query. Mao et al. [49] propose rewriting the user question for clarity and using execution feedback to iteratively refine the generated SQL (execution-guided refinement). To improve the chances of getting a correct query, Pourreza et al. [50] introduce a multi-path reasoning approach (ChaseSQL) that produces diverse SQL candidates and ranks them using a preference model, while Li et al. [51] propose Alpha-SQL that employs a Monte Carlo tree search to explore different query constructions in a zero-shot setting. Techniques have also been explored to optimize the context given to the LLM: Talaei et al. [20] present CHESS, which harnesses contextual information efficiently to guide the LLM’s SQL synthesis without increasing model size or complexity.
Schema linking. Integrating database schema knowledge and domain-specific information into LLM-driven Text-to-SQL is another important procedure. Eyal et al. [52] decompose the natural language question and the corresponding SQL query into semantic sub-units, improving the model’s understanding of how question clauses align with schema elements. Dou et al. [53] incorporate external knowledge (e.g., business rules or formulas) into the parsing process to handle queries that require facts beyond the database content. Several works specifically target schema linking challenges in the age of LLMs. Liu et al. [54] propose SolidSQL, which uses enhanced in-context examples to make an LLM more robust at matching question terms to the schema during generation. To supply relevant schema context on the fly for open-domain Text-toSQL, Zhang et al. [55] develop a retrieval approach that finds the pertinent tables across a large corpus and provides them to the LLM before SQL generation. Yang et al. [56] take a different approach to aid schema linking: they generate a preliminary SQL (SQL-to-schema) to identify which schema items are likely to be involved, and then use that information to guide the final query production.
# 6.2 LLM Inference Request Scheduling
Efficient scheduling of LLM inference requests is crucial in modern AI infrastructure, essential to meeting latency and throughput requirements, particularly under varying system constraints.
In environments with consistent hardware and model setups, scheduling techniques focus on optimizing latency and throughput. Patke et al. [17] introduce QLM, a system that estimates request waiting times to prioritize tasks, ensuring that SLOs are met under load conditions. Gong et al. [57] propose the future scheduler, which uses historical workload patterns and predictive modeling to make informed scheduling decisions, fulfilling SLA guarantees. Fu et al. [58] frame LLM scheduling as a learning-to-rank problem, training models to order queued requests to optimize end-to-end latency and throughput, outperforming traditional heuristics. Agrawal et al. [16] present Sarathi-Serve, a system that adjusts batching and resource allocation to balance throughput and latency, particularly effective for requests of high priority versus low priority. In setups with varying hardware capabilities and model types, recently proposed scheduling strategies adapt to resource heterogeneity: Wan et al. [59] develop BROS, a system that differentiates between real-time and best-effort LLM queries, ensuring interactive queries are prioritized without compromising background batch processing. Jain et al. [19] introduce a performance-aware load balancer that monitors query characteristics and system load to dynamically distribute requests across model replicas and GPU nodes. Gao et al. [60] present Apt-Serve, which employs a hybrid cache combining GPU memory and lower-tier storage to scale LLM serving while maintaining frequently-used model states in faster memory. Ao et al. [61] propose a fluid-model-guided scheduler that allocates inference tasks to approximate an ideal fluid fair share of GPU memory over time, enhancing throughput and reducing tail latency under memory pressure. Sun et al. [62] introduce Llumnix, a dynamic scheduling system that adjusts resource allocation for LLM serving in real time as query load patterns change, demonstrating benefits in single-model scenarios. While significant advancements have been made in no-stage LLM serving, multi-stage pipelines remain less explored. Very recently, Fang et al. [63] investigate the efficiency of multi-LLM application pipelines in an offline setting, using sampling and simulation to optimize inference plans for workflows involving multiple LLMs or sequential model calls.
Despite these efforts, there is a notable gap in scheduling strategies that coordinate end-to-end pipelines with multiple dependent LLM request serving stages. Our approach builds upon existing insights, such as urgency-aware prioritization and hardware-sensitive allocation, but effectively extends them to manage complex, multi-stage Text-to-SQL workflows under strict latency requirements and system heterogeneity with a significant performance boost. | Recent advances in leveraging the agentic paradigm of large language models
(LLMs) utilization have significantly enhanced Text-to-SQL capabilities,
enabling users without specialized database expertise to query data
intuitively. However, deploying these agentic LLM-based Text-to-SQL systems in
production poses substantial challenges due to their inherently multi-stage
workflows, stringent latency constraints, and potentially heterogeneous GPU
infrastructure in enterprise environments. Current LLM serving frameworks lack
effective mechanisms for handling interdependent inference tasks, dynamic
latency variability, and resource heterogeneity, leading to suboptimal
performance and frequent service-level objective (SLO) violations. In this
paper, we introduce HEXGEN-TEXT2SQL, a novel framework designed explicitly to
schedule and execute agentic multi-stage LLM-based Text-to-SQL workflows on
heterogeneous GPU clusters that handle multi-tenant end-to-end queries.
HEXGEN-TEXT2SQL introduce a hierarchical scheduling approach combining global
workload-balanced task dispatching and local adaptive urgency-guided
prioritization, guided by a systematic analysis of agentic Text-to-SQL
workflows. Additionally, we propose a lightweight simulation-based method for
tuning critical scheduling hyperparameters, further enhancing robustness and
adaptability. Our extensive evaluation on realistic Text-to-SQL benchmarks
demonstrates that HEXGEN-TEXT2SQL significantly outperforms state-of-the-art
LLM serving frameworks. Specifically, HEXGEN-TEXT2SQL reduces latency deadlines
by up to 1.67$\times$ (average: 1.41$\times$) and improves system throughput by
up to 1.75$\times$ (average: 1.65$\times$) compared to vLLM under diverse,
realistic workload conditions. Our code is available at
https://github.com/Relaxed-System-Lab/Hexgen-Flow. | [
"cs.DB"
] |
# 1 INTRODUCTION
Nteractive image segmentation requires users to indicate the target by providing simple indications such as boxes [1], [2], scribbles [3], [4], [5], [6], and clicks [7], [8], [9], [10]. Compared with traditional annotation tools like the lasso or brush, interactive models could largely reduce the cost of creating masks, which is important for efficient annotation in the era of big data.
Initial works for interactive segmentation [7], [8], [9] mainly focus on designing delicate model structures or developing better strategies [11], [12] for simulating user interactions. However, training on limited data constrains the power of those solutions as the segmentation target could be “anything”. Recently, Segment Anything Model (SAM) [2] tackles interactive segmentation with a data-centric solution. It iteratively collects masks with humans in the loop, and finally collects billions of high-quality samples and trains a powerful segmentation model. The generalized strong priors make SAM a foundation solution for interactive segmentation and downstream applications.
However, SAM is not a perfect solution for interactive segmentation as it makes some compromises for “automatic segmentation”. First, SAM treats each part of the full image equally without any design of object-centric modeling or detail refinements, which are proven crucial for the fine details in previous interactive segmentation works [9], [12], [13], [14]. The compromise is that SAM considers the computation burden for each independent interaction as it requires densely sampled points for automatic segmentation. Thus, SAM develops a huge encoder to extract the common features for each part of the image and leaves a small prompt encoder to model each interaction. Besides, SAM only supports click and box, and it is hard for the current prompt encoder to encode more flexible user interactions like scribbles and coarse masks1. Different from SAM, we do not consider automatic segmentation but focus on building a state-of-the-art interactive segmentation system.
Fig. 1: Demonstrations for FocalClick-XL . Our method is compatible with various formats of user interactions like clicks, scribbles, boxes, coarse masks, etc., and could predict highlyrefined details for both transparent and solid objects.
Among previous interactive segmentation methods, FocalClick [9] designs an efficient pipeline following the coarse-to-fine fashion. In this work, we make extensions based on this classical framework and propose FocalClickXL for high-quality and unified interactive segmentation. Our main insight is to decompose the interactive segmentation pipeline into different subnets to focus on context, object, and detail-level information. This modular design brings the following advantages.
First, each subnet could be pre-trained sufficiently with large-scale specific data, which assists our FocalClick-XL for high-quality segmentation. Specifically, we leverage the pretrained visual encoder of SAM for the Context-Net, which takes the full image as input for capturing the global context. The Object-Net takes the image patch around the segmenta
1. SAM supports the previous/initial masks as inputs, but shows very limited improvements w/o additional click or box.
tion target and produces the primitive segmentation masks according to the provided user interactions. This part is trained on a combination dataset with object-centric images. The Detail-Net takes charge of refining the predicted details. It takes the small image patches around the boundaries or low-confident regions as input and is trained on a combined dataset of image segmentation and matting samples.
Second, in this decomposed pipeline, only the ObjectNet is interaction-sensitive, thus we could share the majority of knowledge (i.e., Context-Net and Detail-Net) across different interactions. It significantly eases the goal of building a unified framework for various interaction forms. Concretely, we add a single Prompting Layer at the input of the Object-Net to encode different types of interactions. To support novel interaction forms, we only need to train a novel Prompting Layer and keep other parameters frozen.
To mitigate the computation burden brought by the cascaded subnet, we follow the strategy of FocalClick. During inference, the Context-Net runs only once per image. After each round of user interaction, we dynamically zoom in on the object patch and detail patch with a small input size to accelerate the Object-Net and Detail-Net. Besides the exploration of the model structure, we thoroughly investigate how to evaluate each type of interaction and propose a series of evaluation protocols and benchmarks. For example, we developed a deterministic scribble generator that supports evaluating scribble-based methods automatically. Besides, we construct a series of coarse mask sets with different qualities to evaluate the robustness for the task of coarse mask refinement.
As presented in Fig. 1. FocalClick-XL is compatible with various of user interactions and could predict high-quality masks with fine details. Extensive experiments show that FocalClick-XL demonstrates state-of-the-art performance on click-based interactive segmentation benchmarks, and shows competitive performances for various interaction formats. In general, the contribution of this work could be summarized in three folds: First, our work makes FocalClick [9] benefit from large-scale training, thus reaching state-of-the-art performance for click-based segmentation benchmarks. Second, we design a decomposed pipeline that extends FocalClick for novel interaction forms like scribbles, boxes, and coarse masks. Third, we carry out a thorough study of various interactions and formulate well-defined train/val protocols and benchmarks, which would benefit the community in exploring more types of interaction.
Difference from conference version. This manuscript improves the conference version [9] significantly with wider extensions for up-scaled training and support for more interactions. 1) In Sec. 4, we up-scaled the model design and training data for each stage of FocalClick and apply different training strategies in each stage. 2) We extend the user-interaction support for FocalClick from click to wider types, including scribble, box and coarse masks in Sec. 5. At the same time, we thoroughly investigate the evaluation protocol for these new interactions. 3) We give a more detailed experimental analysis for the newly added part and scaling strategies in Sec. 7, and add more discussions and comparisons with more recent works.
# 2 RELATED WORK
Interactive segmentation. Before the era of deep learning, researchers [15], [16], [17], [18] take interactive segmentation as an optimization procedure. DIOS [19] first introduces deep learning into interactive segmentation by embedding positive and negative clicks into distance maps, and concatenating them with the original image as input. It formulates the primary pipeline and train/val protocol for clickbased interactive segmentation. After this, [20], [21] focus on the issue of ambiguity and predict multiple potential results and let a selection network or the user choose from them. FCANet [12] emphasizes the particularity of the first click and uses it to construct visual attention. BRS [22] first introduces online optimization, which enables the model to update during annotation. f-BRS [14] speeds up the BRS [22] by executing online optimization in specific layers. CDNet [10] introduces self-attention into interactive segmentation to predict more consistent results. RITM [7] add the previous mask as network input to make the prediction more robust and accurate. SimpleClick [8] leverage ViT encoders to extract more power features, achieving the stateof-the-art performance. Besides click-based segmentation, other formats of interactions have also been explored. IOG [1] proposes to combine the boxes and clicks together. Other works [3], [3], [4], [5], [6], [23], [24], [25] investigate scribbles as scribbles are more flexible than clicks.
Segment anything series. SAM [2] is a powerful model for segmentation. It is essentially designed for click-based interactive segmentation, and can further be extended to support automatic segmentation or using the box as a prompt. SAM serves as the foundation model for various downstream tasks across different visual tasks [26], [27], [28], [29], [30], [31], [32], [33]. Some previous works make improvements to SAM. For example, SAM-HQ [34] explores to refine the masks produced by SAM. SEEM [35] unifies more modalities of input as tokens. Large numbers of works [36], [37], [38], [39], [40] design adapter layers or LoRAs to adjust SAM to specific tasks. However, the SAM series makes some compromises for automatic segmentation thus sacrificing the performance for interactive segmentation. In this work, we get inspiration from them to unleash the power of largescale training and take a further stage for the specific task of interactive segmentation.
Local inference in interactive segmentation. Some previous methods [7], [10], [14] [7], [10], [14] crop the region around the predicted target for subsequent inference, similar to our Target Crop. However, since they generate final masks on these crops, they must maintain high resolution. In contrast, FocalClick utilizes the Target Crop to locate the Focus Crop without relying on the Segmentor for fine details, allowing us to resize it for higher speed. Other works [13], [41], [42] refine predictions in a coarse-to-fine manner but incur high computational costs. RIS-Net [41] refines using multiple ROIs based on click positions, while EdgeFlow [13] and $9 9 \%$ AccuracyNet [42] focus on boundary refinements. Unlike these, FocalClick strategically selects local patches, significantly reducing FLOPs by decomposing the process into coarse segmentation and refinement, making interactive segmentation more efficient.
Fig. 2: Overall framework of FocalClick. We take the image, two click maps, and the previous mask as input. We use binary disks with radius 2 to represent the click. First, we select the Target Crop around the target object and resize it to a small size. It is then sent into Segmentor to predict a coarse mask. Next, we chose a Focus Crop by calculating the different regions between the previous masks and the coarse prediction to refine the details. At last, Progressive Merge updates the local part that the user intends to modify and preserves the details in other regions.
# 3 METHOD
We begin by introducing our efficient FocalClick pipeline and elaborating on a newly proposed task, Interactive Mask Correction, along with our benchmark. Following this, we present the extensions and enhancements that define FocalClick-XL in the subsequent sections.
# 3.1 FocalClick: Multi-stage Decoupled Pipeline
The core idea of the FocalClick pipeline is to break down a single, heavy inference over the entire image into two lightweight predictions on smaller patches. As illustrated in Fig. 2, the process begins with Target Crop, which selects a patch centered around the target object, resizes it to a smaller scale, and feeds it into the Segmentor to generate a coarse mask. Then, Focus Crop identifies a local region that requires refinement and sends the zoomedin patch to the Refiner for further enhancement. Finally, Progressive Merge integrates these localized predictions back into the full-resolution mask. This iterative refinement process ensures that only a small local region is updated after each user interaction, while all pixels in the final prediction benefit from repeated refinements distributed across multiple rounds.
Target crop. The objective is to eliminate background information that is irrelevant to the target object. To achieve this, we first determine the minimum bounding box that encloses both the previous mask and the newly added click. This bounding box is then expanded by a factor of $r _ { T C } ~ = ~ 1 . 4$ following [7], [14]. Afterward, we crop the relevant input tensors, including the image, previous mask, and click maps, and resize them to a smaller scale for efficient processing.
Coarse segmentation. This step aims to generate an initial rough mask for the target object, which serves as a foundation for locating the Focus Crop and enabling further refinement. The Segmentor can be any segmentation network [43], [44], [45], [46], [47], allowing customization for different scenarios. In our implementation, we adopt state-of-the-art methods such as HRNet+OCR [48], [49] and SegFormer [50] as representative architectures. As illustrated in Fig. 2, we follow the RITM framework [7], incorporating two convolutional layers to adjust the channel and scale of the click maps, followed by feature fusion after the stem layers.
Focus crop. It aims to locate the area that the user intends to modify. We first compare the differences between the primitive segmentation results and the previous mask to get a Difference Mask $M _ { x o r }$ . We then calculate the max connected region of $M _ { x o r }$ that contains the new click, and we generate the external box for this max connected region. Similar to the Target Crop, we make expansion with ratio $r _ { F C } = 1 . 4$ . We note this region Focus Crop. Accordingly, we crop local patches on the input image and click maps. Besides, we use RoiAlign [51] to crop the feature and the output logits predicted by the Segmentor.
Local refinement. It recovers the details of the coarse prediction in Focus Crop. We first extract the low-level feature from the cropped tensor using Xception convs [52]. At the same time, we adjust the channel number of the RoiAligned feature and fuse it with the extracted low-level feature. To get refined predictions, we utilize two heads to predict a Detail Map $M _ { d }$ and a Boundary Map $M _ { b } ,$ and calculate the refined prediction $M _ { r }$ by updating the boundary region of the coarse predicted logits $M _ { l } ,$ as in Eq. 1.
$$
M _ { r } = { \cal S } i g m o i d ( M _ { b } ) * M _ { d } + \left( \frac { } { } \thinspace - { \cal S } i g m o i d ( M _ { b } ) \right) * M _ { l }
$$
Progressive merge. When annotating or editing masks, we do not expect the model to update the mask for all pixels after each click. Otherwise, the well-annotated details would be completely over-written. Instead, we only want to update in limited areas that we intend to modify. Similar to the method of calculating Focus Crop, Progressive Merge distinguishes the user intention using morphology analysis. After adding a user click, we simply binarize the newly predicted mask with a threshold of 0.5 and calculate a different region between the new prediction and the preexisting mask, then pick the max connected region that contains the new click as the update region (the part the green in Fig. 3). In this region, we update the newly predicted mask onto the previous mask, and we keep the previous mask untouched in other regions.
When starting with a preexisting mask or switching back from other segmentation tools, we apply Progressive Merge to preserve the correct details. While annotating from scratch, we active the progressive mode after 10 clicks. Training supervision. The supervision of the boundary map $M _ { b }$ is computed by down-sampling the segmentation ground truth 8 times and resizing it back. The changed pixels could represent the region that needs more details. We supervise the boundary head with Binary Cross Entropy Loss $L _ { b c e }$ . The coarse segmentation is supervised by Normalized Focal Loss ${ { L } _ { n f l } }$ proposed in RITM [7]. For the refined prediction, we add boundary weight (1.5) on the NFL loss, and we note it as $L _ { b n f l } ,$ the total loss could be calculated as Eq. 2.
$$
{ \cal L } = { \cal L } _ { b c e } + { \cal L } _ { n f } + { \cal L } _ { b n f }
$$
# 3.2 Interactive Mask Correction
In practical applications, large portions of annotation tasks provide pre-inferred masks. In this case, annotators only need to make corrections on them instead of starting from zero. Besides, during annotation, when annotators switch to matting, lasso, or polygon tools and switchback, we also expect to preserve the pixels annotated by other tools. However, existing methods predict new values for all pixels after each click. Thus, they are not compatible with modifying preexisting masks or incorporating other tools.
To solve this problem, we make the following attempts: 1) We construct a new benchmark, DAVIS-585, which provides initial masks to measure the ability of mask correction. 2) We prove that our FocalClick shows significant superiority over other works in this new task.
New benchmark: DAVIS-585. Existing works uses GrabCut [15], Berkeley [53], DAVIS [54], SBD [55] to evaluate the performance of click-based interactive segmentation. However, none of them provides initial masks to measure the ability of Interactive Mask Correction. Besides, GrabCut and Berkeley only contain 50 and 100 easy examples, making the results not convincing. SBD provides 2802 test images, but they are annotated by polygons with low quality. DAVIS is firstly introduced into interactive segmentation in [20]. It contains 345 high-quality masks for diverse scenarios. However, as [20] follows the setting of DAVIS2016, it merges all objects into one mask. Hence, it does not contain small objects, occluded objects, and nosalient objects. In this paper, we chose to build a new test set based on DAVIS [54] for its high annotation quality and diversity, and we made two modifications:
First, we follow DAVIS2017 which annotates each object or accessory separately, making this dataset more challenging. We uniformly sample 10 images per video for 30 validation videos, and we take different object annotations as independent samples. Then we filter out the masks under 300 pixels and finally get 585 test samples, so we call our new benchmark DAVIS-585.
Second, to generate the flawed initial masks, we compare two strategies: 1) Simulating the defects on ground truth masks using super-pixels. 2) Generating defective masks using offline models. We find the first strategy has two advantages: 1) It could control the distribution of error type and the initial IOUs. 2) The simulated masks could be used to measure the ability to preserve the correct part of preexisting masks. Therefore, we use super-pixels algorithm2 to simulate the defect. We first use mask erosion and dilation to extract the boundary region of the ground truth mask. We then define three types of defects: boundary error, external FP (False positive), and internal TN (True Negative). After observing the error distribution in real tasks, we set the probability of these three error types to be [0.65, 0.25, 0.1] and follow Alg. 1 to control the quality of each defective mask. To decide the quality range, we carry out a user study and find that users intend to discard the given masks when they have IOU lower than $7 5 \%$ . Considering that current benchmarks use NoC85 (Number of Clicks required on average to reach IOU $8 5 \%$ ) as the metric, we control our simulated masks to have IOUs between $7 5 \%$ and $8 5 \%$ .
# 4 SCALING UP THE MULTI-STAGE PIPELINE
Following the multi-stage design of FocalClick, we scale up the model structure and training data of each stage and further propose a stronger version, termed FocalClick-XL .
The framework of FocalClick-XL is depicted in Fig. 3. FocalClick-XL decomposes the pipeline for interactive segmentation into three stages to model the features in context-, object-, and detail-level individually. The full image is sent into the Context-Net to extract global features. We crop the target region (based on the previous round segmentation result) and feed the target-object patch with interaction embeddings into the Object-Net, this part takes charge of locating the target region indicated by the user interactions and produces a primitive mask. Based on the primitive mask, we further conduct another zoom-in around the lowconfidence region and pass the small patches into the DetailNet to produce high-quality results. At last, the refined predictions and the primitive masks are aligned back to the full image to get the final prediction.
Fig. 3: The extended framework of FocalClick-XL . The overall pipeline is decomposed into three stages that individually focus on context-, object-, and detail-level information. First, the full image is fed into the Context-Net to extract the global feature. Afterward, the global feature is zoomed in around the target object (based on the previous segmentation results). The image patches around the object and the user-interaction embeddings are sent into the Object-Net to get a primitive mask. Next, we zoom in on the low-confidence local regions of the primitive masks into the Detail-Net to refine the prediction. At last, we align back the detail mask and the primitive mask to the full image.
# 4.1 Structure Decomposition
We introduce the detailed model structures of the FocalClick-XL. The pipeline is split into three meta-steps to focus on the information from context-, object-, and detaillevel.
Context-Net. The main objective of the Context-Net is to understand complicated scenes and distinguish the potential objects from the background. In the design, the Context-Net harnesses the weights from the SAM Encoder to capitalize on the robust ability of scene understanding and object priors. To adapt the SAM encoder, we integrate a learnable adapter after each transformer layer. Our adapter layers are designed as bottlenecks consisting of “ConvGeLU-Conv”. The convolutional layers reduce the channels of the feature map and the GeLU activation introduces the non-linearities. Commencing from the second round of interactions, we compute the Region of Interest (RoI) for the target object using predicted masks from the previous round. Subsequently, we employ RoI-Align [56] to crop the RoI from the SAM features, preparing for feature fusion for the Object-Net.
Object-Net. It encodes the provided interactions to segment the user-intended parts. The structure follows SegFormer [50]. The Object-Net receives the zoomed-in region of the full image and interaction maps as inputs. It extracts target-centric features endowed with intricate details and facilitates early fusion between interaction maps and the image. This early fusion significantly amplifies the model’s controllability in modifying the fined details with excessive user interactions. The Object-Net also receives the RoIAligned features from the Context-Net, we perform an elementwise addition to fuse the target-centric feature extracted by the Object-Net with the RoI-aligned feature from Context-Net. This fused feature aims to encapsulate both global and local perspectives. Subsequently, we decode the fused feature for the primitive mask prediction. Specifically, a multi-layer perceptron (MLP) is employed to generate a one-channel binary mask. Notably, this primitive mask has already exhibited competitive performance across various datasets. Throughout the training phase, we utilize normalized focal loss to supervise the primitive mask for click-based interactive segmentation. The Object-Net and the adapter layers of the Context-Net are optimized together on object-centric samples.
Detail-Net. This module further refines the low confidence of the primitive mask of the Object-Net to produce highquality masks. The Detail-Net is designed as a universal module that could act as a plug-in to support different user interactions. Inspired by image matting methods [57], [58], Detail-Net takes a “tri-map” along with the image as input. The tri-map consists of a high-confidence foreground mask, a high-confidence background mask, and a mask for the low-confidence regions. The positive/negative user interactions and the high-confidence part of the primitive mask predicted by the Object-Net are merged to act as the tri-maps. After zooming in on the target object for ObjectNet, the input of the Detail-Net is magnified again around the uncertain region to capture more details. The structure of Detail-Net is a U-shape network with a MobileNetV2 [59] as the encoder. We collect large numbers of data from both the segmentation dataset [60], [61], [62], [63], [64], [65], [66], [67] and the image matting dataset [58], [68], [69] to train the Detail-Net. Use leverage MSE Loss to force Detail-Net to predict the transparency value of the object. Thus, by forcing Detail-Net to learn the alpha value for the transparent parts like the hairs and fur, it is equipped with the ability to capture finer details by producing the alpha-matte. We could simply use a threshold of 0.5 to binarize the prediction to give segmentation masks.
# 4.2 Progressive Magnification
The cascaded subnets bring extra burden for the computation and inference speed. To mitigate this problem, we propose a progressive magnification strategy to increase the efficiency of our pipeline.
First, we make the heaviest Context-Net as an “offline” extractor. As the Context-Net receives the full image as input, it models each pixel equally thus irrelevant to the user interaction. In our pipeline, the Context-Net is executed only once per image and could be pre-executed before receiving user interactions. Adhering to the original settings, the full image is resized to $1 0 2 4 \times 1 0 2 4$ as input.
Second, we progressively zoom in on the local regions to support small-size input for the Object-Net and DetailNet. Concretely, based on the segmentation results of the previous round, we calculate and expand the box around the segmentation target to get the input of the Object-Net. If the previous around the mask is void, we calculate the box covering the full image. Considering box region is zoomed in, we do not need the high input resolution to capture the details. Thus, the input size of Object-Net is set as $3 8 4 \times 3 8 4$ for better efficiency. A similar zoom strategy is also applied to the Detail-Net, we crop the small local regions around the low-confidence pixels and formulate multiple patches as a batch. Thus, although the input size of Detail-Net is set as $2 5 6 \times 2 5 6 ,$ , we could still get highly refined details. In this way, we make each subnet focus on a different part of the images, effectively reducing the computation burden and inference time of our FocalClick-XL pipeline.
# 5 EXTENSION TO MORE INTERACTIONS
We further present the extension for more forms of user interactions with a unified structure. At the same time, we investigate the evaluation protocol for different interactions.
# 5.1 Prompting Unified Interaction
FocalClick-XL supports various user interactions like click, box, scribble, coarse mask, etc. In this section, we introduce the representation of different interaction forms and the transferable training scheme. In addition, we also elaborate on the user-interaction simulation strategies for constructing training samples.
Interaction representation. Different from SAM [2] series that use coordinates to represent interactions, we represent different interactions uniformly with a 2-channel “bi-map” for the positive and negative signals. For clicks, we follow previous works [7], [8], [9] to encode clicks as round disks. Scribbles are directly divided into a positive and negative scribble map. Boxes are represented with a binary rectangle mask. As for coarse masks, the mask would be regarded as the positive interaction channel, and an all-zero negative map is represented. In the following part, we first present the training strategies for supporting various of interactions, afterward, we present how to simulate diversified user interactions during training.
Promptable transferation. As we formulate different user interactions as a unified 2-channel map, FocalClick-XL can deal with various tasks using the same architecture. We concatenate the 2-channel map with the input image and use the Prompting layer (projection conv) to encode interaction signals. Considering clicks could be regarded as the basic elements for different interactions, we first train FocalClick-XL on clicks as the pertaining. Specifically, we pre-train the Object-Net and the adapter layers of the Context-Net together on simulated clicks and object-centric data (Context-Net takes the full image, and we sample boxes around the GT masks for the input of Object-Net). After pretraining, we find that tuning only the Promtping Layer is sufficient for transferring FocalClick-XL to different tasks. The Prompting Layer of the Object-Net projects the interaction maps into the control signals. After task-specific tuning, the control signals of different forms of interactions could be unified into the same feature space. Given a new interaction definition, FocalClick-XL could be easily transferred to new tasks with low training and storage requirements.
Fig. 4: Simulated user interactions, such as scribbles, boxes, and coarse masks. For the scribbles, we first develop several meta-simulators and compose them for more versatile results.
Interaction simulation. The strategies used to simulate user interactions play a pivotal role in model training. To simulate clicks, we adopt methodologies from previous works [7], [8], [9], [11], placing clicks iteratively within the maximin-error regions identified in current predictions. Additionally, we assign higher probabilities to sample clicks around boundary regions. For boxes, we derive the minimum bounding box from ground truth masks and introduce variations in box sizes and locations. Coarse masks undergo random erosion, dilation, and downsampling processes to augment ground truth masks. Moreover, we incorporate current model predictions into this augmentation process. Visual examples of these strategies are illustrated in Fig. 4.
Simulating human scribbles poses the greatest challenge due to their flexible and arbitrary nature in shape. Past approaches [4], [6] have utilized simplistic strategies such as linking points or filling basic geometries. This study introduces multiple meta-simulators designed to generate diverse scribbles. As demonstrated in Fig. 4, the bezier scribble uses the bezier function to draw curves within the mask regions; the axial scribble calculates the media axis of the given mask; the boundary scribble draws lines along with the mask boundary. For the stroke thickness, we randomly choose values from 3 to 7. We combine these four strategies to generate diversified scribbles to simulate the diversified user input in real practices.
# 5.2 Evaluation Protocols
Most of the previous works investigate the click-based setting. There are no existing standard evaluation protocols for other interactions like scribbles and coarse masks. Thus, it is not trivial to build a benchmark for evaluation. We follow the evaluation protocol of click-based methods and
Fig. 5: The procedure of deterministic scribble generation. The true positives, false negatives, and false positives of the segmentation result are marked in red, green, and blue respectively. We first compute the largest connected region of the error mask (middle image) and generate deterministic scribbles as the following interaction.
# Algorithm 2 Deterministic Scribble Simulator
1: max mask max(error mask) 2: skel mask $$ MedialAxis(max mask) 3: Graph $$ RadiusNeighbourGraph(skel mask) 4: for subgraph $\in$ Connected(Graph) do 5: while True do 6: cycle $$ FindCycle(subgraph) 7: if cycle $\scriptstyle = =$ None then 8: break 9: else 10: RemoveCycle(subgraph, cycle) 11: end if 12: end while 13: end for 14: dis $t a n c e \gets [ ]$ 15: for $v \in$ Graph.nodes() do 16: max path $$ ShortestPath(Graph, v) 17: distance.append(max path) 18: end for 19: longest path ← max(distance) 20: scribbl $^ { \circ } \gets ^ { }$ BezierCurve(longest path)
extend it to scribbles. Besides, we formulate the protocols for boxes and coarse masks.
Scribble-based evaluation. We extend the click-based protocol for scribbles. The challenge is that clicks could simply be added at the center of the error region, but not for scribbles, as they have various shapes, which introduces randomness. Accordingly, we utilize a deterministic scribble simulator that synthesizes scribbles according to the shape and size of the given mask. As demonstrated in Fig. 5, similar to the click-based protocol, we first calculate the max error regions. Then, the medial axis for the largest error mask is computed to obtain the skeleton of the objects. Afterward, we transform the skeleton mask into a radius neighbor graph, where the neighborhood of a vertex points at a distance less than the radius from it. Then we divide the graph into connected components sub-graphs and remove its cycles. Finally, we will create a Bezier curve with the points in the graph’s longest path. The pseudocode is shown in Alg. 2. Therefore, we could iteratively add scribbles on the FP or FN regions. Thus, we generalize the NoC metric for clicks to NoS (Number of Scribbles), and report NoS85/90 and $\mathrm { N o F ^ { 2 0 } } 8 5 / 9 0$ .
Box and coarse mask evaluation. Box and coarse masks only provide single-round interactions. For the boxes, we measure the mIoU. As for the coarse masks, we exert different levels of perturbations on the ground truth masks to get a series of coarse mask datasets. We report the mIoU for the predicted masks with those coarse mask datasets to verify the robustness of our method.
TABLE 1: Model configurations for the basic FocalClick. We present two versions with different resolutions.
Specifically, to evaluate FocalClick-XL ’s ability for coarse mask refinement. We design a mask perturbation strategy to generate flowed masks with different initial IoUs. Specifically, we leverage interactive perturbation by applying different types of kernels to conduct erosion or dilation on the ground truth masks. For each interaction, we add a small perturbation on the current mask, until it reaches the target IoU range. Thus we generate different levels of coarse masks with ranges of [0.85, 0.90], [0.75, 0.80], [0.65, 0.70], [0.55, 0.60], [0.45, 0.50].
# 6 EXPERIMENTS FOR FOCALCLICK
This section introduces the basic experiment results of FocalClick. We start with the experiment configurations, then we introduce the ablation studies to verify the effectiveness of each component and report the performance of interactive mask correction. Afterwards, in the next section, we further analyze the extended part of FocalClick-XL .
# 6.1 Experimental Configuration
Model series. To satisfy the requirement for different scenarios, we design two versions of models with different input resolutions as demonstrated in Tab. 1. The S1 version is adapted to edge devices and the plugins for web browsers. The S2 version would be suitable for CPU laptops. In this paper, we conduct experiments for both SegFormer [50] and HRNet [48] as our Segmentor to show the universality of our pipeline. In the rest of the paper, we use S1, S2 to denote different versions of our model.
Training protocol. We simulate the Target Crop by randomly cropping a region with a size of $2 5 6 ~ \times ~ 2 5 6$ on the original images. Then, we simulate the Focus Crop by calculating the external box for the ground truth mask, or making random local crops centered on boundaries with the length of 0.2 to 0.5 of the object length. Then, we add augmentations of random expansion from 1.1 to 2.0 on the simulated Focus Crop. Thus, the whole pipeline of Segmentor and Refiner is trained together end-to-end.
For the strategy of click simulation, we exploit iterative training [11] following RITM [7]. Besides the iteratively added clicks, the initial clicks are samples inside/outside the ground truth masks randomly following [19]. The maximum number of positive /negative clicks is set as 24 with a probability decay of 0.8. For the hyper-parameters, following RITM [7], we train our models on a combination dataset of COCO [60] and LVIS [61]. We also report the performance of our model trained on SBD [55], and a large combined dataset [60], [61], [62], [63], [64], [65], [66], [67]. During training, we only use flip and random resize with the scale from 0.75 to 1.4 as data augmentation. We apply Adam optimizer of $\beta _ { 1 } = 0 . 9 , \beta _ { 1 } = 0 . 9 9 9$ . We denote 30000 images as an epoch and train our models with 230 epochs. We use the initial learning rate as $5 \times 1 0 ^ { - 4 }$ and decrease it
TABLE 2: Ablation studies for FocalClick on both interactive segmentation from scratch and interactive mask correction. ‘TC’, ‘FC’, ‘PM’ denote Target Crop, Focus Crop, and Progressive Merge. ‘NoC’, ‘NoF’ stand for the Number of Clicks and the Number of Failures.
10 times at the epoch of 190 and 220. We train each of our models on two V100 GPUs with batch size 32. The training takes around 24 hours.
Evaluation protocol. We follow previous works [7], [10], [12], [14], [19], [22] to make fair comparisons. During evaluation, the clicks are automatically simulated with a fixed strategy: Each click would be placed at the center of the largest error region between the previous prediction and the ground truth mask. For example, when starting from scratch, the first click would be placed at the center of the ground truth mask. Additional clicks would be added iteratively until the prediction reaches the target IOU (Intersection over Union) or the click number reaches the upper bound.
For the metrics, we report NoC IOU (Number of Clicks), which means the average click number required to reach the target IOU. Following previous works, the default upper bound for click number is 20. We note the sample as a failure if the model fails to reach the target IOU within 20 clicks. Hence, we also report NoF IOU (Numbers of Failures) to measure the average number of failures.
# 6.2 Ablation Study
We conduct plenty of ablation studies for each module of FocalClick and report experimental results on both the original DAVIS and our DAVIS-585 dataset.
Holistic analysis. We verify the effectiveness for each of our novel components in Tab. 2. We first construct a naive baseline model based on SegFormer-B0, noted as Naive-B0- S1/S2. It takes the full image as input and does not apply TC (Target Crop), FC (Focus Crop), and PM (Progressive Merge), which is similar to early works like [19], [20], [22]. It is shown that this kind of pipeline performs poorly, especially for small input resolutions S1 $1 2 8 \times 1 2 8 )$ . Most test samples fail to reach the target IOU within 20 clicks. Then, after progressively adding the TC, FC, and PM, we observe that each component brings steady improvement for annotating from both initial masks and scratch.
Making comparisons between S1 and S2, we find that the naive version heavily relies on the input scale. The performance drops tremendously from S2 to S1. However, with the assistance of TC, FC, and PM, the disadvantage of small input could be compensated.
TABLE 3: Statistics for the area of Focus Crop and Target Crop. We report the ratio relative to the full scale image.
TABLE 4: The expand ratio for TC (Target Crop) and FC (Focus Crop). The values show the NoC80/90 on DAVIS. The last row/column shows the performance without FC/TC.
Analysis for cropping strategy. We first count the average area of the Focus Crop, Target Crop and calculate the ratio to full image in Tab. 3. It shows that our cropping strategy is effective in selecting and zoom-in the local regions.
In Tab. 4, we verify the robustness of our cropping strategy. The results show that the fluctuation caused by the hyper-parameter is negligible compared with the improvement brought by the modules. Besides, for the evaluation result in Tab. 5, we simply set those ratios as 1.4 following previous works [7], [10], [14]. However, Tab. 4 shows that our work could reach even higher performance with a more delicate tuning.
We also visualize the intermediate results of the Refiner to demonstrate its effectiveness. In Fig. 6, the red boxes in the first column show the region selected by Focus Crop. The yellow box denotes the Target Crop (The first row shows the case of the first click; hence the Target Crop corresponds to the entire image). The second and the third column show the prediction results of the Segmentor and the Refiner. It demonstrates that Refiner is crucial for recovering the fine details.
Computation analysis. The objective of FocalClick is to propose a practical method for mask annotation; efficiency is a significant factor. In Tab. 6, we make a detailed analysis and comparison for the number of parameters, FLOPs, and the inference speed on CPUs. We summarize the methods of the predecessors into five prototypes according to their backbone and input size.
In Tab. 6, most works use big models and 400 to 600 input sizes, which makes them hard to use on CPU devices. In contrast, FocalClick supports light models and small input sizes like 128 and 256. The FLOPs of our B0-S1 version is 15 times smaller than the lightest RITM [7], 360 times smaller than FCANet [12]. Using the same Segmentor, our hrnet-18s version could reduce 2 to 8 times FLOPs compared with original RITM [7].
Besides, as FocalClick could be adapted to various Segmentors, the FLOPs could be further reduced by using more light-weighted architectures like [70], [71].
# 6.3 Performance for Mask Correction
We evaluate the performance of mask correction on the DAVIS-585 benchmark, and the results are listed in Tab. 5. We also report the results of annotating from scratch.
TABLE 5: Quantitative results on DAVIS-585 benchmark. The metrics ‘NoC’ and ‘NoF’ mean the average Number of Clicks required and the Number of Failure examples for the target IOU. All models are trained on COCO [60]+LVIS [61].
Fig. 6: Qualitative analysis for the effectiveness of Refiner. The first column denotes the Target Crop in yellow and the Focus Crop in red. The second and the third column demonstrate the mask in focus crop before and after refinement.
TABLE 6: Computation analysis for FocalClick and classical pipelines. $' \mathrm { B } 0 / 3 ^ { \prime }$ is short for SegFormer-B0/3. ‘Seg’ denotes Segmentor, ‘Ref’ denotes Refiner. $' 4 0 0 ^ { \prime }$ , $' 6 0 0 ^ { \prime }$ , $' 5 1 2 ^ { \prime }$ denote the default input size required by different models. The speed is measured on a CPU laptop with $2 . 4 \operatorname { G H z }$ , $4 \times$ Intel Core i5.
All initial masks provided by DAVIS-585 have IOUs between 0.75 to 0.85, and some challenging details have already been well annotated. Hence, making good use of them could logically facilitate the annotation. However, according to Tab. 5, RITM [7] do not show much difference between starting from initial masks and scratch. In contrast, FocalClick makes good use of the initial masks. It requires significantly smaller numbers of clicks when starting from preexisting masks. Besides, it shows that the S1 version of FocalClick could outperform the big version of RITM [7] for mask correction tasks with 1/67 FLOPs.
# 7 EXPERIMENTS FOR FOCALCLICK-XL
In this section, we give a detailed analysis for the extended part of FocalClick-XL . We first report comparison results with state-of-the-art methods across different benchmarks. Afterwards, we report in-depth ablation studies and qualitative demonstrations.
# 7.1 Quantitative Analysis
Click-based segmentation. We first give a quantitative analysis of FocalClick-XL on click-based interactive segmentation. In Tab. 7, we compared FocalClick-XL with previous state-of-the-art methods on GrabCut [15], Berkeley [53], SBD [55], and DAVIS [54]. We divide the table into four blocks according to the usage of training data and observe that our method presents competitive results against the previous state-of-the-art under different settings. We could also observe that, compared with the basic FocalClick, our FocalClick-XL demonstrates significant improvements.
In the last block, we list the performance for SAM [2] and its variants SAM-HQ [34]. Although trained on large datasets and with big models, SAM [2] does not demonstrate its dominance. As analyzed before, the performance of SAM is limited by fine details. SAM-HQ [34] explores to refine the masks produced by SAM. However, the refinement module is interaction-agnostic. Thus, it could not benefit from the increasing number of user interactions and fails to get satisfactory performance. In contrast, as the magnifier takes the interaction map as guidance, FocalClickXL could focus on target-centric details and benefits from excessive user interactions. FocalClick-XL achieves new state-of-the-art across multiple benchmarks.
In Tab. 8, we also report the k-mIoU results on DAVIS [54], HQSeg-44K [34], ssTEM [79] and BraTS [80]. Results demonstrate the strong ability of FocalClick-XL in different evaluation configurations and domains.
Scribble-based segmentation. In Tab. 10, we report our performance on scribble-based segmentation. We use the protocol introduced in Sec. 5.2 to measure the NoS (Numbers of Scribble). As most of the previous scribble-based methods are not open-sourced, we reproduce some of the representative ones for comparisons. In row 1, we develop a similarity-based model like [4], which shows poor performance as it could not deal with the fine details. Row 2 corresponds to using the click-based solution [7] to deal with scribbles. We find that, although clicks could be regarded as short scribbles, directly using click-based models could not get satisfactory results. In row 3, we follow IFIS [6] to simulate scribbles via linking randomly sampled points. This strategy brings improvements compared to using disks but still gets poor performance. FocalClickXL shows significant superiority over those naive solutions, which proves the strong transferring abilities of FocalClickXL across different interaction formats. We could also refer to Tab. 7 for more analysis. When both trained on COCO [60] and LVIS [61], scribble-based methods are slightly better than clicks-based ones as scribbles could give more indications compared with clicks.
TABLE 7: Comaprisons with SOTAs. We report evaluation results on GrabCut, Berkeley, SBD and DAVIS datasets. ‘NoC 85/90’ denotes the average Number of Clicks required the get IoU of $8 5 / 9 0 \%$ . ‘Synthetic’ data uses the datasets of [60], [72], [73]. ‘Large Dataset’ denotes a combined dataset. [60], [61], [62], [63], [64], [65], [66], [67]
TABLE 8: Quatitatrive comparisons for k-mIOU with previous state-of-the-art methods. This metric measures the mean IoU given k numbers of clicks.
TABLE 9: Ablation studies for the core components on clickbased settings. We start with Context-Net (SAM) as a baseline and add components step by step to verify their effectiveness.
Boxes and coarse masks as input. The box and coarse masks only act as the indication for the initial round. We report the mIoU (mean Interaction over Union) for the predicted masks. In Tab. 11, we report the mean IoU given the single bounding box. FocalClick-XL archives impressive performance with over $9 0 \%$ mIoU on GrabCut and Berkeley.
It shows a notable performance gain against the original version of SAM. We also report the performance with a single click or scribble. Compared with these two formats, the box gives the best indications for the rough contour of the target object, which would be a premier choice for the first round of interaction.
In Tab. 12, we verified the abilities of FocalClick-XL -B to refine the coarse input masks. We conduct different levels of perturbation on the ground truth masks, getting the coarse masks with different IoUs. The results show that FocalClickXL is robust for the coarse masks, even poor masks with very low IoUs could be refined to a good state. Following SAM-HQ [34], we also make evaluations on BIG [81] and UVO [82] in Tab. 13.
# 7.2 Ablations Studies
After verifying our promising performance, in this section, we dive into the details of our framework. We conduct ablation studies to verify the effectiveness of our design. We first analyze the importance of each subnet of our pipeline. Afterward, we make experiments for the structure of the SAM adapter and the tuning strategy to transfer FocalClickXL from click to other interactions.
TABLE 10: Quatitative results (NoS) of scribble-based interactions. $^ { \prime \prime } \mathrm { N o } S ^ { \prime \prime }$ means the number of scribbles required for target IoU.
TABLE 12: Quatitative results $\mathbf { \left( m I o U \right) }$ of coarse mask refinement. We give different levels of perturbations on the ground truth masks and report the mIoUs. Results show that FocalClick-XL brings huge improvements to masks with different levels of flaws with robust performance.
TABLE 14: Ablation studies for adapting the Context-Net. We try placing the adapters at different positions and pick the one with the highest performance.
TABLE 16: Computation analysis. Our methods introduce a modest increase in parameters. However, the performance gains are significant.
Task decomposation. We analyze each subnet of FocalClickXL in Tab. 9. The Context-Net and Object-Net could both be utilized as a baseline to achieve the function of interactive segmentation individually. We first report their performance in the first two columns. Afterward, we add each subnet step by step. The results demonstrate steady improvements brought by each component.
SAM adapter. We find that the adapter layer in the Context-Net plays a crucial role in leveraging the strong context-aware priors of SAM for our task. In Tab. 14, we explore several design choices. The results indicate that incorporating additional Conv-GeLU-Conv layers after each transformer block yields the most favorable performance.
Transfer tuning. We further investigate how to adapt a clickbased FocalClick-XL to other forms of user interactions, using scribbles as an example. In Tab. 15, we explore tuning different components of the model. Our findings reveal that adjusting just a single input layer is sufficient to transfer FocalClick-XL across tasks. This suggests that most of the knowledge is shared across different interaction types, and the key lies in effectively encoding these interactions into the corresponding control signals.
Computation analysis. In Tab. 16, we report the numbers of parameters of FocalClick-XL and SAM. Our method requires acceptable additional parameters but brings obvious performance improvement. In Tab. 17, we compare the inference speed with previous state-of-the-art interactive
TABLE 11: Quatitative results (mIoU) of box-based interactions. We also report the results by providing a single click or scribble as a reference.
TABLE 13: Comparison results with SAM series on out-of-domain datasets. BIG [81] is used to evaluate high-quality segmentation, while UVO [82] covers a large variety of object categories. We use the ground truth box as the prompt in the BIG dataset and use the box prompt on the UVO dataset.
TABLE 15: Ablation studies for the transfer tuning strategies. From the click-based setting to the scribble-based setting, we explore tuning different parts of the model.
TABLE 17: Speed analysis. The online inference speed (waiting time after each interaction) is comparable to or even faster than previous works.
segmentation models. Thanks to the progressive magnification strategy, our FocalClick-XL shows comparable or better speed for the time of waiting after each interaction.
# 7.3 Qualitative Analysis
We begin by visualizing the comparison results with SAM in Fig. 8, where our method demonstrates significant improvements. Leveraging Object-Net and Detail-Net, FocalClickXL achieves high-quality segmentation with minimal user interactions.
In Fig. 7, we highlight the generalization capabilities of FocalClick-XL across different interaction types. The model accurately refines masks with iteratively added clicks and scribbles (rows 1–2). While box-based interactions typically support single-round refinement, they effectively capture the target’s overall location and shape, leading to satisfactory segmentation results. Additionally, FocalClickXL exhibits strong robustness when processing coarse masks—regardless of the input mask quality, it consistently enhances segmentation accuracy.
Further prediction results are shown in Fig. 9, providing additional evidence of FocalClick-XL ’s effectiveness in handling diverse user interactions. Clicks and scribbles are sequentially integrated to iteratively refine earlier predictions, demonstrating the model’s adaptability to various user input formats.
Fig. 7: Demonstrations for FocalClick-XL . Our method provides a unified solution for various interaction formats and predicts high-quality masks with fine details.
Fig. 8: Qualitative comparisons with SAM. FocalClickXL shows significantly better mask qualities compared with SAM given a single click. | Interactive segmentation enables users to extract binary masks of target
objects through simple interactions such as clicks, scribbles, and boxes.
However, existing methods often support only limited interaction forms and
struggle to capture fine details. In this paper, we revisit the classical
coarse-to-fine design of FocalClick and introduce significant extensions.
Inspired by its multi-stage strategy, we propose a novel pipeline,
FocalClick-XL, to address these challenges simultaneously. Following the
emerging trend of large-scale pretraining, we decompose interactive
segmentation into meta-tasks that capture different levels of information --
context, object, and detail -- assigning a dedicated subnet to each level.This
decomposition allows each subnet to undergo scaled pretraining with independent
data and supervision, maximizing its effectiveness. To enhance flexibility, we
share context- and detail-level information across different interaction forms
as common knowledge while introducing a prompting layer at the object level to
encode specific interaction types. As a result, FocalClick-XL achieves
state-of-the-art performance on click-based benchmarks and demonstrates
remarkable adaptability to diverse interaction formats, including boxes,
scribbles, and coarse masks. Beyond binary mask generation, it is also capable
of predicting alpha mattes with fine-grained details, making it a versatile and
powerful tool for interactive segmentation. | [
"cs.CV"
] |
# 1 Introduction
Incorporating real-world knowledge is essential for enhancing natural language processing capabilities (Xie et al., 2023; Peng et al., 2023; Pan et al., 2024). Directly extracting specific knowledge or facts from various unstructured texts is time-consuming and laborious, hence knowledge graph (KG) is adopted to store some common facts and reduce retrieval cost. However, traditional KGs are limited to static fact storage. To capture the dynamic nature of facts, temporal knowledge graph (TKG) was proposed to record facts changing over time. It can provide certain evidence for many downstream tasks, like situation analysis, political decision making and service recommendation (Mezni, 2021; Saxena et al., 2021; Jia et al., 2021; Wu et al., 2023, 2024). TKG reasoning aims to predict the missing objects of future events based
9o 9t-1 9t GCN-based model (a) Methods based on structural information.
You need to correctly predict the
missing object of
the last quadruplet:
189: [France, Accuse, IHraenz]bollah] LLM 3A3n9g:e[lFar_aMnceer,keDl]emand,
275: [France, Demand, UBS]
339: [France, Demand, ? ] (b) Methods based on semantic information.
Figure 1: Two research lines of TKG reasoning. One line leverages graph patterns across different timestamps, and the other line utilizes semantic information from event quadruples to capture logical evolution.
on existing facts (Leblay and Chekol, 2018; GarciaDuran et al., 2018; Li et al., 2021), where the formal problem definition is formulated in Section 3.1.
According to the type of utilized information, existing methods typically consist of two lines. One line relies on structural information (Figure 1(a)), modeling entity interactions through graph neural network (GCN) (Mo et al., 2021; Cai et al., 2023; Wang et al., 2023a). Previous works in this category have explored the utilization of recurrent networks with neighborhood aggregation mechanisms and recursive graph structures to jointly capture temporal dependencies and concurrent relations in TKGs (Jin et al., 2020; Li et al., 2021). The other line (Figure 1(b)) focus on semantic reasoning with pre-trained language models (PLMs) (Xu et al., 2023a), particularly applying large language models (LLMs) to generate interpretable reasoning (Chang et al., 2024). These methods (Lee et al., 2023; Liao et al., 2024; Luo et al., 2024) typically generate predictions through in-context learning with relevant historical facts. Some methods further enhance performance by incorporating retrievalaugmented generation and parameter-efficient tuning, allowing LLMs to better adapt to the TKG reasoning task. In summary, existing works focus on either structural or semantic information, overlooking the potential benefits of integrating both types of information. However, different types of information can provide complementary insights for reasoning. The absence of structural information leads to insufficient knowledge of entity interaction patterns, while the lack of semantic information prevents understanding of entities’ actions. As exemplified in Figure 1(b), France’s recurring “accuse” or “demand” actions could inform predictions about future “demand” action. There is a logical progression from “accuse” to “demand”, but the graph-based methods simply treat them as two different relations, losing this reasoning evidence.
From another viewpoint, events to be predicted can be typically divided into two categories: historical events, which have already taken place in the past, and non-historical events, which have never occurred up to now. There is an inherent reasoning gap between different events (Xu et al., 2023b; Gastinger et al., 2024). For historical events, capturing recurrence patterns is crucial while exploring evolution patterns is essential for non-historical events. This inspires several works that attempted to handle historical and non-historical events differently. TiRGN (Li et al., 2022) models both the periodic patterns (which often appear in recurring historical events) and sequential evolution patterns (which characterize non-historical events), while CENET (Xu et al., 2023b) employs a binary classifier to separate the two types of events and focus predictions on relevant candidate entities. These methods often overlook that different types of information have their own advantages when handling different types of events.
To address the aforementioned limitations, we propose a Multi-Expert Structural-Semantic Hybrid (MESH) framework that effectively integrates structural and semantic information to model historical patterns for temporal knowledge graph reasoning. This model consists of a feature encoder, two kinds of event-aware expert modules and a prediction expert module. The underlying feature encoder outputs structural information from GCN and semantic information from LLM. Then we employ two kinds of event-aware expert modules to learn information weight allocation patterns for historical/non-historical events. There is a challenge in distinguishing event types. To address this, we design a prediction expert module which assigns different weights to each event-aware expert module, thereby implicitly distinguishing different types of events. This unified architecture enables adaptive information fusion without requiring explicit event type labels, offering both flexibility and efficiency. To summarize, the contributions of our work are as follows:
• We discover and verify the complementary advantages of structural and semantic information when applied to different types of events. • We propose a novel non-generative approach to leveraging LLMs for TKG reasoning, in combination with graph-based models. • We employ two kinds of event-aware expert modules that adapt to different information preferences between historical and non-historical events, with a prediction expert for automatic weight allocation between experts. • We conduct extensive experiments on three public benchmarks and the results demonstrate the effectiveness of our proposed method.
# 2 Related Work
GCN-Based TKG Reasoning Models. GCNs have shown strong ability to model structural information for graphs, leading to a series of GCNbased methods for temporal knowledge reasoning. RE-Net (Jin et al., 2020) utilizes a recurrent event encoder and neighborhood aggregator to capture temporal dependencies and concurrent relations. REGCN (Li et al., 2021) recurrently fit entity and relation features in the order of timestamps. TiRGN (Li et al., 2022) explicitly integrates time embeddings to graph embeddings, which facilitates learning across long temporal periods. GCN-based models treat entities as nodes and integrate representations of entities and relations to predict the next event. While effective in capturing structural patterns, these methods often overlook semantic information in the reasoning process.
LLM-Based TKG Reasoning Models. Some methods formulate TKG reasoning as masked language modeling (MLM) or next-token generation tasks, using language models as the backbone. ChapTER (Peng et al., 2024) uses a pretrained language model as encoder and employs a prefix-based tuning method to obtain good representations. PPT (Xu et al., 2023a) performs masked token prediction with fine-tuned BERT. Recently, LLMs have demonstrated powerful capabilities in summarization and reasoning. ICL (Lee et al., 2023) leverages LLMs with in-context learning by carefully selecting historical facts as context and decodes the outputs to rank predicted entities. GenTKG (Liao et al., 2024) proposes a retrieval-augmented generation approach with temporal logic rules, while applying parameterefficient tuning to adapt LLMs for TKG reasoning. CoH (Luo et al., 2024) captures key historical interaction from input text and also applies parameter-efficient tuning. Although these LLMbased methods provide context-based interpretable reasoning, they often struggle to capture the complex structural patterns in TKGs.
# 3 Methods
In this section, we first introduce the problem definition and notations of the temporal knowledge graph reasoning. Then we present the overall architecture of our proposed approach, followed by a detailed description of each model component.
# 3.1 Problem Formulation
A TKG $\mathcal { G }$ consists of a set of static graphs $\mathcal { G } _ { t }$ , where each static graph contains all the facts at timestamp $t$ . Formally, a TKG can be represented as $\mathcal { G } =$ $\{ \mathcal { E } , \mathcal { R } , \mathcal { T } , \mathcal { F } \}$ , where $\varepsilon , \mathcal { R }$ , and $\tau$ denote the sets of entities, relations, and timestamps, respectively. $\mathcal { F }$ denotes the set of facts, each formulated as a quadruple $( s , r , o , t )$ , where $s , o \in \mathcal { E }$ , $r \in \mathcal { R }$ , and $t \in \mathcal T$ , $\mathbf { \nabla } _ { s / o }$ represents the subject/object and $r$ represents the relation between $s$ and $o$ at $t$ . To model the dynamic nature of real-world knowledge, the TKG reasoning task aims to predict the missing entity in query $( s , r , . . , t )$ with existing facts occur before time $t$ . Additionally, an event $( s , r , o , t )$ , if there exists a previous occurrence $( s , r , o , k )$ where $k < t$ , is denoted by historical event; otherwise, is denoted by non-historical event.
# 3.2 Overall Framework
In this section, we briefly introduce the overall architecture of MESH. As shown in Figure 2, it follows a three-layer architecture, consisting of an underlying feature encoder, two kinds of eventaware expert modules in the middle layer, and a prediction expert at the top. Specifically, the feature encoder contains two components: a GCN-based structural encoder taking sub-graph structures as input and a LLM-based semantic encoder taking prompts with entity/relation names as input. These two components capture TKG information from complementary perspectives and generate query representations $\pmb { q } _ { g }$ and $\pmb { q } _ { s }$ , respectively. To adaptively handle feature fusion at different layers, we employ query-motivated gates that take $\pmb { q } _ { g }$ as input. The two kinds of specialized event-aware expert modules control the fusion patterns of features for historical or non-historical events, producing representations ${ { q } _ { h i s } } / { { q } _ { n h i s } }$ , and the prediction expert learns the representation $\pmb q$ for final prediction.
Figure 2: The overall architecture of MESH.
# 3.3 Feature Encoder
High-quality encoders are essential for integrating and analyzing dynamic knowledge (Zhao et al., 2018a,b; Zhang et al., 2023; Wang et al., 2023b). Since different types of encoders can capture complementary perspectives of TKGs, we employ two independent encoders: a GCN-based encoder for extracting information from graph structures and an LLM-based encoder for modeling semantic information. In this section, we will introduce these two encoders in detail.
# 3.3.1 Structural Encoder
We employ a structural encoder that learns expressive representations of entity interactions over time. The structural encoder aggregates relational information from the graph topology, enabling the model to incorporate both structural dependencies and temporal dynamics ( $\mathrm { \Delta X u }$ et al., 2024). Formally, given a temporal knowledge graph $\mathcal { G }$ the structural encoder $G$ generates structural embeddings as:
$$
\pmb { H } _ { g } , \pmb { R } _ { g } = \pmb { G } ( \pmb { \mathcal { G } } )
$$
where $\pmb { H } _ { g } \in \mathbb { R } ^ { | \mathcal { E } | \times d }$ and $\pmb { R } _ { g } \in \mathbb { R } ^ { | \mathcal { R } | \times d }$ denote the structural feature of entities and relations, respectively. In this paper, we adopt RE-GCN (Li et al., 2021) as the structural encoder, but our method is not restricted to any specific structural encoder.
# 3.3.2 Semantic Encoder
Entities and relations in TKGs usually contain rich semantic information. For example, the entity “Javier Solan” is associated with semantic attributes as a Spanish politician, providing valuable contextual knowledge crucial for reasoning tasks. Leveraging this semantic information is essential for enhancing TKG reasoning and improving model performance. Existing methods often leverage the reasoning and generative capabilities of LLMs to directly generate the answer for TKG reasoning tasks (Lee et al., 2023; Liao et al., 2024). However, they typically suffer from high inference latency. To mitigate this issue, recent works have focused on leveraging the representational capacity of LLMs to reduce inference costs (Wang et al., 2023c,d; Liu et al., 2024a). Motivated by this, we adopt an LLM-based approach to encode entities and relations efficiently.
Specifically, we construct the following prompts:
# Entity Encoding Template
In the context of <DATA DOMAIN>, please provide <DATA TYPE> background about $\scriptstyle \leq \mathrm { E N T I T Y } >$ .
# Relation Encoding Template
In the context of <DATA DOMAIN>, what are the <DATA TYPE> perspectives through which we can understand the <RELATION>?
We fill the underlined places with datasetspecific characteristics and particular entities or relations, then feed these prompts to LLMs such as LLaMA (Touvron et al., 2023) to obtain semantic embeddings. For example, for ICEWS14 dataset, <DATA DOMAIN> and <DATA TYPE> can be political and historical wordings. $\scriptstyle \leq \mathrm { E N T I T Y } >$ can be ‘France’ and <RELATION> can be ‘Abuse’. Finally, we extract the hidden states from the last transformer layers of the LLM to obtain semantic representations of entities and relations, denoted as $\mathbf { { \cal H } } _ { L L M } \in \mathbb { R } ^ { | \mathcal { E } | \times d _ { L L M } }$ and $\mathbf { { \cal R } } _ { L L M } ~ \in ~ \mathbb { R } ^ { | \mathcal { R } | \times d _ { L L M } }$ . However, the original LLM representations are trained for general language tasks (Touvron et al., 2023) and typically have significantly larger dimensions than structural embeddings( $\mathit { \hat { d } } _ { L L M } \ > > \ d \rangle$ . To adapt ${ \pmb { H } } _ { L L M } , { \pmb { R } } _ { L L M }$ to our TKG reasoning tasks, we employ adapter modules $f _ { H } , f _ { R }$ that compress these representations to a lower-dimensional space, typically implemented as multi-layer perceptrons (MLPs) (Chen et al., 2024; Liu et al., 2024b):
$$
\pmb { H } _ { l } = f _ { H } ( \pmb { H } _ { L L M } ) , \pmb { R } _ { l } = f _ { R } ( \pmb { R } _ { L L M } )
$$
where $\pmb { H } _ { l } \in \mathbb { H } ^ { | \pmb { \mathcal { E } } | \times d } , \pmb { R } _ { l } \in \mathbb { R } ^ { | \pmb { \mathcal { E } } | \times d }$ .
# 3.3.3 Query Representation
From the underlying feature encoder, we obtain entity embeddings $\boldsymbol { h _ { g } } \in \mathbb { R } ^ { d }$ , $\pmb { h } _ { l } \in \mathbb { R } ^ { d }$ and relation embeddings $\boldsymbol { r } _ { g } \in \mathbb { R } ^ { d }$ , $\pmb { r } _ { l } \in \mathbb { R } ^ { d }$ by taking the corresponding rows from ${ \pmb H } _ { g } , { \pmb H } _ { l } , { \pmb R } _ { g } , { \pmb R } _ { l }$ . We subsequently detail decoders to generate query representations for the query $( s , r , . . , t )$ .
Since the convolutional score function has shown its effectiveness as decoder in previous work (Vashishth et al., 2020; Li et al., 2021), we employ ConvTransE as decoder:
$$
\begin{array} { c } { { { \pmb q } _ { g } = D _ { g } ( { \pmb h } _ { g } \oplus { \pmb r } _ { g } ) } } \\ { { { \pmb q } _ { l } = D _ { l } ( { \pmb h } _ { l } \oplus { \pmb r } _ { l } ) } } \end{array}
$$
where $D _ { g }$ and $D _ { l }$ denote the decoders for structural and semantic features, respectively, $\pmb q _ { g } \in \mathbb { R } ^ { d }$ and $\pmb q _ { l } \in \mathbb { R } ^ { d }$ denote the query representations from structural and semantic perspectives.
# 3.4 Event-Aware Experts
We suggest that different type of events may require different kind of information for reasoning. For example, historical events often involve complex context which are better captured by LLMs, as demonstrated by our analysis in Section 4.6. Existing methods (Li et al., 2021; Liao et al., 2024) overlook this diversity and thus fail to achieve optimal performance on different types of events consistently. This observation motivates us to design a mechanism that can adaptively handle different types of events. Consequently, we propose eventaware experts in this section to adaptively integrate structural and semantic information from previous steps. As shown in Section 4.4, it effectively enhances the model’s reasoning capability.
Specifically, we divide events into two categories: historical and non-historical (Xu et al., 2023b). We set $M$ experts for historical events and $N$ experts for non-historical events (Zhang et al., 2024b). We employ $\scriptstyle q _ { g }$ as the input to the query-motivated gate, since it captures the evolving structural patterns of each sub-graph over time and records dynamic structural information, thereby can better distinguish event types. The operation of $i ^ { t h } ( i \leq M + N )$ expert is:
$$
\begin{array} { c } { \alpha _ { i } = \sigma ( { \mathbf { q } } _ { g } { \mathbf { W } } _ { i } + b _ { i } ) } \\ { { \mathbf { q } } _ { i } = \alpha _ { i } \cdot { \mathbf { q } } _ { g } + ( 1 - \alpha _ { i } ) \cdot { \mathbf { q } } _ { s } } \end{array}
$$
where $\mathbf { W } ^ { i } \in \mathbb { R } ^ { d \times 1 }$ and $b ^ { i }$ denote the weight matrix and bias term of the gating function to generate the weight $\alpha _ { i }$ that indicating the dependency of this expert on structural information. $\pmb q _ { i } \in \mathbb { R } ^ { d }$ represents the query representation from the $i$ -th expert module, which combines comprehensive views for further prediction. We regard the first $M$ experts as historical experts and the following $N$ as nonhistorical experts, which are adept in handling historical and non-historical events, respectively.
# 3.5 Prediction Expert
While we have studied how to leverage different information based on event types, distinguishing the types of events to be predicted remains challenging since the types of future events are unknown. Previous methods (Xu et al., 2023b) typically employ binary classification to determine event types, making the prediction performance susceptible to classification errors. To address this limitation, we design a prediction expert that adaptively integrates information from different kinds of experts without explicit type classification.
Based on query representations produced by multiple event-aware experts, we finally construct a prediction expert to mix all information and predict future events. Similar to Equation (5), we first adaptively allocate weights to experts with the initial query representation ${ \pmb q } _ { g }$ :
$$
\begin{array} { c } { { \alpha = \sigma ( { \pmb q } _ { g } { \pmb W } + { \pmb b } ) } } \\ { { { \pmb q } = { \pmb \alpha } \cdot [ { \pmb q } _ { 1 } , . . . , { \pmb q } _ { M + N } ] ^ { T } = \displaystyle \sum _ { 1 \leq i \leq M + N } \alpha _ { i } { \pmb q } _ { i } } } \end{array}
$$
where $\mathbf { W } \in \mathbb { R } ^ { d \times ( M + N ) }$ and $ { \mathbf { b } } \in \mathbb { R } ^ { ( M + N ) }$ denote the weight matrix and bias term of the gating function, resulting in $\pmb { \alpha } \in \mathbb { R } ^ { ( M + N ) }$ . Equation (8) combines expert information with the dynamic weights.
The final prediction $\pmb { p } _ { s , r , t }$ for query $( s , r , . . , t )$ is made with the matrix product between $\pmb q$ and $\pmb { H } _ { g }$ :
$$
\pmb { p } _ { s , r , t } = \sigma ( \pmb { q } \cdot \pmb { H } _ { g } )
$$
where $p _ { s , r , t } \in \mathbb { R } ^ { | \mathcal { E } | }$ indicates the probability of the missing object is corresponding candidate within the candidate set $\mathcal { E }$ .
# 3.6 Optimization
In this section, we introduce the loss function for the model optimization. There are two training objectives: 1) Event-aware experts should specialize in corresponding event types. 2) The overall prediction is accurate for TKG reasoning tasks.
We can obtain the partial information of historical and non-historical experts as:
$$
\begin{array} { c } { { \pmb q ^ { h i s } = \displaystyle \sum _ { 1 \leq i \leq M } \alpha _ { i } \pmb q _ { i } } } \\ { { \pmb q ^ { n h i s } = \displaystyle \sum _ { M + 1 \leq i \leq M + N } \alpha _ { i } \pmb q _ { i } } } \end{array}
$$
Then, the event-aware predictions can be obtained similar to Equation (9):
$$
\begin{array} { c } { { { \pmb p } _ { s , r , t } ^ { h i s } = \sigma ( { \pmb q } ^ { h i s } \cdot { \pmb H } _ { g } ) } } \\ { { { \pmb p } _ { s , r , t } ^ { n h i s } = \sigma ( { \pmb q } ^ { n h i s } \cdot { \pmb H } _ { g } ) } } \end{array}
$$
By definition, historical facts refer to events that have occurred before $t$ , while non-historical facts have no prior occurrences at time $t$ . We can calculate the frequency of each fact’s occurrence before $t$ and construct the historical indicator:
$$
F _ { t } ^ { s , r } ( o ) = \sum _ { k < t } | \{ ( s , r , o , k ) | ( s , r , o , k ) \in \mathcal { G } _ { k } \} |
$$
$$
I _ { t } ^ { s , r } ( o ) = { \left\{ \begin{array} { l l } { 1 { \mathrm { i f } } F _ { t } ^ { s , r } ( o ) > 0 } \\ { 0 { \mathrm { i f } } F _ { t } ^ { s , r } ( o ) = 0 } \end{array} \right. }
$$
where $\mathcal { G } _ { k }$ denotes the subgraph at time $k$ . The set $\{ ( s , r , o , k ) | ( s , r , o , k ) \in \mathcal { G } _ { k } \}$ indicates all events happened at timestamp $k$ that contains the object o. $F _ { t } ^ { s , r } ( o )$ denote the summation of the count over these set prior to the current timestamp $t$ , indicating the frequency of the event. The historical indicator $I _ { t } ^ { s , r } ( o )$ can finally distinguish event types if the frequency is non-zero.
To enable event-aware expert modules to specialize in different event types, we compute the expert loss based on corresponding predictions:
$$
\begin{array} { r l } & { \mathcal { L } _ { e } ^ { h i s } = - \displaystyle \sum _ { ( s , r , o , t ) \in \mathcal { G } } y _ { s , r , t } p _ { s , r , t } ^ { h i s } I _ { t } ^ { s , r } ( o ) } \\ & { \mathcal { L } _ { e } ^ { n h i s } = - \displaystyle \sum _ { ( s , r , o , t ) } y _ { s , r , t } p _ { s , r , t } ^ { n h i s } ( 1 - I _ { t } ^ { s , r } ( o ) ) } \end{array}
$$
where $\pmb { y } _ { t } ^ { s , r } \in \mathbb { R } ^ { | \mathcal { E } | }$ is the one-hot ground truth vector of for entity prediction.
Table 1: Statistics of datasets. $| \mathcal { F } _ { h i s } |$ and $\it { R a t e } _ { h i s }$ denote the number and percentage of historical events in test set, respectively, and $\Delta t$ denotes the time granularity of each dataset.
Besides, the prediction with all experts should be accurate, resulting in the major loss:
$$
\mathcal { L } ^ { m } = - \sum _ { ( s , r , o , t ) \in \mathcal { G } } \pmb { y } _ { s , r , t } \pmb { p } _ { s , r , t }
$$
The total loss function is formulated as a weighted sum of the major loss and the auxiliary expert loss:
$$
\mathcal { L } = \mathcal { L } ^ { m } + \omega ( \mathcal { L } _ { e } ^ { h i s } + \mathcal { L } _ { e } ^ { n h i s } )
$$
where $\omega$ is the balancing weight. We detail the training procedure in Appendix A.
# 4 Experiments
# 4.1 Experimental Setup
# 4.1.1 Datasets
We conduct experiments on three public benchmarks ICEWS14 (Garcia-Duran et al., 2018), ICEWS18 (Jin et al., 2020) and ICEWS05- 15 (Garcia-Duran et al., 2018). Table 1 shows the statistics of these datasets. All these three datasets show a balanced distribution between historical and non-historical facts in their test sets, with historical event ratios ranging from $4 1 . 6 \%$ (ICEWS14) to $5 4 . 0 \%$ (ICEWS05-15), making them suitable for evaluating the prediction of different event types.
# 4.1.2 Evaluation Metrics
Following previous works (Jin et al., 2020; Li et al., 2021), we employ the Mean Reciprocal Rank (MRR) and Hits $@ \mathbf { k }$ $( \mathrm { H } @ \mathrm { k } )$ as the evaluation metrics. MRR calculates the average of the reciprocal ranks of the first relevant entity retrieved by the model, while $\operatorname { H @ K }$ calculates the proportion of the correct entity ranked in the top k. The MRR metric is not available for LLM-based methods as they directly generate entities rather than ranking candidates. Detailed definitions of these metrics are shown in Appendix B.1.
# 4.1.3 Baselines
To conduct a comprehensive comparison, we select nine up-to-date TKG reasoning methods, including five graph-based methods and four LLMbased methods. For graph-based models, we choose RE-Net (Jin et al., 2020), REGCN (Li et al., 2021), CENET (Xu et al., 2023b).For LLMbased models, we choose ICL (Lee et al., 2023), CoT (Luo et al., 2024) and GenTKG (Liao et al., 2024).We also compare with two straightforward baselines introduced in Appendix B.2, namely “Naive” and “LLM-MLP”.
# 4.1.4 Implementation Details
For the structural encoder $G$ , we employ an efficient graph-based model RE-GCN (Li et al., 2021). The hidden dimension $d$ is 100. The dropout of each GCN layer is set to 0.2. To maintain the stability of structural features, the structural encoder is trained for 500 epochs and then frozen. For the semantic encoder, we utilize LLaMA-2-7B $\mathit { \Delta } d _ { L L M } = 4 0 9 6 )$ coupled with a two-layer MLP as the adapter. The decoder $D _ { g }$ and $D _ { l }$ are ConvTransE with 50 channels and kernel size of 3. For event-aware experts, $M = N = 1$ . All experiments are conducted on an NVIDIA A100 GPU, with the learning rate set to 0.001. Our results are averaged over three random runs.
# 4.2 Main Results
The experimental results for entity prediction are presented in Table 2. Based on these results, we observe the following findings:
• MESH achieves state-of-the-art performance on ICEWS14 and ICEWS18, surpassing all graphbased and LLM-based baselines. It maintains competitive performance on ICEWS05-15, only second to TiRGN (Li et al., 2022). Moreover, our proposed model can be integrated with any structural or semantic encoder, improving methods like TiRGN as shown in Table 3. • MESH significantly outperforms our structural encoder RE-GCN on all results. After incorporating semantic information, our model outperforms RE-GCN with improvements of $2 . 4 7 \% / 1 . 5 5 \%$ in MRR, $3 . 5 5 \% / 1 . 6 3 \%$ in $\mathrm { H } @ 3$ , and $2 . 8 1 \% / 1 . 8 5 \%$ in $\mathrm { H @ 1 0 }$ on ICEWS14/ICEWS18, respectively. These results demonstrate that the strong understanding capability of LLMs can effectively enhance the model’s power of prediction.
Table 2: TKG reasoning performance (with time-aware metrics) on ICEWS14, ICEWS18, ICEWS05-15. The best results are in bold and the second best results are underlined. Results are averaged over three random runs $( \mathsf { p } <$ 0.05 under t-test).
• Compared to LLM-based methods, our approach demonstrates superior performance across all available metrics. It demonstrates the importance of structural information in TKG reasoning. Notably, LLM-based methods show consistently lower scores on $\mathrm { H @ 1 0 }$ compared to graph-based approaches, revealing their inherent limitations in maintaining comprehensive historical knowledge. This observation aligns with the catastrophic forgetting during continual learning of LLM, indicating their difficulty in effectively leveraging complete historical patterns during reasoning.
• Our method addresses this limitation of LLMbased model by integrating structural information with global information of graph. Moreover, our model significantly reduces the inference cost. While these LLM-based methods usually require over 12 hours for inference on ICEWS14, our approach completes the same task within minutes.
# 4.3 Compatibility Study
To validate the compatibility of MESH, we conduct experiments with different structural and semantic encoders, as shown in Table 3. Our default configuration employs RE-GCN as the structural encoder $G$ in Equation (1) and LLaMA2-7B as the semantic encoder in Equation (2). For structural encoders, TiRGN shows superior performance over RE-GCN across all metrics. For semantic encoders, Stellaen-1.5B-v5 (Zhang et al., 2024a) slightly outperforms LLaMA2-7B. When integrating these alternative encoders into our framework, we observe consistent improvements. Specifically, replacing RE-GCN with TiRGN leads to better performance, achieving an MRR of $4 4 . 9 7 \%$ , $\mathrm { H } @ 3$ of $5 0 . 7 8 \%$ , and $\mathrm { H } @ 1 0$ of $6 5 . 5 4 \%$ . Incorporating Stella also brings performance improvements. Since structural encoders (e.g., RE-GCN, TiRGN) generally outperform semantic encoders (e.g., LLaMA2, Stella), the improvements are less significant compared to those obtained from better structural encoders. Overall, our method achieves consistent performance gains by integrating both structural and semantic encoders, compared to using a single encoder alone. These experimental results strongly support our claim that our model is not limited to specific structural or semantic encoder, allowing MESH to effectively integrate various advanced encoder modules.
Table 3: Compatibility study on ICEWS14.
# 4.4 Ablation Study
Table 4 shows the ablation studies of our proposed model. First, we remove the semantic or structural information obtained from Equation (1)/(2), denoted as w/o Semantic Info or w/o Structural Info. It leads to a $2 . 4 7 \% / 4 . 5 9 \%$ decrease in MRR, indicating that both types of information are complementary and crucial for accurate predictions. Next, we drop the specialization of event-aware experts as w/o Event-aware, specifically by removing the auxiliary expert loss in Equation (19). Finally, we omit the prediction expert, i.e., replace $\pmb q$ in Equation (9) with the average of $\{ \pmb q _ { i } \}$ , denoted by w/o Prediction Expert. It leads to decreases of $0 . 9 2 \%$ in MRR, $1 . 5 3 \%$ in $\mathrm { H } @ 3$ , and $1 . 5 4 \%$ in $\mathrm { H @ 1 0 }$ , indicating the importance of adaptively integrating multiple event-aware experts. Appendix C.1 provides the further studies on gating inputs.
Figure 3: Sensitivity analysis results of $\omega$ on ICEWS14.
Table 4: Ablation study on ICEWS14.
# 4.5 Sensitivity Analysis
To explore the sensitivity of MESH to the expert loss weight $\omega$ in Equation (19), we conduct experiments by varying the value of $\omega$ from 0.2 to 2.0. As shown in Figure 3, we evaluate the model performance on ICEWS14 using MRR and $\mathrm { H } @ 3$ . The MRR varies between $4 3 . 9 7 \%$ and $4 4 . 3 6 \%$ , while $\mathrm { H } @ 3$ varies between $4 8 . 9 1 \%$ and $4 9 . 8 1 \%$ . The results show that our model maintains stable performance across different values of $\omega$ . As $\omega$ increases, the model performance first improves and then declines, achieving the best results when $\omega = 1$ . This trend indicates that the model performs best when the expert loss and prediction loss are weighted equally, showing that maintaining a balanced scale between these two loss components is better.
Table 5: Different event types on ICEWS14.
Table 6: T-test for $\alpha$ on ICEWS14
# 4.6 In-depth Analysis on Event Types
In this section, we conduct three experiments to validate our claim that different event types require distinct types of information, and to explore the optimal number of experts.
Performance on Different Events. As shown in Table 5, we observe distinct performance between graph-based methods and LLM-based methods on different types of events. Graph-based methods like RE-GCN demonstrate strong capability in capturing evolution patterns through their structural modeling, while LLM-based models (e.g., GenTKG) excel particularly at modeling historical events due to their powerful representation learning but show limited generalization to non-historical scenarios. Our proposed method achieves consistent improvements in both scenarios, suggesting its effectiveness in learning specific reasoning patterns for different types of events. This balanced performance can be attributed to our model’s ability to leverage both structural patterns and semantic information effectively, bridging the gap between historical/non-historical events.
Statistic Analysis of Prediction Expert. In this part, we present a statistical analysis to demonstrate the ability of the prediction expert to predict different event types with varied patterns. We performed a t-test on $\alpha _ { 1 }$ , as shown in Table 6. $\alpha _ { 1 }$ refers to the weight computed in Equation (7), which is assigned to the historical expert for prediction. As shown in the ‘Mean’ row, we observe that the mean value of $\alpha _ { 1 }$ for historical events is relatively higher than that for non-historical events. With standard deviations calculated from 7,371 samples, we conducted a ttest with the alternative hypothesis that ‘The mean weight for historical events is greater than that for non-historical events’, which was validated with a highly significant p-value $( p < 0 . 0 0 1 )$ .
Table 7: Expert configuration tests on ICEWS14.
Sensitivity Analysis on Event-aware Experts Configuration. In this part, we analyze the experiment results of varying the number of historical/non-historical expert modules. As shown in Table 7, the optimal performance is achieved with $( M , N ) = ( 1 , 1 )$ . As the number of experts increases, the prediction performance tends to decrease, indicating that complex combinations of expert modules are not necessary for the TKG Reasoning task. In fact, increasing the number of experts may lead to parameter redundancy and raise the risk of overfitting. | Temporal knowledge graph reasoning aims to predict future events with
knowledge of existing facts and plays a key role in various downstream tasks.
Previous methods focused on either graph structure learning or semantic
reasoning, failing to integrate dual reasoning perspectives to handle different
prediction scenarios. Moreover, they lack the capability to capture the
inherent differences between historical and non-historical events, which limits
their generalization across different temporal contexts. To this end, we
propose a Multi-Expert Structural-Semantic Hybrid (MESH) framework that employs
three kinds of expert modules to integrate both structural and semantic
information, guiding the reasoning process for different events. Extensive
experiments on three datasets demonstrate the effectiveness of our approach. | [
"cs.CL"
] |
# 1 Introduction
The dream of automating software engineering (SE) has long captivated both the SE and artificial intelligence (AI) communities [1, 2, 3]. Recent advancements in Large Language Models (LLMs) have shown promising results, particularly in code generation at the function level, with models achieving resolution rates above $90 \%$ on benchmarks such as HumanEval [4]. Unfortunately, real-world SE tasks extend far beyond isolated functions or self-contained code files. This is exemplified by repository-level issue resolution [5, 6], which encompasses not only software maintenance—addressing bugs and technical debt—but also software evolution, which involves introducing new features and enhancements [7].
The complexity of repository-level coding tasks has led researchers and practitioners to assume that sophisticated strategies are necessary for their completion [8]. Indeed, current leading approaches typically utilize LLM agents powered by proprietary models like GPT-4/4o [9] and Claude 3.5 Sonnet [10]. These agents are designed to leverage tools, execute commands, observe environmental feedback, and plan subsequent actions [11]. Nevertheless, these methods suffer from two problems. First, the agent-driven mechanism introduces unpredictability in decisionmaking [2]. As the reasoning processes become intricate in tackling complex problems, the accumulation of errors can hinder the generation of optimal solutions [12]. Second, the reliance on closed-source models creates substantial barriers for the broader SE community [13, 14], including limited accessibility, inability to enhance or customize models for specific tasks, and serious security concerns regarding the privacy of sensitive code repositories when interacting with external API services.
The above two challenges lead to a bold question: Can open-source LLMs be employed in an agentless manner to complete repository-level coding tasks? At first glance, this seems improbable. Closed-source agent-based approaches can resolve up to $5 5 \%$ of issues on the popular SWE-bench Lite benchmark2 for issue fixing, whereas existing methods using open-source models have only achieved a maximum resolution rate of $3 0 . 6 7 \%$ as of May 2025 [15].
Despite these initial reservations, we posit that the answer is “Yes”, and the key lies in empowering the open-source LLMs to fully comprehend code repositories, not just the information within individual functions and files, but also the dependencies across functions and files. To move forward to this goal, we propose Code Graph Models (CGMs), to jointly model the semantic and structural information of code repositories. Specifically, we first construct a code graph for each repository, which characterizes the hierarchical and reference dependencies between code entities. We then develop a method to integrate this graph into the LLM through two key mechanisms. (i) Semantic Integration: Node attributes (containing code or comments) are first encoded by a pretrained text encoder and then mapped to the LLM’s input space via an adapter, enabling the model to understand the semantic information of all nodes. (ii) Structural Integration: The graph structure is incorporated into the LLM through the attention mask, allowing direct message passing only between neighboring nodes in each layer of the LLM, similar to spatial Graph Neural Networks (GNNs) [16]. The entire system— comprising the text encoder, adapter, and LLM decoder—is then fine-tuned using Low Rank Adaptation (LoRA) [17]. The resulting CGM can tackle repository-level coding tasks by using both the code graph and user instructions (text format). To further augment the abilities of the CGM, we develop a specially designed Graph Retrieval-Augmented Generation (RAG) framework, consisting of four modules: Rewriter, Retriever, Reranker, and Reader (i.e., CGM). The first three modules focus the CGM on the subgraph that is most pertinent to the user’s query or issue.
Our approach has demonstrated remarkable results on the SWE-bench Lite benchmark, reaching a $4 3 . 0 0 \%$ resolution rate using the open-source Qwen2.5-72B model and our agentless RAG framework. As of May 2025, this performance ranks first among methods utilizing open-source models, second among methods with open-source code implementations (the underlying model may still be closed-source), and eighth overall. Notably, our approach surpasses the previous best method based on open-source models (Moatless+DeepSeek-V3 [15]) by $12 . 3 3 \%$ , despite that method employing DeepSeek-V3, which shows stronger performance than Qwen2.5-72B.
The main contributions of this work are as follows:
• We propose CGMs, a novel architecture that seamlessly integrates repository code graphs with open-source LLMs through semantic and structural integration. Its modular design allows independent replacement of components—including the encoder and adapter—providing flexibility that may further enhance performance.
• We develop an agentless Graph RAG framework that enhances the CGM’s performance by focusing on the most relevant subgraphs for user queries.
• Our CGM, armed with the Graph RAG, achieves a $4 3 . 0 0 \%$ resolution rate on SWE-bench Lite, surpassing most agent-based approaches. We also demonstrate its effectiveness on other repository-level tasks such as code completion.
# 2 Related Works
# 2.1 Large Language Models for Code
Recent advancements in LLMs have shown remarkable success in generating code at self-contained function or file levels [3]. This includes powerful closed-source models like GPT-4/4o [9], Gemini-2.0 [18], and Claude 3.5 Sonnet [10], as well as open-source alternatives such as Llama 3.1 [19], Qwen 2.5 [20], and DeepSeek-V3 [21]. Additionally, code-specialized open-source models have also emerged, including StarCoder [22, 23], DeepSeek-Coder [14, 24], and Qwen-Coder [25]. However, these models struggle with repository-level coding tasks that better reflect practical software development scenarios. Even the most capable closed-source models achieve only modest success rates on the SWE-bench Lite benchmark [5] for real-world issue fixing, while open-source models lag further behind with a maximum resolution rate of $2 6 \%$ [26]. Although closed-source models show superior performance, their limited accessibility and data privacy concerns hinder widespread adoption in the SE community. Furthermore, their proprietary nature prevents fine-tuning on task-specific data to improve performance, if even such data is available.
For open-source LLMs to better handle repository-level tasks, they must develop a comprehensive understanding of both semantic and structural information within codebases. DeepSeek-Coder [14] has attempted to address this challenge by pre-training models on topologically sorted repository codes. However, this approach faces two major limitations: real-world repositories often contain more code than can fit within the model’s maximum context length; and the conversion of repository structure into text format tends to obscure explicit dependencies that exist in the codebase.
To overcome these challenges, we propose representing repositories as text-rich graphs and aligning them with LLMs via self-supervised continual pre-training. This approach preserves code repository structure while enabling more effective processing and understanding of complex dependencies.
# 2.2 Graphs in Code Language Models
The integration of graph structures into code language models can be classified into three main approaches [27]: (1) attention mask modification, (2) graph-to-text conversion, and (3) positional encoding augmentation. In the first approach, models like GraphCodeBERT [28] and StructCoder [29] modify attention masks to capture relationships between code tokens in Abstract Syntax Trees (ASTs) and Data Flow Graphs (DFGs). The second approach, demonstrated by TreeBERT [30] and UniXcoder [31], transforms ASTs or node paths into textual sequences that can be processed by language models. The third approach, exemplified by TPTrans [32], leverages relative positional encodings to represent structural relationships within ASTs.
While these approaches have shown promise, they primarily focus on Transformer encoders and small-scale language models (such as BERT or CodeT5) and are limited to file- or function-level tasks. In contrast, our work enhances decoder-only LLMs to handle repository-level tasks. We construct text-rich code graphs for entire codebases, moving beyond simple ASTs or DFGs. Inspired by GraphCodeBERT and StructCoder, we incorporate graph structures through attention masks in LLMs. However, due to the text-rich nature of the graphs, each node’s text or semantic information is processed by a pretrained text encoder and then projected onto the LLM’s input space via an adapter.
# 2.3 Agent-drive Methods for Software Engineering
LLM-based agents like Devin [33] have shown the potential to solve real-world SE problems through their reasoning [34, 35] and interactive capabilities [36, 37, 38, 11]. Along this direction, researchers have worked to enhance LLM agents through various approaches, including specialized agent-computer interfaces (ACI) [39, 40, 8, 41], fine-grained search [42, 12, 11], and expanded action spaces [43].
However, these agent-based approaches face several drawbacks. First, they typically delegate decision-making to the agents, allowing them to determine both the timing and nature of actions. While agents base their decisions on previous actions and environmental feedback, the expansive action space and complex feedback mechanisms can lead to repetitive behaviors or accumulating errors, ultimately resulting in suboptimal solutions [12]. Second, resolving a single issue often requires 30-40 interaction turns, making the process time-consuming and complicating the identification of specific turns that resulted in unsatisfactory outcomes [2]. Third, the inherent unpredictability of agent behavior and reliance on closed-source models creates obstacles for leveraging data to improve performance, despite the abundance of such data in practice, such as issue-patch pairs for issue fixing [5]. While SWE-Gym [44] attempts to address trainability, it may introduce bias by only training with the trajectories that lead the SWE agent to correct answers. As a remedy, we propose the CGM, built on open-source LLMs and enhanced through an agentless Graph RAG framework.
# 2.4 Agentless Methods for Software Engineering
Agentless models offer a more controlled approach to simulating real-world SE processes by following well-defined, fixed steps rather than relying on LLM agents to make autonomous decisions or use complex tools. They help avoid the issues of unpredictability and lengthy interaction chains. These methods typically operate in two main stages: localization and editing [45]. The localization stage identifies relevant code snippets within a repository, while the editing stage generates or modifies code based on these identified sections. This framework is particularly effective for repository-level code completion tasks, especially when combined with RAG [46, 47]. For more complex tasks like issue fixing, enhanced approaches with additional steps exist [2, 1]. For instance, Agentless [2] implements a comprehensive ten-step pipeline, dedicating four steps to improving localization accuracy. This method has achieved a promising resolution rate of $4 0 . 6 7 \%$ on SWE-bench Lite, comparable to state-of-the-art (SOTA) agent-based methods, though it relies on the closed-source model Claude-3.5 Sonnet.
Figure 1: An example of our repository-level code graph, where “PKG”, “FUNC”, and “T-FILE” represents “PACKAGE”, “FUNCTION”, and “TEXTFILE” respectively. In this graph, solid lines represent hierarchical dependencies (i.e., contains), while dashed lines represent reference dependencies (calls/imports/extends).
Recent research has also focused on enhancing code understanding by incorporating structural information through graph-enhanced repository modeling [48, 49, 45]. However, even when graph structures are used during retrieval, existing methods typically flatten the retrieved code snippets into linear text sequences for downstream model prompting. This flattening process fails to preserve the inherent heterogeneity between graph and text modalities. As a remedy, we propose the CGM that explicitly aligns these two distinct modalities, enabling better preservation and utilization of structural information throughout the entire process.
# 3 Code Graph Construction
Before delving into the CGM, it is crucial to understand the repository-level code graph that CGM utilizes and the process of its construction. The primary aim of this code graph is to offer a structured representation of the structural and semantic information inherent in complex codebases.
We represent each repository as a directed graph $G = ( V , E )$ , where $V$ is the set of distinct entities in the codebase and $E$ is the set of edges between these entities. To be specific, the code graph includes up to seven types of nodes and five types of edges (details are provided in Appendix B). The node types vary in granularity, ranging from the repository level (REPO) to fine-grained attributes. The edge types comprise both hierarchical (i.e., contains) and reference dependencies (calls/imports/extends).
As shown in Figure 1, the hierarchical dependencies (i.e., the solid edges) span the code graph. In other words, all nodes are interconnected by edges reflecting hierarchical dependencies, establishing a top-down tree structure. This structure mirrors the organization of code entities as dictated by file systems and programming language syntax rules. Building this tree graph begins with AST parsing [48]. During this phase, code entities and their hierarchical dependencies are identified in a recursive manner: the root node (i.e., REPO) is added to the graph first, followed by its children (i.e., PKG and T-FILE), until all nodes without descendants (i.e., FUNC) are processed. With each recursion, directed edges are added from parents to children.
On the other hand, reference dependencies (i.e., the dashed edges) capture interactions between different entities, such as class inheritance, function calls, and module imports. Unlike hierarchical edges, which maintain a vertical hierarchy, reference edges create horizontal connections that may introduce cycles, such as those caused by recursive calls. These edges are typically not part of an AST. To derive them, we conduct a lightweight semantic analysis to resolve symbols, such as references or calls to classes and attributes. Once a target symbol is identified, an edge is added from the source node to the target node in the code graph.
Concerning node attributes, we retain the original content and line range of each node. This approach enables explicit graph traversal and retrieval and facilitates training models with enhanced semantic understanding capabilities. During post-processing, we remove the text contained in the child nodes from the parent nodes within the tree graph derived from the hierarchical dependencies. The resulting code graph is a text-rich graph [50] in which each node encapsulates a corresponding code snippet.
# 4 Code Graph Models (CGMs)
In this section, we elaborate on the architecture of the Code Graph Model (CGM), the training strategy we adopted, and how we enhance the CGM via the Graph RAG framework.
$\odot$ ISSUE: Modeling‘s \`separability_matrix\` (b) Retriever Selected Subgraph (d) Reader / Code Graph Model (CGM)
dnoesetsendoCt ocompoputnedsMepoadrealsbiClitoynsciodreretcthley for IEnxfteraecrtoNroNdeosdes 福 Adjacency Matrix AGttreanptiho-nawMaarsek TNeoxdteTEomkebned
following model: \`\`\`python ? Structural Integration Node Token
from astropy.modeling import models ... 5 Trainable 国
(a) Rewriter Extractor Nodes Subgraph + Semantic Integration Frozen Code Entity & Keywords Inferer Text ® LoRA E A LoRA E Text Encoder Code Entity:aastsrtrooppy/y/mooddeleiling/g/msoepdaelrsa.pblye..p.y S LoRA E A A Adapter Keywords: separability_matrix nested_models ... Subgraph Z LoRA E A D D LLM-Decoder 8 B (c) Reranker Text of D Texts Selected Files {Gqivuenryt}he contexts: [node token] Extractor Inferer File1.py File2.py FileN.py 8 8 The following files may need to be 12: Fiulencntiaomnes coornmtaeitnhiondgs‘hsaepnadrlianbgl.e..py’. SFtilaegeN1a:me Rank √ File SkeletioSntaRgaen2k: mPordoifimedp:t {fsoerletchted“_xfixlxes”}task Answer
# 4.1 Model Architecture
The architecture of the CGM is illustrated in Figure 2(d). CGM takes the code graph as inputs, enhancing the LLM’s comprehension of both semantic and structural information within the graph. Below, we detail how CGM integrates both aspects into the LLM.
Semantic Integration: The code graphs are text-rich, with semantic information only residing in the textual contents of the nodes. As shown in Figure 2(d), we integrate the node information into the LLM decoder $\mathcal { D }$ by transforming node text into node tokens through an encoder $\mathcal { E }$ and an adapter $\mathcal { A }$ .
Specifically for the encoder, we utilize the pretrained encoder from CodeT $5 +$ [51], chosen for its proven effectiveness in processing both source code and text (comments and documentation). For nodes containing lengthy text, we segment the content into chunks of 512 tokens each. These chunks are processed independently by the encoder. To maintain graph consistency, we duplicate the source node for each chunk, preserving identical connections to other nodes. The chunks within a node are fully connected, and their sequential order is maintained through position embeddings in the LLM decoder $\mathcal { D }$ . We fine-tune the encoder using Low-Rank Adaptation (LoRA) [17] to optimize its performance for downstream tasks.
The adapter $\mathcal { A }$ serves as a bridge between the encoder and LLM, projecting encoder outputs into the LLM’s input embedding space. Following successful practices in Vision Language Models (VLMs) [52, 53], we implement the adapter as a two-layer MLP with GELU activation [54]. The adapter is trained from scratch with random initialization.
Unlike VLMs, which bridge different modalities, CGM’s encoder $\mathcal { E }$ and decoder $\mathcal { D }$ are of the same modality, simplifying the alignment process. Furthermore, we compress each 512-token chunk (shown as gray tokens in Figure 2(d)) into a single node token (black tokens in Figure 2(d)) for the LLM decoder. This compression effectively extends the LLM’s context length by a factor of 512, enabling the processing of extensive code repository contexts. Similar techniques, referred to as soft prompt compression, have been shown to enhance long-context modeling in recent studies [55, 56, 57].
Structural Integration: Besides node information, another challenge is integrating the graph structure into the LLM decoder $\mathcal { D }$ . While LLMs excel at processing sequential data, they are not inherently designed to capture graph structures [50]. Traditional approaches have attempted to incorporate repository-level structural information by simply linearizing code snippets into sequences [14, 45], but this transformation often fails to preserve the explicit relationships between code entities.
To better maintain structural relationships, we introduce a graph-aware attention mask to replace the causal attention mask solely between node tokens in the LLM. This mask is derived from the code graph’s adjacency matrix, taking into account the node duplication process described earlier. We then fine-tune the LLM with LoRA to adapt it to both the new attention mechanism and the node tokens from the adapter $\mathcal { A }$ . This approach ensures that attention occurs only between neighboring nodes in the code graph, mimicking the message passing mechanism frequently used in spatial GNNs [58, 59].
# 4.2 Training Strategies
Given the pretrained encoder $\mathcal { E }$ and decoder $\mathcal { D }$ , the training of the CGM consists of two main phases:
Subgraph Reconstruction Pre-training: This phase focuses on training the CGM to effectively capture both the semantic and structural aspects of code graphs. To achieve this, we introduce a novel pre-training task that requires the model to reconstruct code content from its corresponding code graph, a process we refer to as Graph-to-Code.
In this task, the inputs are subgraphs randomly sampled from large-scale code graphs, with a limited number of nodes. This constraint ensures that the corresponding output code remains below 8,000 tokens, allowing for computational efficiency and manageable context sizes during training. To enhance the meaningfulness of the output code, we employ a hierarchical approach that preserves the inherent dependencies within the code graphs as they are translated into text. Concretely, for higher-level nodes (e.g., REPO and PACKAGE), we position them at the beginning of the output sequence or their respective files to maintain hierarchical consistency. We then utilize the approach from DeepSeek-Coder [14] to perform topological sorting on all file nodes, thereby establishing a structured order for the code content. Lastly, intra-file nodes (e.g., CLASS and FUNCTION) are sorted by line numbers and concatenated within their respective files, culminating in a coherent text sequence that accurately represents the sampled subgraph.
Noisy Fine-tuning: This phase fine-tunes CGM on real-world issue-patch pairs [5], adapting it to practical software debugging and code editing tasks. As displayed in Figure 2(d), the model learns to generate code patches based on two inputs: (i) a subgraph and (ii) a text prompt that indicates the “oracle” files—files that require modification according to the ground-truth patch. The subgraph is constructed by combining the oracle files, their downstream nodes, and one-hop neighbors from the repository-level code graph. To improve model robustness, we intentionally introduce noise into the prompts: $10 \%$ include an irrelevant file that doesn’t require modification, while another $10 \%$ omit at least one oracle file. This controlled noise exposure helps the model better generalize to real-world scenarios where inputs may be incomplete or contain irrelevant information.
# 4.3 The Graph RAG Framework
This section presents our Graph RAG framework, a streamlined extension of CGM designed for automated resolution of real-world repository tasks. The framework consists of four core modules: Rewriter, Retriever, Reranker, and Reader (the proposed CGM). This compact architecture stands in contrast to the SOTA agentless method, which requires ten distinct steps [2].
As illustrated in Figure 2, the framework operates sequentially. First, Rewriter enhances the original issue description to help Retriever identify relevant nodes in the code graph. Retriever then constructs a connected subgraph using both lexical and semantic search techniques. This subgraph serves as input for both Reranker and Reader. Reranker analyzes the subgraph to identify the Top $K$ files likely to be modified. Finally, Reader (CGM) generates the code patch using both the subgraph from Retriever and the selected files from Reranker. Rewriter and Reranker are implemented by prompting the open-source Qwen2.5-72B-instruct [20], while the semantic search in Retriever utilizes the open-source CGE-Large model [60]. In Appendix D, we provide a case study on how CGM solve a specific issue from scratch. Meanwhile, we report the computational costs of our framework, including the cost of code graph construction, in Appendix C.4
Rewriter comprises two subcomponents: Extractor and Inferer, as illustrated in Figure 2(a). Extractor identifies key code elements from the user query, including file names, function names, and relevant keywords. Inferer then enriches the query’s semantics by providing more detailed functional descriptions. The specific prompts for both components are detailed in Appendix F.
Retriever generates a connected subgraph from the code graph for subsequent modules. As shown in Figure 2(b), Extractor nodes (blue nodes) are first identified through string matching with the code elements and keywords extracted earlier. Next, Inferer nodes are located (red nodes) through semantic search, comparing the Inferer’s output with each node’s textual information. These anchor nodes are then expanded to include their one-hop neighbors, capturing local programming dependencies [61]. To ensure connectivity and incorporate upstream information, these expanded nodes are connected to the Root node (REPO in Figure 1). Finally, each FILE node in the subgraph is expanded to include all its internal nodes, aligning with Reranker’s file-level output. The result is a repository-enhanced context subgraph representing the user query, asdenoted by the shaded nodes in Figure 2(b).
Table 1: Performance comparison of open source system on SWE-bench Lite and Verified. CS-3.5 denotes Claude-3.5- Sonnet-20241022, DS-V3 represents DeepSeek-V3, Q2.5C-32B means Qwen2.5-Coder-32B and Q2.5-72B stands for Qwen2.5-72B-Instruct. The icons $\Cup$ and $\underline { { \widehat { \mathbf { a } } } }$ denote open and closed-source models, respectively.
Reranker further refines the subgraph generated by Retriever, selecting only the Top $K$ files deemed most likely to be revised. This refinement is necessary because Retriever’s output includes files that may only be referenced and not modified. Reranker operates in two steps: first, it selects $K = 1 0$ files based on the original user query and file names; next, it narrows this selection down to $K = 5$ files by individually scoring each one according to how relevant its file skeleton [2] is to the user query. The specific prompt for Reranker can be found in the Appendix F.
Reader receives two inputs: the subgraph from Retriever as node tokens (black tokens) and the selected files with their full contents as text tokens (gray tokens), as depicted in Figure 2(d). These inputs are combined using the prompt template in the white box on the left of the figure. The graph and text tokens complement each other by providing global and local information related to the user query. Using this comprehensive information, Reader (i.e., the CGM) generates the final response.
# 5 Experiments
In this section, we assess the performance of the CGM on two primary tasks: repository-level issue resolution and code completion, for both Python and Java programming languages. We also conduct a series of ablation studies to validate the effectiveness of the model design and training strategies.
# 5.1 Repository-Level Issue Fixing
This section evaluates the proposed CGM against other SOTA methods in resolving real-world software issues. We use three benchmark datasets: SWE-bench Lite, containing 300 issues from 11 Python repositories, SWE-bench Verified, containing 500 issues from 12 Python repositories, and SWE-bench-java Verified, comprising 91 issues from 6 Java repositories. All benchmarks utilize developer-written unit tests to verify the correctness of model-generated patches, ensuring rigorous evaluation. Performance is measured using the resolution rate $( \% \mathbf { R } )$ , defined as the percentage of successfully resolved issue instances. We present results for two variants of our model: CGM-Multi, trained for both issue resolution and code completion tasks across Python and Java repositories, and CGM-SWE-PY, specifically optimized for Python issue resolution. Detailed information regarding the datasets and implementations can be found in Appendix C.5.
As shown in Table 1(a), our CGM-SWE-PY model achieves a $43 \%$ resolution rate on SWE-bench Lite, placing it first among methods utilizing open-source models, second among those that implement open-source methods but use closed-source models, and eighth overall. Notably: (i) When compared to other methods based on open-source models, CGM-SWE-PY outperforms Moatless+DeepSeek-V3 by $12 . 3 3 \%$ [15], despite DeepSeek-V3’s generally superior performance in various coding benchmarks compared to our LLM decoder Qwen2.5-72B [21]. Furthermore, it exceeds Lingma SWE-GPT by $21 \%$ , even though the latter employs carefully curated COT (chain-of-thought) data to boost Qwen2.5-72B’s effectiveness in issue resolution. (ii) In relation to other agentless frameworks, CGM-SWE-PY slightly surpasses Agentless $^ +$ Claude-3.5-Sonnet by $2 . 3 3 \%$ and significantly outperforms Agentless+GPT-4o by $1 1 . 0 0 \%$ . This achievement is particularly noteworthy given that Agentless leverages a complex ten-step pipeline with more powerful closed-source models, while CGM-SWE-PY operates on a simpler four-step Graph RAG framework. We attribute this success to CGM’s enhanced capacity to interpret both semantic and structural information within repositories. (iii) While the top methods on SWE-bench Lite are entirely closed-source regarding both models and implementations, CGM-SWE-PY’s results are within $10 \%$ of these systems. This indicates that CGM-SWE-PY has the potential to compete with leading agent-based methodologies. Compared to other open-sourced model-based methods, CGM significantly narrows the gap between open-source models and closed-source methods in issuefixing scenarios. (iv) Our multi-task model, CGM-Multi, achieves a resolution rate of $3 6 . 6 7 \%$ on SWE-bench Lite, ranking 23rd overall. The relatively lower performance compared to CGM-SWE-PY can be attributed to its broader focus, which encompasses both issue fixing and code completion tasks across Python and Java repositories. (v) We further apply CGM-SWE-PY to a larger Python benchmark—SWE-bench Verified in Table 1(b), where CGM-SWE-PY ranks first again among open weight models, and fifth among methods with open-source system.
Table 2: Performance evaluation on SWE-bench-java Verified. DS-V2 denotes DeepSeek-Chat-V2, DSC-V2 represents DeepSeek-Coder-v2, GPT-4o refers to GPT-4o-2024-05-13, DB-128K stands for Doubao-Pro-128k, and GPT-4o-MINI indicates GPT-4o-MINI-2024-07-18. The icons $\Cup$ and denote open-source and closed-source methods or models, respectively.
Table 3: Performance comparison on CrossCodeEval and ComplexCodeEval benchmarks. DeepSeek-236B represents DeepSeek-V2.5-236B, Mixtral-123B denotes Mistral-Large-Instruct-2411, and Qwen2.5-72B refers to Qwen2.5-72BInstruct. Baseline models are evaluated using FIM (Fill-in-Middle) and one-hop expansion.
In the SWE-bench-java evaluation for Java repositories as shown in Table 2, CGM-Multi records a resolution rate of $1 4 . 2 9 \%$ , significantly outclassing SWE-Agent build upon both closed-source and open-source models. These findings further substantiate the effectiveness of our proposed GCM and the specially designed Graph RAG framework.
# 5.2 Repository-Level Code Completion
In this section, we evaluate the CGM’s performance on code completion tasks at the repository level for both Python and Java programming languages. Our evaluation uses two benchmarks: CrossCodeEval and ComplexCodeEval. Concretely, CrossCodeEval focuses on cross-file code completion, while ComplexCodeEval encompasses more intricate tasks, including API recommendations and test case generation. Performance is measured using two metrics: Edit Match (EM) and Edit Similarity (ES), evaluating how similar the generated code is to the ground-truth code. Detailed information regarding the datasets, metrics, and the implementation of baseline models can be found in Appendix C.6.
Table 3 presents the results for CGM-Multi, which uses Qwen2.5-72B-instruct as its LLM decoder. We compare it with similarly sized large language models, including Mistral-Large-Instruct-123B, DeepSeek-V2.5-236B, and the standalone Qwen2.5-72B-instruct. For all models, context retrieval for code completion is performed by identifying one-hop neighbors of the target file (that requires completion) in the code graph. While CGM-Multi processes the entire subgraph as input, baseline models only receive the textual content from the nodes. Results show that CGM-Multi performs on par with or exceeds other models on CrossCodeEval. More importantly, it greatly outperforms the
Table 4: Comparison of CGM with RAG variants on CrossCodeEval. Results are reported for Java and Python across multiple base models. Evaluation metrics include EM and ES.
# baseline models on ComplexCodeEval, demonstrating its superior capability in handling complex tasks through comprehensive subgraph analysis.
Next, we evaluate CGM against other RAG methods for CrossCodeEval. The comparison includes several established systems: BM25, the default retrieval method in CrossCodeEval [62]; RLcoder [63], which employs reinforcement learning for retrieval optimization; RepoFuse [64], which integrates code graphs during retrieval but converts retrieved code snippets into linear text sequences; and R2C2 [65], which combines retrieved code snippets with Tree-sittergenerated abstract context as the input to the LLM. In our CGM implementation, we still construct input subgraphs by combining target files with their one-hop neighbors. We evaluate these methods using various base models for generation, including CodeLlama-7B, StarCoder-7B, DeepSeek-Coder-7B, and Qwen2.5-Coder-7B. This diverse set of comparison methods enables a comprehensive evaluation of CGM’s effectiveness in long-context retrieval and understanding. As shown in Table 4, CGM typically outperforms other RAG methods, regardless of the base model used, suggesting that graph-based context retrieval is more effective for code completion tasks. Moreover, CGM’s superiority over RepoFuse, which also uses code graphs for retrieval, can be attributed to CGM’s explicit integration of structural information within the subgraph, whereas RepoFuse flattens node context into text sequences, obscuring the explicit dependencies among code entities.
# 5.3 Ablation Studies
In this section, we present key findings from our ablation studies, with detailed analysis available in Appendix C.7. Our investigation reveals four crucial insights: (i) Graph RAG: Our assessment of the Graph RAG modules shows that the presence of Rewriter, Retriever, and Reranker is essential for achieving optimal performance on the SWE-bench Lite benchmark. Notably, Reranker plays a pivotal role as it dictates which files should be modified. (ii) Semantic Integration: Joint fine-tuning of all three components—encoder $\mathcal { E }$ , the adapter $\mathcal { A }$ , and the decoder $\mathcal { D }$ —yields superior performance compared to keeping any component fixed. (iii) Structural Integration: The integration of graph structural information through attention masking is essential for optimal performance. (iv) Training Strategies: The subgraph reconstruction task, as described in Section 4.2, significantly contributes to improving the CGM’s overall performance. (v) Backbone Generalization: Moreover, CGM can also be generalized on backbones with different sizes, demonstrating its potential for resource-constrained scenarios. | Recent advances in Large Language Models (LLMs) have shown promise in
function-level code generation, yet repository-level software engineering tasks
remain challenging. Current solutions predominantly rely on proprietary LLM
agents, which introduce unpredictability and limit accessibility, raising
concerns about data privacy and model customization. This paper investigates
whether open-source LLMs can effectively address repository-level tasks without
requiring agent-based approaches. We demonstrate this is possible by enabling
LLMs to comprehend functions and files within codebases through their semantic
information and structural dependencies. To this end, we introduce Code Graph
Models (CGMs), which integrate repository code graph structures into the LLM's
attention mechanism and map node attributes to the LLM's input space using a
specialized adapter. When combined with an agentless graph RAG framework, our
approach achieves a 43.00% resolution rate on the SWE-bench Lite benchmark
using the open-source Qwen2.5-72B model. This performance ranks first among
open weight models, second among methods with open-source systems, and eighth
overall, surpassing the previous best open-source model-based method by 12.33%. | [
"cs.SE",
"cs.LG"
] |
# 1 Introduction
Autoregressive LLMs demonstrate impressive performance across a wide range of tasks, including logical reasoning [Pan et al., 2023], theorem proving [Yang et al., 2023], and code generation [et. al., 2021]. However, because they generate one token at a time, they can be slow when producing long responses. Recent work has explored using diffusion models to accelerate token generation by predicting blocks of tokens in parallel. For tasks such as logical reasoning, where the LLM output is fed into symbolic solvers like Z3 [Fedoseev et al., 2024], syntactic correctness of the output is essential. Prior works [Poesia et al., 2022, Ugare et al., 2024a, Loula et al., 2025] have shown that LLMs frequently make syntactic and semantic errors, often generating structurally invalid outputs that cause downstream tasks to fail due to unparsable input. To mitigate this issue, constrained decoding has emerged as a promising approach that provably ensures structural correctness by projecting the LLM output onto a set of valid strings, typically defined by a regular grammar or, more generally, a context-free grammar (CFG). However, existing constrained decoding techniques are designed specifically for autoregressive LLMs and rely on their step-by-step generation process to prune invalid tokens that cannot lead to structurally valid outputs. At each generation step, the decoder selects the highest-probability token from the set of valid options, based on the LLM’s output distribution.
In contrast, diffusion LLMs predict blocks of tokens in parallel without sequential dependencies, making existing constrained decoding algorithms incompatible. Furthermore, greedy token selection in autoregressive models maximizes the probability locally at each step but can be suboptimal over an entire sequence, potentially leading to structurally valid yet lower-quality outputs that fail to maximize the overall probability of valid strings. Lew et al. [2023], Park et al. [2024b] have reported this distortion in output distribution for autoregressive LLMs under constrained decoding. Therefore, any constrained decoding algorithm for diffusion LLMs should also ensure that enforcing formal constraints does not come at the cost of distorting the true output distribution.
Key Challenges: Diffusion LLMs generate a block of tokens starting from a fully masked string composed of special mask tokens $\perp$ , and iteratively unmask one or more tokens at each step until producing a fully unmasked output. Each unmasking step (referred to as a diffusion step) can unmask tokens at arbitrary positions in the block, with no left-to-right sequential dependency across steps. As a result, designing constrained decoding for diffusion LLMs requires addressing the following:
• RQ1: Efficiently detecting invalid tokens and restricting token choices at each diffusion step to ensure the final unmasked string is always structurally correct. • RQ2: Ensuring the generated token block maximizes the probability under the output distribution.
Contributions: We present the first constrained decoding algorithm for diffusion LLMs, making the following contributions:
• We introduce DINGO, the first constrained decoding algorithm for diffusion LLMs that supports any user-specified regular expression. DINGO provably ensures that the output string is always a valid prefix of some string in the target regular language.
• DINGO uses dynamic programming to ensure that the output string achieves the maximum probability among all valid strings over the output block with respect to the true output distribution. This approach guarantees scalability while maintaining optimality (e.g., maximizing the probability), in contrast to existing methods such as [Park et al., 2024b], which rely on repeated resampling. Resampling-based methods are computationally expensive and unsuitable for practical deployment.
• Extensive experiments on multiple open-source diffusion LLMs and benchmarks show that DINGO significantly outperforms standard unconstrained decoding, achieving up to a $6 8 \%$ improvement on challenging tasks such as the GSM-symbolic benchmark for symbolic reasoning [Mirzadeh et al., 2024] and a JSON generation benchmark [NousResearch, 2024].
Roadmap: We provide the necessary background in Section 2, formalize constrained decoding for diffusion LLMs in Section 3, describe the DINGO algorithm along with its correctness and optimality proofs in Section 4, and present experimental results in Section 5.
# 2 Background
Notation: : In the rest of the paper, we use small case letters $x$ for constants, bold small case letters ${ \bf \Pi } ( { \pmb x } )$ for strings, capital letters $X$ for functions, $\mathbf { \nabla } \cdot \mathbf { \sigma } _ { \cdot }$ for string concatenation, $| { \pmb x } |$ to denote the length of $\pmb { x }$
Diffusion LLM: The diffusion LLM $\mathcal { L } _ { m , n } : V ^ { m } \to V ^ { n }$ processes finite strings $\pmb { p } \in V ^ { m }$ over a finite alphabet $V$ including the special mask symbol $\perp$ and produces the output string $o \in V ^ { n }$ . Typically $\pmb { o } = \pmb { p } \cdot \pmb { r }$ with length $n$ represents the entire output string of $\mathcal { L }$ where $\pmb { p }$ is the input prompt, $r$ is the response, and $m + | { \boldsymbol { r } } | = n$ . $\mathcal { L }$ can compute the response $r$ over a single block Austin et al. [2021], Ye et al. [2025], Nie et al. [2025] in pure diffusion setup or over multiple blocks i.e. $r _ { 1 } \cdot r _ { 2 } \cdot \cdot \cdot r _ { k }$ in a semi-autoregressive setup where different blocks are computed sequentially from left to right [Han et al., 2023, Arriola et al., 2025].
At a high level, to compute a block of tokens of size $d , { \mathcal { L } }$ pads the prompt $\pmb { p }$ with a fully masked suffix, resulting in $\pmb { p } \cdot \perp ^ { d }$ , where $\perp ^ { d }$ denotes a sequence of $d$ special mask tokens $\perp$ . The model then iteratively unmasks a subset of these tokens at each step, ultimately producing a fully unmasked output string $\pmb { o }$ . Each such step is referred to as a diffusion step, and $\mathcal { L }$ typically applies $T$ diffusion steps to compute $\pmb { o }$ . The number of steps $T$ is usually a fixed, predetermined constant satisfying $T < d$ , which enables greater scalability compared to their autoregressive counterparts.
Definition 2.1 (Diffusion step). A diffusion step $f _ { n } : V ^ { n } \times \mathbb { N } \to V ^ { n }$ applies a single unmasking step to a masked (or, a partially masked) string of length to compute a new masked (or, possibly unmasked) string of the same length. The first argument represents the input string appended with the output block while the second argument dictates the number of masked tokens in the output string.
Each diffusion step $f _ { n }$ consists of two components: a transformer step $\mathcal { N } _ { n } : V ^ { n } \to \mathbb { R } _ { + } ^ { | V | \times n }$ which predicts the token probability distribution at each output position, and a mask prediction step $\mathcal { M } _ { n } : \mathbb { R } _ { + } ^ { | V | \times n } \times \mathbb { N } \to \mathbb { R } _ { + } ^ { | V | \times n }$ , which determines which token positions to remask. Typically, for each position, the mask prediction step identifies the token with the highest probability and compares these maximum probabilities across positions. $\mathcal { M } _ { n }$ then greedily remasks positions with relatively lower max-probability scores [Nie et al., 2025] and produces the modified token distribution. Further details about $\mathcal { N } _ { n }$ and $\mathcal { M } _ { n }$ are in Appendix A.
Formally, the diffusion step is defined as $f _ { n } ( \pmb { x } _ { i - 1 } , i ) = D _ { m , n } \big ( \mathcal { M } _ { n } ( \mathcal { N } _ { n } ( \pmb { x } _ { i - 1 } ) , i ) \big )$ where $D _ { m , n } :$ R|+V |×n → V n is the decoder. We now use the diffusion step to formally define the diffusion LLM for generating strings of length $n$ in either a single-block or multi-block setting.
Definition 2.2 (Single block diffusion LLM). A diffusion LLM that outputs a block of $d$ tokens given an input $\pmb { p } \in V ^ { m }$ using $T$ diffusion steps is a function $\mathcal { L } _ { m , n } : V ^ { m } \to V ^ { n }$ , where $n = m + d$ and the output is $\pmb { o } = \pmb { p } \cdot \pmb { r } = \mathcal { L } _ { m , n } ( \pmb { p } )$ . Let ${ \bar { f _ { n } } } : V ^ { n } \times \mathbb { N } \to V ^ { n }$ denote a single diffusion step, and let $P _ { m , n } : V ^ { m } V ^ { n }$ be the padding function. Then the output is computed as ${ \pmb O } = \mathcal { L } _ { m , n } ( { \pmb p } ) = { \pmb x } _ { T }$ where: $\pmb { x } _ { 0 } = P _ { m , n } ( \pmb { p } ) = \pmb { p } \cdot \perp ^ { d }$ and ${ \pmb x } _ { i } = f _ { n } ( { \pmb x } _ { i - 1 } , i )$ for $1 \leq i \leq T$ .
Definition 2.3 (Semi Autoregressive diffusion LLM). In the semi-autoregressive setup, given an input $\pmb { p } \in V ^ { m }$ , the output $\pmb { o } \in V ^ { m + d \times k }$ is generated over $k$ blocks, where each block is computed via a call to the single block diffusion model. The output of the $i$ -th diffusion model call is ${ \pmb x } _ { i } = \mathcal { L } _ { m _ { i } , n _ { i } } ( { \pmb x } _ { i - 1 } )$ , with $\pmb { x } _ { 0 } = \pmb { p }$ and the final output $\pmb { o } = \pmb { x } _ { k }$ . The input and output lengths for each block are defined as $m _ { i } = m + ( i - 1 ) \times d$ and $n _ { i } = m + i \times d$ for all $1 \leq i \leq k$ .
DFA and regular expression: We provide necessary definitions regarding regular expression.
Definition 2.4. (DFA) A DFA $\boldsymbol { D _ { \mathcal { R } } } = ( Q , \Sigma , \delta , q _ { 0 } , F )$ for a regular expression $\mathcal { R }$ is a finite-state machine that deterministically processes input strings to decide membership in the language $L ( \mathcal { R } ) \subseteq$ $\Sigma ^ { * }$ defined by $\mathcal { R }$ . It consists of states $Q$ , a start state $q _ { 0 }$ , a set of accepting states $F$ , and transition rules $\delta : Q \times \Sigma Q$ and the input alphabet $\Sigma$ .
Definition 2.5 (extended transition function). The extended transition function $\delta ^ { * } : \Sigma ^ { * } \times Q \to Q$ maps an input $( { \pmb w } , { \pmb q } )$ to the resulting state $q _ { r }$ , obtained by sequentially applying $\delta$ to each character $c _ { i }$ in $\pmb { w } = c _ { 1 } \cdot \cdot \cdot c _ { | \pmb { w } | }$ , starting from state $q$ .
Definition 2.6 (Live DFA states). Given a DFA $( Q , \Sigma , \delta , q _ { 0 } , F )$ , let $Q _ { l }$ represent the set of live states such that $q \in Q _ { l }$ iff $\exists w \in \Sigma ^ { * }$ s.t. $\delta ^ { * } ( { \pmb w } , { \pmb q } ) \in { \cal F }$ .
# 3 Optimal Constrained Decoding
We formalize the correctness and optimality of constrained decoding for any diffusion LLM with respect to a user-defined regular expression $\mathcal { R }$ . Given $\mathcal { R }$ , let $L ( \mathcal { R } ) \subseteq \Sigma ^ { * } \subseteq ( V \setminus \bot ) ^ { * }$ denote the set of all finite strings that satisfy the expression $\mathcal { R }$ .
Correctness: A valid constrained decoding algorithm must ensure that the output string always remains a valid prefix of some string in $L ( \mathcal { R } )$ , effectively eliminating any output that cannot be extended into valid completions. By treating the output string as a prefix rather than a fully completed string, we can accommodate the semi-autoregressive setup, where blocks of tokens are appended to the right of the current output. This approach avoids prematurely rejecting strings that may lead to valid completions in subsequent blocks and also aligns with the notion of correctness adopted in existing constrained decoding algorithms for the autoregressive LLM [Ugare et al., 2024b, Banerjee et al., 2025]. We denote the set of all valid prefixes of $L ( \mathcal { R } )$ as $L _ { P } ( \mathcal { R } )$ .
Each diffusion step $f _ { n }$ produces a string over the vocabulary $V$ , which may include one or more special mask tokens . These tokens act as placeholders for actual (non-mask) tokens that will be filled in during future diffusion steps. To account for these future substitutions, we define a masked (or partially masked) string as valid if there exists a replacement for all mask tokens such that the resulting fully unmasked string is a valid prefix of some string in $L ( \mathcal { R } )$ . To formalize this notion, we first define the substitution set, which represents the set of fully unmasked strings obtained by replacing all mask tokens in a masked or partially masked string. We then use substitution sets to define the correctness of the constrained decoder.
Definition 3.1 (Substitution Set). Given a masked (or, partially masked) string $\pmb { x } \in V ^ { n }$ , the substitution set $S ( { \pmb x } ) \subseteq ( V \setminus \{ \bot \} ) ^ { n }$ is the set of all fully unmasked strings obtained by replacing each occurrence of $\perp$ in $\pmb { x }$ with a token from $V \setminus \{ \bot \}$ . For unmasked strings with no $\perp$ $\bar { , } \bar { S } ( \bar { \pmb x } ) = \bar { \{ \pmb x \} }$
Definition 3.2 (Correctness of Constrained decoder). Any deterministic decoder $D _ { m , n , \mathcal { R } }$ : $\mathbb { R } _ { + } ^ { | V | \times n } \to V ^ { n }$ is a valid constrained decoder if, for all $n \in \mathbb { N }$ , input prompt $\pmb { p }$ and for any output distribution $\mathcal { D } _ { n }$ provided as $n$ probability vectors each of size $| V |$ , there exists an unmasked string $\pmb { x }$ in the substitution set $S ( D _ { m , n , \mathcal { R } } ( \mathcal { D } _ { n } ) )$ of the decoded output such that actual response $\pmb { p } \cdot \pmb { r } = \pmb { x }$ is a valid prefix i.e., $\pmb { r } \in L _ { P } ( \mathcal { R } )$ . 2
Optimality: Given a distribution $\mathcal { D } _ { n }$ and a regular expression $\mathcal { R }$ , the set of decodings that are valid prefixes for $\mathcal { R }$ (as defined in Definition 3.2) may not be unique. An optimal constrained decoder selects, among all valid strings, the string that maximizes the probability under $\mathcal { D } _ { n }$ . The output distribution $\mathcal { D } _ { n }$ is represented as $n$ vectors $\pmb { v } _ { 1 } , \ldots , \pmb { v } _ { n }$ , each of size $| V |$ , where the $i$ -th vector $\pmb { v } _ { i }$ captures the token distribution at position $i$ . For any masked position ${ j , \pmb v _ { j } }$ assigns probability 1 to the mask token $\perp$ and 0 to all other tokens. Assuming the input prompt has length $m$ , the token distribution of the actual response is given by $\pmb { v } _ { m + 1 } , \dots , \pmb { v } _ { n }$ . For any output string $\pmb { r } = t _ { m + 1 } \cdots t _ { n }$ , let $P ( \pmb { r } \mid \pmb { v } _ { m + 1 } \ldots \pmb { v } _ { n } )$ denote the probability of the string $\boldsymbol { r }$ under the output distribution. Then, the optimal constrained decoding can be formalized as follows:
$$
r ^ { * } = \arg \operatorname* { m a x } _ { r } P ( \pmb { r } | \pmb { v } _ { m + 1 } . . . \pmb { v } _ { n } ) \mathrm { s . t . } \exists \pmb { x } \in V ^ { * } . ( \pmb { x } \in S ( \pmb { r } ) ) \land ( \pmb { x } \in L _ { P } ( \mathcal { R } ) )
$$
Since the token distributions $\pmb { v } _ { m + 1 } , \dots , \pmb { v } _ { n }$ are independent across positions, the probability of the string $\boldsymbol { r }$ can be written as $\begin{array} { r } { P ( \pmb { r } | \pmb { v } _ { m + 1 } , \ldots \pmb { v } _ { n } ) = \prod _ { i = m + 1 } ^ { n ^ { - } } \pmb { v } _ { i } [ t _ { i } ] } \end{array}$ where ${ \pmb v } _ { i } [ t _ { i } ]$ denotes the probability assigned to token $t _ { i }$ by the vector $\mathbf { \nabla } \pmb { v } _ { i }$ . Using this, we can rewrite the optimization problem from Eq. 1 as follows:
$$
r ^ { * } = \operatorname * { a r g m a x } _ { r = t _ { m + 1 } \cdots t _ { n } } \prod _ { i = m + 1 } ^ { n } v _ { i } [ t _ { i } ] { \mathrm { ~ s . t . ~ } } \exists x \in V ^ { * } . ( { \pmb x } \in S ( r ) ) \land ( { \pmb x } \in L _ { P } ( \mathcal { R } ) )
$$
# 4 DINGO Algorithm
The search space for Eq. 2 is exponential– $| V | ^ { d }$ , where $d = n - m$ denotes the block length, making naive enumeration-based methods impractical. To efficiently retrieve the optimal output string $r ^ { * }$ from Eq. 2, DINGO leverages dynamic programming. Given a regular expression $\mathcal { R }$ , it first modifies the transition function to handle the mask symbol $\perp$ , which is then utilized during inference.
# 4.1 Precomputation
For a user-provided $\mathcal { R }$ and the corresponding DFA $D _ { \mathcal { R } } = ( Q , \Sigma , \delta _ { \mathcal { R } } , q _ { 0 } , F )$ (referred to as characterlevel DFA) with $\Sigma \subseteq ( V \setminus \bot )$ , we first construct the token-level DFA $\boldsymbol { D _ { t } } = ( Q , ( V \setminus \perp ) , \delta _ { t } , q _ { 0 } , F )$ recognizing $L ( \mathcal { R } )$ over strings generated by $\mathcal { L }$ . A single token $\pmb { t } \in ( V \setminus \bot )$ can span across multiple characters in $\Sigma$ i.e. $\pmb { t } = c _ { 1 } \cdots c _ { l }$ where $c _ { i } \in \Sigma$ . To construct the token-level transition function $\delta _ { t } : Q \times ( V \setminus \bot ) \to Q$ , we process each token $\pmb { t } \in ( V \setminus \bot )$ and state $q \in Q$ by executing the character-level DFA $D _ { \mathcal { R } }$ on the sequence of constituent characters $c _ { 1 } \cdot \cdot \cdot c _ { l }$ , starting from state $q$ , and record the resulting state $q _ { r }$ . We then define the token-level transition as $\delta _ { t } ( q , t ) = q _ { r }$ .
To handle the special mask token $\perp \in V$ , we define the transition function $\delta _ { \perp } : Q 2 ^ { Q }$ . For each state $q \in Q , \delta \overset { \vartriangle } { \bot } ( q )$ returns the set of states $Q _ { r } \subseteq Q$ that are reachable via a single token transition using $\delta _ { t }$ . Formally, $\delta _ { \perp } ( q ) = \{ q ^ { \prime } \mid q ^ { \prime } = \delta _ { t } ( q , t ) ; t \in \left( V \setminus \perp \right) \}$ . Since $\delta _ { \perp }$ may return multiple states, it resembles the transition function of a non-deterministic finite automaton (NFA). The precomputation step combines $\delta _ { t }$ and $\delta _ { \perp }$ to define $\delta : Q \times V 2 ^ { Q }$ , which is used in the dynamic programming step. Using the token-level DFA $D _ { t }$ , we also construct the set of live states $Q _ { l } \subseteq Q$ (Definition 2.6).
$$
\delta ( q , t ) = { \left\{ \begin{array} { l l } { \{ \delta _ { t } ( q , t ) \} } & { { \mathrm { i f ~ } } t \in ( V \setminus \perp ) , } \\ { \delta _ { \perp } ( q ) } & { { \mathrm { i f ~ } } t = \perp . } \end{array} \right. }
$$
# 4.2 DINGO Dynamic Programming
Before going into details, we present two key observations that lead to the decoding algorithm.
Observation 1: Determining whether a fully unmasked string $r = t _ { 1 } \cdot \cdot \cdot t _ { d } \in ( V \setminus \perp ) ^ { * }$ is a valid prefix is equivalent to checking whether the resulting state $q _ { r }$ , obtained by applying $\delta$ to the sequence $t _ { 1 } \cdot \cdot \cdot t _ { d }$ starting from $q _ { 0 }$ , is live. Similarly, for a partially (or fully) masked string $\pmb { r } _ { \bot }$ , applying $\delta$ to $t _ { 1 } \cdot \cdot \cdot t _ { d }$ yields a set of resulting states $Q _ { r }$ . In this case, $\pmb { r } _ { \bot }$ is a valid prefix if and only if any state $q \in Q _ { r }$ is live (Definition 3.2).
Observation 2: For optimality, it is sufficient to track the maximum probability path from the start state $q _ { 0 }$ to each resulting state in $Q _ { r }$ . Once these paths are computed, we select the one with the highest probability that leads to a live state. The corresponding string is the optimal string $\pmb { r } ^ { * }$ (or one of the optimal strings in case of multiple optimal solutions) for the optimization problem in Eq. 2.
Based on these observations, the main challenge is to efficiently maintain the maximum probability path to each reachable state in $Q _ { r }$ . We address this using a dynamic programming (DP) approach, similar to traditional graph-based DP algorithms such as [Forney, 1973].
DP states: For each token position $1 \leq i \leq d$ in the block, the DP maintains: a) $W [ i , q ]$ , which records the maximum probability with which a state $q \in Q$ can be reached from the start state $q _ { 0 }$ via transitions on some token sequence with length $i$ ; and b) $P r [ q , i ]$ , which stores the last transition, i.e., the previous state and the corresponding token, that led to the maximum probability stored in $W [ i , q ]$ . If a state $q$ is unreachable, then $W [ i , { \bar { q } } ] = 0$ . Formally, given the probability vectors $\pmb { v } _ { 1 } , \ldots , \pmb { v } _ { i }$ , $W [ i , q ]$ is defined as follows where ${ \delta } _ { t } ^ { * }$ is extended transition function (Definition 2.5).
$$
W [ i , q ] = \operatorname * { m a x } _ { t _ { 1 } \ldots t _ { i } } \prod _ { j = 1 } ^ { i } v _ { j } [ t _ { j } ] \quad { \mathrm { s . t . ~ } } q = \delta _ { t } ^ { * } ( t _ { m + 1 } \cdot \cdot \cdot t _ { n } , q _ { 0 } )
$$
DP state update: Given the states at token position $i$ , we describe the computation for position $i + 1$ . Initially, $W [ i , q ] = 0$ for all $q \neq q _ { 0 }$ , and $W [ i , q _ { 0 } ] = 1$ (lines $1 - 3$ in Algo. 1). To compute $W [ i + 1 , q ]$ for each $q \in Q$ , we consider all tokens $t \in V$ (including the mask token $\perp$ ) that can transition to $q$ from some previous state $q ^ { \prime }$ at step $i$ . Among all such transitions, we select the one with the highest probability and add it to the maximum probability path reaching $q ^ { \prime }$ at step $i$ . The value $P r [ i + 1 , q ]$ stores the previous state and token that lead to the maximum probability path to $q$ at step $i + 1$ (lines 12 – 15 in Algo. 1). Formally,
$$
V _ { i + 1 } ( q , q ^ { \prime } ) = \left\{ \begin{array} { l l } { \underset { t \in V } { \operatorname* { m a x } } ~ \pmb { v } _ { i + 1 } ( t ) ~ \mathrm { s . t . } ~ q \in \delta ( q ^ { \prime } , t ) } \\ { 0 ~ \mathrm { i f } ~ q , q ^ { \prime } ~ \mathrm { a r e ~ n o t ~ c o n n e c t e d } ~ } \end{array} \right. ~ W [ i + 1 , q ] = \underset { q ^ { \prime } \in Q } { \operatorname* { m a x } } W [ i , q ^ { \prime } ] \times V _ { i + 1 } ( q , q ^ { \prime } )
$$
Path construction: We consider all reachable states $q$ at the end of the block with $W [ d , q ] > 0$ . Among the live states $q _ { l } \in Q _ { l }$ satisfying this condition, we select the state $q _ { \mathrm { m a x } }$ with the highest value of $W [ \bar { d } , q _ { l } ]$ . We then use $P r$ to iteratively reconstruct the token sequence backward that forms the maximum probability path starting from $q _ { \mathrm { m a x } }$ and ending at $q _ { 0 }$ (lines $2 0 - 2 2$ in Algo. 1).
Semi-autoregressive setup: In semi-autoregressive setup, we may not start from DFA start state $q _ { 0 }$ since one or more blocks of tokens $\pmb { r } _ { 1 } \cdots \pmb { r } _ { l }$ may have been generated in left the current block. Provide the string $\pmb { r } _ { 1 } \cdots \pmb { r } _ { l }$ ends at a live state $q _ { l }$ , we can apply dynamic programming approach with the intializtion $W [ 0 , q _ { l } ] = 1$ and $W [ 0 , q ] = 0$ for all state $q \neq q _ { l }$ . Details are in Appendix D.
# 4.3 Correctness of DINGO
Proposition 4.1. [Correctness] Given any regular expression $\mathcal { R }$ , input prompt $\pmb { p } \in V ^ { m }$ , block length $d ,$ , output distribution $\mathcal { D } _ { m + d } = \pmb { v } _ { 1 } \dots \pmb { \mathscr { v } } _ { m + d } ,$ i $f L _ { P } ( \bar { \mathcal { R } } ) \cap ( V \setminus \perp ) ^ { d } \not = \bar { \left\{ \right\} }$ and $\pmb { r } \sim \pmb { v } _ { m + 1 } \dots \pmb { v } _ { m + d } \ b$ e the decoded string, then $\exists x \in V ^ { * } . ( { \pmb x } \in S ( { \pmb r } ) ) \land ( { \pmb x } \in L _ { P } ( { \mathcal { R } } ) )$ holds.
Proof sketch: DINGO ensures that if a state $q \in Q$ is reachable in $i$ tokens, then $W [ i , q ] > 0$ for all $1 \leq i \leq d$ . Since $L _ { P } ( \mathcal { R } ) \cap ( V \setminus \bot ) ^ { d } \neq \{ \}$ , there exists a state $q _ { l } \in Q _ { l }$ that is reachable in $d$ steps. Therefore, $W [ d , q _ { m a x } ] > 0$ (see line 16 in Alg.1). Consequently, there exists a sequence $\pmb { x } \in \mathcal { S } ( \pmb { r } )$ such that $\delta ^ { * } ( { \pmb x } , q _ { 0 } ) = q _ { m a x } \in Q _ { l }$ , implying that $\pmb { x } \in L _ { P } ( \mathcal { R } )$ . Formal proof is in AppendixB.
Proposition 4.2. [Optimality] Given any regular expression $\mathcal { R }$ , input prompt $\pmb { p } \in V ^ { m }$ , block length $d ,$ , output distribution $\mathcal { D } _ { m + d } = \pmb { v } _ { 1 } \dots \pmb { v } _ { m + d } ,$ if $\cdot L _ { P } ( \mathcal { R } ) \cap ( V \setminus \bot ) ^ { \widehat { d } } \not = \bar { \not \{ \} } \ \{ \}$ and $\pmb { r } ^ { * } \sim \pmb { v } _ { m + 1 } \dots \pmb { \cdot v } _ { m + d }$ be the decoded string, then for any valid string $\pmb { r } ^ { \prime }$ satisfying $\exists { \pmb x } \in V ^ { * }$ . $( { \pmb x } \in { \mathcal { S } } ( { \pmb r } ^ { \prime } ) ) \wedge ( { \pmb x } \in L _ { P } ( { \mathcal { R } } ) )$ , $P ( \pmb { r } ^ { \prime } \mid \pmb { v } _ { m + 1 } \ldots \pmb { v } _ { n } ) \leq P ( \pmb { r } ^ { * } \mid \pmb { v } _ { m + 1 } \ldots \pmb { v } _ { n } )$ .
Proof Sketch: Formal proof is in Appendix B.
Require: $q _ { 0 }$ , block length $d$ , probability vectors $\pmb { v } _ { 1 } , \ldots . . . \pmb { v } _ { d }$ for the current block, $Q _ { l } , Q , \delta$ .
1: $W [ 0 , q ] \gets 0$ for all $( q \in Q ) \land ( q \neq q _ { 0 } )$
2: $W [ 0 , q _ { 0 } ] \gets 1$
3: $P r [ 0 , q ] $ (None, None) for all $( q \in Q )$ ) $D$ Initialization of the DP
4: ${ V } _ { i } \gets \{ \}$ for all $i \in \{ 1 , \ldots , d \}$ $D$ maximum token probability transtion $( q ^ { \prime } q )$ at position $i$
5: $T _ { i } \gets \{ \}$ for all $i \in \{ 1 , \ldots , d \}$ $D$ token for the maximum probability transition $( q ^ { \prime } \to q )$ )
6: for $i \in \{ 1 , \ldots , d \}$ do
7: for $( q \in Q )$ do
8: for $t \in V$ do
9: $\begin{array} { r l } & { q ^ { \prime } \delta ( q , t ) } \\ & { \quad V _ { i } ( q , q ^ { \prime } ) , T _ { i } ( q , q ^ { \prime } ) \mathrm { M a x T r a n s i t i o n } ( v _ { i } , t , q , q ^ { \prime } ) } \end{array}$
10:
11: for $i \in \{ 1 , \ldots , d \}$ do ▷ DP computation loop
12: for $( { \dot { q } } \in Q ) \land ( q ^ { \prime } \in Q )$ do
13: if $W [ i , q ] < W [ i - 1 , q ^ { \prime } ] \times V _ { i } ( q , q ^ { \prime } )$ then
14: $W [ i , q ] W [ i - 1 , q ^ { \prime } ] \times V _ { i } ( q , q ^ { \prime } )$ ▷ Update maximum probability path to $q$
15: P r[i, q] ← (q′, Ti(q, q′)) $D$ Update the parents accordingly
16: $q _ { m a x } \gets \arg \operatorname* { m a x } _ { \scriptstyle q \in Q _ { l } } W [ d , q ]$
17: if $W [ d , q _ { m a x } ] = 0$ then ▷ No valid prefixes
18: return None, qmax
19: ${ \pmb r } ^ { * } \{ \}$ , $q _ { c u r r } \gets q _ { m a x }$
20: for $i \in \{ d , \ldots , 1 \}$ do ▷ Decoding the optimal string $\pmb { r } ^ { * }$
21: $q _ { c u r r } , t \gets P r [ i , q _ { c u r r } ]$
22: r∗ r∗ t
23: return reverse $( \pmb { r } ^ { * } )$ , $q _ { m a x }$
# 4.4 DINGO algorithm
Algorithm 1 presents DINGO steps. The two main loops dominating its computational complexity involve calculating transition costs and performing the DP updates respectively.
First, for each of the $d$ time steps, the algorithm computes the optimal single-token transition costs $V _ { i } ( q _ { s } , q _ { t } )$ between all source states $q _ { s } \in Q$ and target states $q _ { t } \in Q$ . This is achieved by iterating through each source state $q _ { s }$ , each token $t \in V$ , and then for each state $q _ { t }$ reached from $q _ { s }$ via $t$ (i.e., $q _ { t } \in \delta ( q _ { s } , t ) )$ , updating the cost $V _ { i } ( q _ { s } , q _ { t } )$ with ${ \pmb v } _ { i } [ t ]$ if it is better. The complexity for this part is $\begin{array} { r } { O ( d \cdot ( | \bar { Q } | ^ { 2 } + \bar { \sum } _ { q _ { s } \in Q } \bar { \sum } _ { t \in V } | \delta ( q _ { s } , \dot { t } ) | ) ) } \end{array}$ . The sum $\dot { \textstyle \sum _ { q _ { s } } \sum _ { t } | \delta ( q _ { s } , t ) | }$ represents the total number of transitions, $N _ { \mathrm { t r a n s } } = O ( | Q | \cdot | V | + | Q | \cdot N _ { \perp } )$ , where $N _ { \perp }$ is the maximum number of states reachable via the $\perp$ token. Thus, this part takes $O ( d \cdot ( | Q | ^ { 2 } + | Q | \cdot | V | ) )$ ).
Second, the core dynamic programming update calculates $W [ i , q ]$ for each diffusion step $i$ and state $q$ . This involves iterating over $d$ diffusion steps, $| Q |$ current states $q$ , and for each $q$ , considering all $| Q |$ possible previous states $q ^ { \prime }$ . This leads to a complexity of $O ( d \cdot \bar { | Q | } ^ { 2 } )$ .
Combining these dominant parts, the total complexity is $O ( d \cdot ( | Q | ^ { 2 } + | Q | \cdot | V | ) + d \cdot | Q | ^ { 2 } )$ , which simplifies to $O ( d \cdot ( | Q | ^ { 2 } + \mathsf { \bar { | } } Q | \cdot | V | ) )$ . This can be expressed as $O ( d \cdot | Q | \cdot ( | Q | + | V | ) )$ .
# 5 Experiments
In this section, we evaluate DINGO on a math reasoning task (GSM-Symbolic Mirzadeh et al. [2024]) and a schema-based text-to-JSON task (JSONModeEval [NousResearch, 2024]) and demonstrate significant improvement over baselines. In both tasks, we use the LLaDA-8B-Base (LLaDA-8BB) Nie et al. [2025], LLaDA-8B-Instruct (LLaDA-8B-I) Nie et al. [2025], Dream-v0-Base-7B (Dream-B-7B) Ye et al. [2025], and Dream-v0-Instruct-7B (Dream-I-7B) Ye et al. [2025] models.
Experimental Setup. We run experiments on a 48-core Intel Xeon Silver 4214R CPU with 2 Nvidia RTX A5000 GPUs. DINGO is implemented using PyTorch Paszke et al. [2019] and the HuggingFace transformers library Wolf et al. [2020]. The token-level DFA is implemented in Rust using a highly efficient regex-DFA library to minimize overhead during DFA construction and LLM inference. We report the mean number of DFA states and transitions as well as the offline pre-computation time in Appendix E.
Baselines. We compare DINGO against unconstrained diffusion LLM generation. Furthermore, to highlight the benefit of optimal constrained decoding with DINGO, we implement a constrained decoding strategy Greedy Constrained that mirrors existing autoregressive constrained generation methods Willard and Louf [2023], Ugare et al. [2024b]. Greedy Constrained iterates over the diffusion block and at each position $i$ computes a binary mask $m \in \{ 0 , 1 \} ^ { | V | }$ based on the DFA, specifying valid tokens $\mathbf { \bar { \rho } } _ { m } = 1 \mathbf { \bar { \rho } } _ { . }$ ) and excluded tokens $\mathbf { \bar { \rho } } _ { m } = 0 \mathbf { \bar { \rho } } _ { \mathrm { ~ . ~ } }$ ). Decoding is then performed on the masked probability distribution $m \odot v _ { i }$ , where $\odot$ denotes element-wise multiplication. Since in some cases, Unconstrained outperforms Greedy Constrained, we also report Best of Greedy $+$ Unconstrained , which takes the better result of the two approaches for each problem in the dataset.
Math Reasoning: We evaluate DINGO on GSM-Symbolic Mirzadeh et al. [2024] dataset, which consists of reasoning-based math world problems where numerical values and names are replaced by symbolic variables. Diffusion LLMs are tasked with generating correct symbolic expression solutions to those word problems. We evaluate correctness by using the Z3 solver [De Moura and Bjørner, 2008] to check if the final expressions from the LLM generations are functionally equivalent to the ground truth expressions. We set the generation length to 128, number of blocks to 8, and total diffusion steps to 64 and prompt the LLMs with 4-shot examples from GSM-Symbolic [Mirzadeh et al., 2024] (the prompts can be found in Appendix F.1). We initialize DINGO and Greedy Constrained with a regex (shown in Appendix F.2) that permits math expressions wrapped in $\ll$ and $\gg$ and natural language text outside these expressions for reasoning as done in CRANE Banerjee et al. [2025].
Table 1 compares the performance of DINGO with the baseline methods. The Accuracy $( \% )$ column reports the percentage of functionally correct LLM-generated expressions, Parse $( \% )$ indicates the percentage of syntactically valid responses (i.e., expressions without invalid operations), and Time provides the average time in seconds taken to generate a completion.
As displayed in the table, DINGO significantly improves functional correctness over the baselines. For instance, for LLaDA-8B-I, DINGO outperforms unconstrained generation by 13 percentage points and Greedy Constrained generation by 5 percentage points. Furthermore, DINGO achieves $100 \%$ syntactic accuracy across all models evaluated. On the other hand, unconstrained and Greedy Constrained generation make many syntactic errors, especially for non-instruct tuned models. For these cases, generation with Greedy Constrained results in responses that are syntactically valid prefixes but not syntactically valid by themselves. We present case studies in Appendix F.3. Importantly, DINGO is extremely efficient, introducing marginal overhead compared to unconstrained generation.
Table 1: Comparison of constrained and unconstrained generation methods on GSM-Symbolic
JSON Generation: We further evaluate DINGO on a text-to-JSON generation task JSON-ModeEval, which conists of zero-shot problems specifying a JSON schema and a request to generate a
JSON object that contains specified contents. Generating JSON that adheres to a specified schema is extremely important for applications like tool use and function calling Ugare et al. [2024b], Willard and Louf [2023]. We evaluate the correctness of JSON generated by an LLM by first evaluating whether the JSON string can be parsed and converted to a valid JSON object. We further evaluate whether the generated JSON is valid against the schema specified in the prompt. We set the generation length to 128, number of blocks to 1, and the total diffusion steps to 64. For the constrained generation methods, we convert each problem’s JSON schema into its corresponding regular expression and guide the diffusion LLM to generate output conforming to that regex.
Table 2 presents the results of our experiment. The Parse $( \% )$ column reports the percentage of syntactically valid LLM generations while the Accuracy $( \% )$ column reports the percentage of generations that are both syntactically valid and valid against their respective schemas. Notably, DINGO achieves $100 \%$ schema validation and syntactic accuracy, while baseline methods struggle in many cases to generate valid JSON. We attribute this to the fact that Greedy Constrained may distort the distribution through its greedy approximation and can only generate a valid prefix, not a fulll parsable generation Park et al. [2024a].
Table 2: Comparison of constrained and unconstrained generation methods for JSON Schema.
Ablation Study on The Number of Diffusion Blocks: We analyze the performance of DINGO on GSM-Symbolic using different numbers of diffusion blocks. We run generation with a response length of 128, using 64 total diffusion steps, and each of 1, 2, and 8 blocks. As shown in Figure 1, DINGO performs well across all block settings, outperforming baselines in both functional and syntactic correctness. Further ablations on the number of diffusion blocks are presented in Appendix I.
30 1 35 当
%25 T
A10
5 5
0 0 1 2 8 Number of Blocks Number of Blocks (a) LLaDA-8B-I (b) Dream-I-7B
# 6 Related Works
To the best of our knowledge, our work is the first to provide provable guarantees on constrained adherence for inference in diffusion language models. We next discuss the broader set of related works on diffusion language models and constrained language model decoding.
Diffusion Language Models: Diffusion Language Models Austin et al. [2021] have emerged as a promising alternative to traditional autoregressive architectures Radford et al. [2019], offering advantages in parallel processing and controllability while addressing limitations in sequential generation. Recent advances in semi-autoregressive diffusion models Han et al. [2023], Nie et al. [2025], Ye et al. [2025], Arriola et al. [2025] have significantly narrowed the performance gap with autoregressive counterparts. SSD-LM [Han et al., 2023] introduced a semi-autoregressive approach that performs diffusion over the natural vocabulary space, enabling flexible output length and improved controllability by iteratively generating blocks of text while facilitating local bidirectional context updates. More recently, several breakthrough models have advanced the field: LLaDA (Large Language Diffusion with mAsking) achieved competitive performance with SOTA open-source autoregressive models of a similar size like LLaMA3-8B through a forward data masking process and a reverse process, parameterized by a vanilla Transformer to predict masked tokens [Nie et al., 2025]. BD3-LMs (Block Discrete Denoising Diffusion Language Models)Arriola et al. [2025] introduced a novel approach that interpolates between discrete denoising diffusion and autoregressive models while supporting flexible-length generation and improving inference efficiency with KV caching. Most recently, Dream-7BYe et al. [2025] emerged as a strong open diffusion large language model that matches state-of-the-art autoregressive (AR) language models of similar size.
Constrained Decoding with Autoregressive LLMs: Constrained decoding has shown promising results in augmenting autoregressive language models. Researchers have developed efficient techniques for ensuring syntactic correctness in regular [Deutsch et al., 2019, Willard and Louf, 2023, Kuchnik et al., 2023] or context-free [Koo et al., 2024, Ugare et al., 2024a, Dong et al., 2024, Banerjee et al., 2025] languages. Other works have focused on semantically constrained decoding through Monte Carlo sampling [Lew et al., 2023, Loula et al., 2025] or backtracking [Poesia et al., 2022, Ugare et al., 2025]. Lew et al. [2023], Park et al. [2024a] demonstrated that all these approaches that perform greedy constrained approximation for inference can distort the sampling distribution. DINGO addresses this challenge by performing optimal constrained sampling on blocks of tokens in a diffusion language model, which partially mitigates distribution distortion issues.
Concurrent to our work, Cardei et al. [2025] performs constrained sampling from diffusion language models by minimizing a loss function defined using a surrogate model used for scoring constraints. However, their proposed method does not guarantee convergence to the constraint and necessitates a differentiable surrogate model. In contrast, our work focuses on providing provable guarantees for constraint satisfaction during inference without the need of an additional surrogate model.
Limitations DINGO is optimal for per-block generation, making it ideal for pure diffusion settings. However, this optimality may not hold in semi-autoregressive setups involving multiple blocks. Currently, our approach is limited to regular language constraints, while programming languages often belong to context-free or context-sensitive classes. As a result, our method cannot directly enforce these more expressive constraints, which have been addressed in prior work on autoregressive constrained generation. Nonetheless, we believe the core dynamic programming framework behind DINGO can be extended to support richer language classes in future work. Moreover, important constraints like toxicity mitigation fall outside formal language classes, highlighting directions for further research. | Diffusion LLMs have emerged as a promising alternative to conventional
autoregressive LLMs, offering significant potential for improved runtime
efficiency. However, existing diffusion models lack the ability to provably
enforce user-specified formal constraints, such as regular expressions, which
makes them unreliable for tasks that require structured outputs, such as
fixed-schema JSON generation. Unlike autoregressive models that generate tokens
sequentially, diffusion LLMs predict a block of tokens in parallel. This
parallelism makes traditional constrained decoding algorithms, which are
designed for sequential token prediction, ineffective at preserving the true
output distribution. To address this limitation, we propose DINGO, a dynamic
programming-based constrained decoding strategy that is both efficient and
provably distribution-preserving. DINGO enables sampling of output strings with
the highest probability under the model's predicted distribution, while
strictly satisfying any user-specified regular expression. On standard symbolic
math and JSON generation benchmarks, DINGO achieves up to a 68 percentage point
improvement over unconstrained inference | [
"cs.LG",
"cs.PL",
"cs.SE"
] |
# 1 INTRODUCTION
Cosmos DB [2] is a large-scale distributed database service that supports multiple consistency levels [3] across geographic “regions”. Horizontal scalability is achieved by organizing data into “partitions” [4] containing data from non-overlapping hashbased key ranges. Partitions are organized into “partition-sets” and “accounts”. An “account” is a unit of management for users. A partition-set consists of a set of geographically distributed replicated copies of a single key range. Each partition resides in a single geographical “region”. An account contains all the partitionsets that span the entire key range of all the databases present in an account. In a “single-writer” setup, each account has a single “write region” that serves read and write operations and zero or more “read regions” that serve only read operations. Within each partition-set, users issue read operations against partitions in “read regions” and issue read or write operations to a single “write region” partition.
Like all distributed systems, Cosmos DB faces failures from internal and external sources such as datacenter power failures, network partitioning, high resource consumption and software bugs. Some of these failures cause a subset of partitions within the current write region to fail or degrade. Prior to the work described in this paper, Cosmos DB handles such failures by attempting to “failover” an entire account to another geographic region, even if only a single partition has failed. This failover is signaled to clients via a DNS update and transparently handled by the SDK.
The process of failing over all partition-sets in an account can itself fail. For example, a read region partition within a large account might fail during the failover transition. This process is coordinated using a “control plane” set of machines, and the control plane itself can fail due to various reasons. The overall result is that the impact and risk of failing over an account increases with the size of the account and the overall scale of the service.
Geo-failover operations are typically triggered by an operator several minutes into an outage, and it can take time to failover all partitions of all affected accounts in a big fleet of accounts that a single customer could have deployed. Also, in our experience, the impact could start small and later spread in unanticipated ways. This could lead to delays in restoring availability, due to the operator judgement involved in deciding when to failover, and due to failing over their entire fleet of accounts, even if only a partial set of partitions were impacted. The combination of having an extraordinarily large failover impact for single partition outages, reaching the scaling limits of the control plane in the event of a broader outage, and the duration of availability loss during failures suggests a new approach should be considered for failovers.
The Cosmos DB team set out to implement a scheme internally referred to as “Per-Partition Automatic Failover”. This scheme allows each individual partition-set to autonomously choose which region is the current write region, allowing independent decisions to be made for each partition-set. This reduces the number of partitions that need to perform work during partial outages that affect only a few partitions. During larger outages, the ability for the partition-sets to act autonomously reduces the risk that a control plane failure prevents successful restoration of availability of data plane for users. A heartbeat-based failure detection mechanism at the partition-set level automatically triggers failovers within a single partition-set. We target completing region-wide failovers within two minutes of failure being detected.
# 2 BACKGROUND ON COSMOS DB
A Cosmos DB partition is made up of a set of replicas within the region and is currently architected to choose a single long-term replica per partition (the “primary”) whose responsibility is to accept write traffic, then to replicate that write traffic in a tree topology to other replicas. Consensus protocols such as Paxos [25], Multi-Paxos, Egalitarian Paxos [20], and Raft [23] do not strictly require a long-term leader; any leader can begin issuing new rounds of consensus in each of these algorithms. However, choosing a long-term leader typically allows accepting read and write traffic at much high performance. Cosmos DB follows this tradition of selecting a long-term leader.
Cosmos DB’s consistency level offerings [21] and replication strategy evolved over time, leading to its current implementation. Cosmos DB arranges each partition into a set of replicas, each replica being in one of the roles “primary” or “secondary”. These replicas, making up a partition, could be distributed across fault domains and availability zones, within a region. At any time, there is only a single primary replica. Typical operation uses 3 secondary replicas. Only a primary replica can accept “write traffic” from external sources. The primary replica transmits write operations to the secondary replicas in that partition and performs a quorum commit before acknowledging the user.
This arrangement supports the documented consistency levels of Cosmos DB for single-region accounts. Support for multi-region accounts was enabled by adding two new replica roles, “XP primary” and “XP secondary”, where “XP” means “crosspartition”. (These roles also support partition split and merge operations, not discussed at length in this paper.) The primary replica in each partition chooses an “XP primary” replica from the set of secondary replicas. For all read region partitions, the primary replica acts as an “XP secondary”. The write region’s “XP primary” replicates traffic it receives to the “XP secondary” replicas, which are the read region primary replicas. This architecture is visually depicted, albeit with different terminology, in the diagram from [16], “Replica Sets”.
Figure 1 shows a single partition-set, where each “replica-set” represents a single partition. The “forwarder” is the XP primary. The “leader” replica in the left replica-set is the write region primary replica. The two other “leader” replicas represent “XP secondary” replicas. The remaining “followers” are secondary replicas. In this arrangement, a write operation is “committed” to a partition when a quorum of replicas in that partition has written the write operation to disk. A write operation is “globally committed” if a quorum of partitions in a partition-set have committed the write.
Figure 1: Partition-set illustration. US West is the write region whose XP primary is responsible to replicating traffic to the read regions.
Cosmos DB also has a multi-region-write offering. All regions are writable in this setup and hence there isn’t a need to elect a new “write region” for the partition-set, since all partitions are writable.
So, Per-Partition Automatic Failover discussion is restricted to single-writer setup, where there is a user designated “write region” and other regions are in “follower/read-only” capacity.
Cosmos DB uses lease management to make consistency guarantees. This matters most in its “Global Strong Consistency” offering. [3] Cosmos DB is architected to serve read-only queries from its secondary replicas and read regions. Replicas that do not respond to replication messages have their leases revoked or expired and will fail to serve queries until they begin responding to replication messages.
Management of the current primary replica within a partition (a replica-set) is coordinated using Service Fabric. Service Fabric uses Paxos internally to perform state changes within a Service Fabric cluster of machines. Service Fabric has the capability of acting across geographic regions, and so in theory could coordinate Cosmos DB’s failover schemes internally. [5] However, Cosmos DB consists of hundreds of separate Service Fabric clusters, and users can dynamically choose in which geographical regions their accounts are located. Service Fabric’s notion of leader selection does not mesh well with Cosmos DB’s geo-failover needs, so a new mechanism supporting Cosmos DB was needed.
# 3 PER-PARTITION AUTOMATIC FAILOVER
This section provides a summary of the design of Per-Partition Automatic Failover as well as the requirements which drove our design choices. We introduce the “Failover Manager” which implements the distributed state machine functionality for enabling horizontally scalable, decentralized per-partition failover.
# 3.1 Requirements
We began the process of implementing this feature by starting from these high-level requirements:
1) Cosmos DB must detect and recover from partition/regional failures in under 2 minutes.
While any availability loss is undesirable for a customer, the core architecture of Cosmos DB is based on a lease management system. Using these schemes requires accepting the potential for availability losses on the order of the lease duration. This requirement is defined on a perpartition basis, but it is possible that many partitions may be simultaneously impacted. For example, a complete power loss to a data center may impact hundreds of thousands of partitions. The failover of these partitions must all complete within the desired time period.
# 2) The current set of account features must be supported.
The feature set of Cosmos DB contains things such as: multiple consistency levels, change feed, backup and restore, dynamic capacity management, auto scaling, automatic indexing, adding and removing regions. These features must continue to work correctly in the presence of Per-Partition Automatic Failover. Notably, global strong consistency’s defining characteristic is that there is never observable data loss for acknowledged write operations; other consistency levels permit some level of data loss for unrecoverable regional outages, but the advertised limits of allowable data loss must be respected.
3) The Per-Partition Automatic Failover feature must be compatible with the existing replication protocol, operational protocols and network topologies.
Cosmos DB contains millions of lines of code. We deemed that reimplementing our replication protocols would be overly expensive and risky to the product. There are many operational protocols that Cosmos DB supports that must also continue to work. These operational protocols include partition-level migrations, splits, and merges, the ability to add and remove regions to and from database accounts, our internal configuration management systems, and our internal monitoring and troubleshooting systems. All these internal sub-systems have assumed a singular write region at account level so far; going forward, they will have to align with partition level write region going forward.
# 4) Customer latency requirements should be respected.
Cross-region request processing latency is higher than intra-region request processing latency due to network transit times. Customers typically organize their Cosmos DB accounts so that write traffic is transmitted within a single region, from the customer’s application hosted in Azure to the Cosmos DB partitions in that region. Customers usually prefer latency losses over availability losses, but when lower latency is possible, it should be prioritized.
5) The feature must be transparent to users not using direct mode [24], but can require client SDK updates by customers using direct mode.
The Cosmos DB SDK has two modes: direct mode and gateway mode. Direct mode creates TCP connections directly to the backend hosts performing data operations. Gateway mode connects to the frontend gateway machine, which then uses the Cosmos DB SDK in direct mode. New features that affect the network design of the Cosmos DB SDK require customers to begin using that new version of the SDK. Other application-level changes might also be required, but it is desirable to limit these.
# 3.2 High-level design
We chose a design that would be minimally invasive to Cosmos DB, identifying integration points where the feature would operate. We defined a state machine that drives behavior changes. This state machine is backed by a separate highly available store using the CAS Paxos protocol to coordinate updates. State machine changes trigger existing behaviors in the backend processes. We call the backend component that implements this behavior the “Failover Manager”.
Replicas in each partition perform state updates to transition a partition-set from state to state. Each state update performs a CAS Paxos round as a leader, using the state machine transition function as the edit function to apply. The result of updating the state machine is then translated into actions for that replica to apply to its local runtime state. Example actions are:
To begin acting as a write region primary replica. To begin acting as a read region XP secondary replica. • To stop accepting new write traffic in preparation for a graceful failover.
# 4 THE FAILOVER MANAGER STATE MACHINE
At the core of Per-Partition Automatic Failover is the new “Failover Manager”, which executes a deterministic state machine for each partition to coordinate and execute failover state transitions at partition-level granularity.
This section discusses the Failover Manager state machine approach and discusses the implementation of a novel, highly available store based on CAS Paxos that the Failover Manager uses to store state machine state.
# 4.1 State machine vs. workflow approach
Using a state machine approach for geo-failovers is a change for Cosmos DB. Cosmos DB currently uses an internal control plane to manage accounts, partitions, and partition-sets. This control plane executes “workflows” that coordinate backend operations to effect state changes. The Cosmos DB Control Plane, that orchestrates account-level geo failover, is built upon workflows that execute sequences of operations that each contain internal persisted state. In contrast, a state machine approach defines a set of persisted state data and formally specifies the allowable state changes. These workflows impose limits on Cosmos DB’s release process, runtime reliability, and scalability.
Persisted workflow state is generally not backward and forward compatible across software releases. This is a problem that systems such as Windows Workflow Foundation attempt to address [6], but those solutions are quite complex to manage and sometimes do not address all possible changes: Windows Workflow Foundation does not attempt to migrate the state associated with running actions. The state machine’s formally specified data allows us to guarantee that state information is forward and backward compatible between software versions, allowing software releases during state transitions.
Workflows must reach terminal states, either by success or failure. When a terminal state is reached via failure, the system may be in an arbitrary state, which then requires logic in the next workflow execution to recover from these arbitrary states. This methodology is prone to error and leads to availability losses for customers. The formally specified state machine has no terminal states, ensuring that availability is always eventually restored.
The Cosmos DB Control Plane is a relatively small number of physical machines compared to the backend clusters that host the data partitions. The control plane faces scalability limits when large numbers of accounts need to be failed over. Driving state machine changes directly on the backend machines allow scaling the volume of control operations directly with the backend machine count, removing the scaling limits of the control plane.
These workflows can be limited in their capabilities in the face of multiple simultaneous operations. Each workflow typically obtains a lock on its subject partitions, this lock being released near the end of the workflow. While the workflow is executing, no other workflow can acquire the lock. This prevents, for example, a failover workflow from executing at the same time as a partition split workflow on the same set of partitions. The state machine approach allows more arbitrary types of state changes, eliminating these simultaneity restrictions.
# 4.2 Distributed state machine execution
We desired that the Failover Manager component be able to make state transitions in a distributed fashion using the machines the partitions are hosted on to make decisions. This desire was driven by requirements of resilience and simplicity. Deploying a new service, with new RPC schemes, entails increasing the exposure to new failure modes. Making such a service scalable entails scaling the service along with the size of Cosmos DB. Making the service resilient to entire region failures means having multiple instances of the service; this service then needs to be able to coordinate its own state updates somehow, therefore requiring distributed protocols for executing state transitions.
Instead, we decided that the distributed protocol for executing state transitions would live directly in the backend service to be highly scalable with the size of the backend partition fleet. Each state transition accepts an input from the partition performing the state transition and performs a change to the persisted state. The newly persisted state can then be translated into a set of local actions to take.
The change to the persisted state is executed as a compare-andswap operation. Each state change performs this algorithm:
1) Compute a “report” with the local status of the partition.
2) Read the current persisted state machine value and its version number.
3) Perform an edit operation using the state machine value and the report value as inputs and produce a new state machine value.
4) Perform a compare-and-swap operation on the persisted state machine value with the new state machine value, using the version read in Step 2 as the comparison value. a. If the compare-and-swap fails, go to Step 2.
The Failover Manager component executes infrequently: each primary replica in each partition performs a state update every 30 seconds. The probability of a conflict during the compare-and-swap operation in Step 4 is relatively low (but very much non-zero, which we discuss in the Experimental Results section).
# 4.3 CAS Paxos
The Failover Manager state machine’s state must be persisted somewhere. This state persistence store must be highly available, consistent, resilient to failures, and be able to store a mutating value over time. We settled on CAS Paxos [1] as the ideal mechanism to provide availability, consistency, and mutability. The title of the CAS Paxos paper describes our reasons for this choice very succinctly: “CASPaxos: Replicated State Machines without logs”. We desired a single replicated state machine for each partition-set, replicated across a globally distributed set of locations, where each update operation could be guaranteed to be considered by all future update operations, with minimal extra state management.
# 4.3.1 State machines
We implemented CAS Paxos in a multi-layer approach. The lowest layer transliterates the state machine operations described in the $\mathrm { T L A ^ { + } }$ [11] descriptions of Paxos [7] and CAS Paxos [8]. This layer contains classes for the roles defined in Paxos.
class LeaderStateMachine
{ // The resulting Phase1aMessage should be sent to all // acceptors. StartPhase1Result StartPhase1( const Nullable<NakMessage>& message $= \{ \}$ ); // For each Phase1bMessage from an acceptor, call this // method. If the result is empty, then keep receiving // messages from acceptors. If the result contains a // Phase2aMessage, send it to all acceptors and begin // trying to learn the result from the resulting // Phase2bMessages and a LearnerStateMachine. template<typename TValueEditor> StartPhase2Result StartPhase2( const Phase1bMessage& message, TValueEditor valueEditor);
};
class AcceptorStateMachine
{ AcceptorStateMachine( AcceptorState acceptorState); // Upon receipt of a Phase1aMessage: // Call this method to update the acceptor's state, // Persist the state, then send the resulting // Phase1bResult back to the leader. Phase1bResult OnReceivedPhase1a( const Phase1aMessage& message); // Upon receipt of a Phase2aMessage: // Call this method to update the acceptor's state, // Persist the state, then send the resulting // Phase2bResult back to // the learners. Phase2bResult OnReceivedPhase2a( const Phase2aMessage& message); const AcceptorState& GetAcceptorState();
};
class LearnerStateMachine
{
public: LearnerStateMachine( TQuorumCheckerFactory quorumCheckerFactory, LearnerState learnerState); // Upon receipt of a Phase2bMessage, attempt to // learn from it. // The LearnResult may be empty if no value is learned, // or a valid value if a value is stably learned. LearnResult Learn( const Phase2bMessage& message); const LearnerState& GetLearnerState() const;
};
Writing this layer in this way ensured that we made no errors in translation; indeed, we’ve found no bugs in our CASPaxos state machine implementation.
The second layer implements message transmission and acceptor state storage using our application-level logic. This layer performs all three roles (Leader, Acceptor, and Learner) inside a single process, using external storage to persist the serialized acceptor state. Races to update the acceptor state storage are resolved by performing acceptor state machine changes using a compare-and-swap algorithm similar to that used for the Failover Manager state machine: failure to perform the compare and swap causes a re-read of the acceptor state, a re-application of the acceptor state machine to the message and state, and a retry of the compare-and-swap operation.
# 4.3.1 Choice of storage for acceptor state
We explored several options to store the CAS Paxos acceptor state. The requirements for this store are:
1) The store must be at least as low in the Azure stack as Cosmos DB.
2) The store must support a compare-and-swap operation on complex document content.
3) The store must be geographically distributed.
4) The store must be able to scale up to the workload imposed by this feature executing across all partitions in Cosmos DB during a regional outage.
Coincidentally, Microsoft Azure has an offering that meets these requirements called “Cosmos DB”. We configure a set of geographically distributed non-replicated Cosmos DB accounts to act as the acceptor state to back all partitions globally. We use the ‘If-Match’ HTTP header to perform acceptor state updates atomically [9]. Cosmos DB implements horizontal scaling, allowing the acceptor state storage to scale to support all replicated partition-sets in Cosmos DB. This might appear to create a circular dependency from Cosmos DB onto Cosmos DB. This is illusory, as we are only creating a dependency from the Cosmos DB crosspartition replication stack onto a non-cross-partition replicated Cosmos DB account. Our choice of the actual storage provider is flexible enough that if this decision needs to be revisited, we can do so with relative ease.
We explored using a dynamically managed quorum using the data partitions themselves to back the acceptor state. This leads to complexities performing management of quorum sets in the face of adding and removing regions. This problem is solved and is described in [17][18][19]. One problem left unsolved in that approach is the 2-region scenario: the most inexpensive solution for customers to maintain high availability in the face of a region outage is to have two regions in an account. Using solely the partitions backing a 2-region account would disallow state changes in the case one region fails. It is possible to extend a quorum using a single extra external store. Windows Failover Cluster has a Cloud Store witness feature [10]. Rather than burden ourselves with these complexities in the initial implementation, we decided to begin implementation using a set of statically configured stores for acceptor state.
# 4.4 Failover Manager state machine
We specified the Failover Manager state machine using $\mathrm { T L A ^ { + } }$ and verified it using TLC, the $\mathrm { T L A ^ { + } }$ checker [11]. This allowed us to explore the behavior of the state machine in millions of scenarios in just a few minutes and allowed us to verify with high probability some properties of the state machine and its interaction with Cosmos DB consistency levels. Translating the state machine into $\mathrm { C } { \mathrm { + + } }$ was then a trivial exercise.
The properties we desired of the state machine were:
Absent the presence of repeated failures and failures greater than that required for maintaining quorums for each consistency level, it must always be true that a partition becomes available for read and write within a constant multiple of the replication lease expiration interval.
When data is available to be read or written by a partition in a partition-set, the desired consistency level is maintained.
We did not have to apply the real-time methodologies described by [22 Ch. 9 “Real Time”], as we treated time as discrete quantity that ticks once per replication lease interval.
To avoid lengthy TLC verification using temporal properties, we used a scheme where certain variables were recorded in a special state history variable. We then verified invariants such as the ‘WritesEnabledAtEndOfHistoryWhenRegionsSetIsStable’ and ‘ReadProperty’ as seen in Figure 5.
WritesEnabledAtEndOfHistoryWhenRegionsSetIsStable $\scriptstyle = =$ /\ IF IsEnoughHistory(RegionStateHistory) THEN LET lengthOfFullHistory $\scriptstyle = =$ Len(RegionStateHistory) lastRecentHistoryEntry $\scriptstyle = =$
RegionStateHistory[lengthOfFullHistory] recentHistoryEntries $\scriptstyle = =$ SubSeq( RegionStateHistory, lengthOfFullHistory - NumberOfHistoryTicksToLookback $^ \mathrm { ~ + ~ 1 ~ }$ , lengthOfFullHistory) IN IF /\ IsRegionSetStableInHistory(recentHistoryEntries) /\
IsUserPreferredWriteRegionStableInHistory(recentHistoryEntries) THEN /\ IsCurrentWriteRegionTakingClientWrites ELSE TRUE ELSE TRUE
ReadDataOnRegion(region) $\scriptstyle = =$ /\ $\setminus /$ RegionCurrentServiceStatus[region] $\mathbf { \tau } = \mathbf { \tau }$ ReadOnlyReplicationAllowed \/ RegionCurrentServiceStatus[region] $\mathbf { \tau } = \mathbf { \tau }$ ReadOnlyReplicationDisallowed \/ RegionCurrentServiceStatus[region] $\mathbf { \sigma } = \mathbf { \sigma }$ ReadWrite \/ RegionCurrentServiceStatus[region] $\mathbf { \sigma } = \mathbf { \sigma }$ ReadWriteWithWritesQuiesced /\ RegionCurrentBuildStatus[region] $\mathbf { \sigma } = \mathbf { \sigma }$ BuildCompleted /\ RegionLatestCommittedUptoLSN[region] $\mathbf { \sigma } = \mathbf { \sigma }$ RegionGCLSN[region] /\ LastReadData' $\mathbf { \sigma } = \mathbf { \sigma }$ [Region $| \ l _ { - } >$ region, Data $| \ l \to$
RegionCommittedDataAtLSN[region][RegionLatestCommittedUptoLSN[region]]]
ReadProperty $\scriptstyle = =$ [][ LastReadData'.Data $> =$ LastReadData.Data $] _ { - } < <$ LastReadData $> >$
# 4.5 Failover modes – Graceful and Ungraceful
We implement two failover modes: graceful and ungraceful. Each partition periodically updates the Failover Manager state machine. When the current write region partition continually fails to do so, an ungraceful failover is triggered. The Failover Manager state machine waits for a defined quorum of partitions to report state after the determination to perform an ungraceful failover has occurred, then chooses a new write region based on a user-defined priority list and based on examining the highest reported progress of all regions reporting; the highest priority region that shares the highest progress is then chosen. At any time, any region out of a defined quorum that is providing state updates and has the highest progress in that quorum can be selected. In consistency models weaker than “Global Strong” consistency, this can result in data loss; this data loss is accepted by the customer having chosen a weaker consistency model. We attempt to minimize this data loss by choosing the region with the highest progress after waiting a short period of time for regions to report their progress.
This model, instead of a workflow driven model, ensures that there is always a region available to fail over to in case the selected region also fails. This property was verified by TLC.
It is generally desirable for customers to access data in the region
as close to the customer’s application as possible. Regions are
arranged by users in priority order. When a higher priority region
becomes available to become the write region, the Failover
Manager state machine begins performing a graceful failover to that region. A graceful failover suspends accepting writes for a short period of time, waits for all traffic to finish replicating to the new write region, then enables writes at that new write region. Users can change their priority list at any time; whenever there is a mismatch between the priority list and the current state of a partition-set, the Per-Partition Automatic Failover feature performs a graceful failover.
The process of a graceful failover itself can fail, either because the source or destination partition fails. When this happens, we simply initiate a new ungraceful failover. We detect this by a simple check: if too much time has passed while a graceful failover is ongoing, we perform an ungraceful failover. The state machine encodes this behavior.
There is a potential for degenerate behavior: it is possible for the graceful failover target to become responsive again, to be chosen as a graceful failover target, and for this graceful failover to again fail, leading to a loop where graceful failovers are continually attempted. A simple exponential backoff strategy on graceful failovers solves this problem. The count of the number of unsuccessful graceful failovers is stored in the state machine, along with the last time one was attempted. Graceful failovers are disallowed until an appropriate time. Without this fix, the degenerate behavior will cause a continuous outage. With this fix, the degenerate behavior affects the customer with increasing rarity; the customer will see outages, but they will rapidly decrease in frequency due to the exponential backoff.
There is another potential degenerate behavior: in a loop, a graceful failover can succeed, then destination region fails, and an ungraceful failover happens. We will amend our implementation to account for this by requiring exponentially increasing amounts of “live” time for a graceful failover target.
# 4.6 Dynamic Quorum
Historically, Cosmos DB implemented Global Strong consistency using a typical strict majority quorum scheme. A strict majority of regions would be required to acknowledge a write operation before the write operation could be acknowledged to the user. To enforce read consistency, all regions with an active “read-lease” must acknowledge that a write operation has taken place, even if the region has not yet committed the write. If a region does not respond, eventually its read-lease is terminated.
This scheme is problematic with two regions: the only strict majority in 2 regions is both regions, implying that availability is lost if either region fails. Even three region cases present issues: in the presence of a regional outage, it is possible for a single partition in a second region to fail which would lead to loss of write availability for that partition-set.
In our experience, customers typically prioritize availability in such situations and would prefer to maintain availability when there is only one available copy of their data. They do this with the expectation that yet another cascading failure is rare and expect that, upon recovery, they will eventually be able to make use of all replicas again, maintaining strong consistency in the meantime.
We therefore desired a protocol where any number of partitions in a partition-set can fail, gracefully degrading the number of active partitions. Prior art for this exists in e.g. Windows Failover Cluster [12]. We record the current set of read-leases in the Failover Manager State. When a partition must have its read-lease revoked, we first consult the Failover Manager and request permission. Permission is denied if the number of remaining read-leases (including the implicit write region’s lease) would decrease below the user’s configured minimum durability. Upon a failover, any partition that had an active read-lease can be chosen as the failover target.
With this scheme, a user can configure a two-region account with minimum durability 1. If either region’s partition fails, the remaining active region’s partition can take over, remove the failed region’s partition from the set of active read-leases, and continue operating. When replication resumes and the previously failed partition begins acknowledging write operations, it can be re-added to the set of active read-leases, then re-added to the Failover Manager state, and it again becomes a potential failover target again.
# 5 INTEGRATION AND DEPENDENCIES
The backend partition is only part of the failover story. The capability of the backend service to fail over is irrelevant if customers cannot successfully use the system during and after such an event. Cosmos DB internally must also manage partition-sets when they are in failed-over states.
Our guiding principle here is that user-visible behavior must work correctly and transparently to users, but background system maintenance operations are permitted to be impacted until normal behavior is restored.
# 5.1 Client failover and SDK integration
The account-level failover features that Cosmos DB offers today are detected by the Cosmos DB SDK through DNS updates. The Cosmos DB SDK periodically re-resolves the DNS endpoint for the current write region. An account-level failover updates this DNS entry during the workflow that migrates the current write region for all partition-sets in the account to another region. This model has several weaknesses.
Updating a DNS entry requires that the DNS Time-To-Live field be respected at all intermediate DNS resolvers. Any customer, ISP, etc., who has updated their DNS client or intermediate DNS resolver to ignore the TTL field can cause DNS resolution to continue to return the old value [13].
Even before Per-Partition Automatic Failover existed, it is seen that during an account-level failover each partition-set is in different states at different times. During an account level failover, each partition-set transitions the current write region relatively independently from each other, though a trigger from the Control Plane. A client attempting to write by using the “current” cached write region for the account will find that certain migrated partition-sets will not accept writes. The client can attempt other regions, but prior to the Per- Partition Automatic Failover, the implementation did not have a cache of the current write region on a per-partition-set basis.
Finally, updating DNS records when there is already an issue at hand introduces more moving parts into restoring availability. Entire region failures might also cause the ability to write to Azure DNS to fail. Relying on more services to update data increases the surface area for failures.
We decided to use a single DNS TXT record per account to store each of the account’s regional endpoints and their priorities. This information is written during account provisioning, when users add or remove regions to the account, or when users change region priority settings. During failover situations, no DNS updates are performed. The client detects failures to perform operations and attempts using other region’s partitions as specified by the TXT record.
This new model exposed a flaw in error handling in the client. Previously, certain errors were not deemed to be retriable because (absent a DNS update) the request would be certain to fail. With Per-Partition Automatic Failover, these errors must always be interpreted to mean that the current write region for the partitionset is unavailable, and other regions should be tried: no DNS update will be forthcoming, and so the only evidence the client SDK can use to decide to try other regions is the error having been returned. Absent other evidence, every error becomes evidence of the need to try other regions. This evidence is collected into a per-partitionset cache, and regions are tried in order of the likelihood of success.
# 5.2 Control Plane and Capacity Management integration
The Cosmos DB Control Plane controls the horizontal scale and physical placement of partitions on hardware. Typical operations performed by the control plane include splitting partitions, migrating partitions, and adding a region to an account. The control plane also manages “Disaster Recovery” (DR) which is the process of reacting and responding to large-scale outages by forcibly failing over the write-region for every partition-set in one or more accounts away from an unhealthy region to a healthy region (during a large outage there may be 1000s or even 10,000s of accounts impacted). What these operations have in common is that they are all executed by the Cosmos DB Control Plane using account-level or partitionlevel locks. The various control plane workflows are written with the implicit and sometimes explicit assumption that they are being executed in exclusivity.
With Per-Partition Automatic Failover, this assumption no longer holds true as failovers are now initiated and executed in a decentralized fashion, without any synchronization or involvement from the control plane. While we could have decided to integrate Per-Partition Automatic Failover with the control plane for centralized management, this would have been contrary to our goals of horizontal scaling and high availability. The significant investment we have in the existing control plane operations prevented us from doing a large-scale refactoring of the existing workflows.
Through carefully reviewing most of the control plane workflows we identified the “account topology” to be the resource that required synchronization. The topology resource contains the current write-region for an account as well as the Global Configuration Number (GCN) which is the high-order bits for the partition-set’s epoch which is incremented during failovers as well as for certain control plane operations. Per-Partition Automatic Failover introduced four challenges:
1) There is no longer the concept of an account-wide writeregion in the topology; each partition-set’s topology, at any point in time, may have a different write-region.
2) Changes to the topology are no longer protected by an account-wide lock.
3) The Failover Manager State is now the source of truth for the GCN and current write-region, so any changes to the topology need to be reconciled against this state.
4) Control plane metadata updates lost in the event of a concurrent failover, leading to inconsistent state between control plane and data plane.
The approach we took to fix issues (1) and (2) was to rely on optimistic concurrency control; any update to the topology is required to do a compare and swap operation. Failure by the control plane to perform an update would require a retry or alternatively rollback and cancellation. To fix (3) we introduced the concept of an “topology upsert intent”; rather than directly modifying the partition topology, a control plane workflow expresses an “intent” to do so, e.g. to revoke write-status for a partition, which the Failover Manager attempts to carry out by executing a full CAS Paxos round to update the Failover Manager State according to the desired intent. The control plane workflow in turn monitors whether the intent gets honored and needs to act accordingly, by either completing the operation, retrying, or canceling with rollback. (4) was addressed by introducing strong consistency semantics for metadata writes and reads; this ensures that an acknowledged write from the control plane gets persisted across failovers.
This approach only imposed limited additional complexity on the existing workflows since they already needed to handle cancellation and rollback scenarios. With Per-Partition Automatic Failover, we introduced the possibility that e.g. a partition migration operation would need to be cancelled and rolled back in case a failover for the partition happened while the migration was in progress. However, it is very likely that the partition migration would have had to be rolled back anyways (most likely due to a timeout) as the reason for the failover in the first place was due to the partition being “unhealthy”.
# 5.3 Replication
One of our key design principles and goals was not having to rewrite the Cosmos DB core replication protocol; largely we wanted to treat replication as a “black box”. Our core replication protocol handles both in-region replication as well as cross-region replication and a rewrite would have increased both the scope and the risk of the project. With a few exceptions, we were able to accomplish this goal. This section shares insight into a few issues we ran into.
# 5.3.1 Partition reuse
Prior to Per-Partition Automatic Failover, bringing a partition back up following an outage entailed a full “reseed”, i.e. wiping all the data from the partition and copying all of the data from the current write-region partition. This could take hours depending on the amount of data in each partition-set.
With Per-Partition Automatic Failover, our target was to reduce the failback duration to seconds or minutes (approximately proportional to the length of the outage). To accomplish this, we needed to be able to reliably determine the logical sequence number (LSN) in the old write-region up to which we needed to retain the data; any LSNs above this number would either need to be discarded or reconciled depending on consistency level and user preferences. We refer to this data as “false progress”. To address this issue, we had to extend the replication protocol with a new dedicated “progress table” which tracks the LSNs written in each epoch. Using the progress table allowed us to undo any false progress as part of the failback process; this is needed to ensure consistency across partitions in the partition-set. It also enables us to only copy the delta of writes written to the new write-region during the duration of the outage.
# 5.3.2 Decentralized Cross Regional Leadership Election
Account level Geo-Failovers used to be coordinated by our centralized Cosmos DB Control Plane. This was a scaling bottleneck and a single point-of-failure. On the plus side, the control plane had a “global view” of each partition-set; by having the control plane assign and revoke leadership to a given region we were able to reduce the likelihood of “split brain” scenarios where more than one partition believes itself to be the write-partition.
By moving to a decentralized, distributed state-machine-based leader election, we introduced more scenarios where - e.g. during a network partition - multiple partitions may think they are the leader. Additionally, with Per-Partition Automatic Failover, partition-level failovers are no longer limited to infrequent large outages. In combination, this exposed previously unknown issues in our replication protocol.
Except for a couple of cases where we needed to make intrusive changes to the existing core replication protocol, we were able to leverage the Failover Manager and its Failover Manager State as the source of truth when it comes to cross-regional, geo leadership. This allowed us to accomplish our goals of minimizing changes to the existing core replication protocol.
# 6.1 Partition Failover
In this section, we go over the results from an exercise where we performed three power outages in a region hosting $^ { 4 , 3 0 0 + }$ writeregion partitions. Each power outage lasted for thirty minutes. The results demonstrate Per-Partition Automatic Failover’s ability to restore write-availability within our Recovery Time Objective goal of two minutes. The results also demonstrate how, following power being restored, we are able to quickly failback to the original, preferred write-region.
# 6.1.1 Account Failover setup
To demonstrate the results from the experiment, we set up a 3- region account with the topology according to Table 1.
Table 1: Account topology
Table 2 lists the timestamps when each power outage was initiated. For each power outage, we brought down all the data plane nodes in East Asia for 30 minutes until we restored power again.
Table 2: Power Outage Simulation Timestamps
Figure 6 shows how write-availability is maintained as the writeregion for all partition-sets switch from East Asia to Southeast Asia across the experiment.
# 6.1.2 Account Success Rates
SuccessfulWrite RequestsbyRegion
# 6.1.3 Partition Recovery Statistics
Across the three power outages, we recorded availability recovery time statistics for all partition-sets. Figure 7 demonstrates that availability is restored within less than 2 minutes for every partition-set, with the majority of the partition-sets showing recovery within a minute. Note that the Cosmos DB SDK implements client-side retries, so the customer measured availability loss is significantly less than the service-side metrics recorded in this graph imply.
Figure 6: Write throughput persists amidst power outages
Figure 7: Partition availability restoration time
# 6.1.1 Outage recovery detection durations
Figure 8: Recovery detection time captures the time taken across all partition-sets to automatically detect recovery from a power outage in the preferred write region (East Asia), as partitions reach a healthy state for initiation of catch-up and graceful failover process. As seen below, the majority of the partition-sets detect this in 1 minute or less.
Count of Partitions by Power Outage Recovery detection duration buckets
Figure 8: Recovery detection time
# 6.2 CAS Paxos
A key challenge in implementing decentralized Per-Partition Automatic Failover using CAS Paxos in Cosmos DB is the occurrence of dueling proposers. This phenomenon arises when multiple partitions in a partition-set simultaneously initiate CAS Paxos rounds, leading to conflicts that hinder efficient consensus.
Although dueling proposers is a well-known issue in consensus literature [14], its resolution typically involves designating a single distinguished proposer to serialize state updates. CAS Paxos, however, is inherently leaderless; any partition in a partition-set may propose state updates. Introducing a single distinguished proposer would undermine the decentralized design that is central to autonomous failover and reduced system dependencies.
To preserve CAS Paxos’s leaderless property, we investigated more effective ways to handle NAK (negative acknowledgment) messages. A common industry practice is to employ exponential backoff with jitter, where the retry delay grows exponentially with each attempt, and random jitter offsets synchronous retry collisions. This approach is often summarized by a formula of the form:
where 𝑎𝑡𝑡𝑒𝑚𝑝𝑡 is the retry attempt count and $\delta$ is the base delay. While effective in many distributed systems, selecting a suitable base delay in heterogeneous network environments can be problematic. For instance, round-trip latencies between a user region in West US and an acceptor store in East Asia may reach a P50 latency of $1 5 0 ~ \mathrm { m s }$ (see [15]). An optimal base delay in one region may be too short, or too long in another, either causing frequent conflicts or prolonging proposal times unnecessarily.
This variability motivated the development of an adaptive scheduling mechanism that dynamically adjusts delays based on real-time performance metrics. The goal was to optimize both latency and conflict reduction without violating the decentralized leadership principle of CAS Paxos.
# 6.2.2 Simulation Strategy
To analyze the impact of dueling proposers and evaluate our solutions, we built a custom discrete-event simulation framework. This simulator models message timing, network latencies, and consensus attempts, enabling us to replicate realistic distributed system behavior. Because the simulation is discrete-event based, we can compress years of system operation into a manageable timeframe, making it possible to study rare or edge-case scenarios.
We configured the simulator with parameters representative of a production environment and executed a series of experiments, each simulating one hour of operational time. During each experiment, we recorded metrics on both successful and failed proposer rounds as well as overall failure rates.
# 6.2.3 Improved Approach (Adaptive Scheduling and Time-Division Multiplexing)
To address the limitations of a static exponential backoff, we introduced an adaptive strategy that refines backoff intervals based on empirical data. We track the duration of the successful Phase 2 (accept) stage, computed as:
$$
D _ { \{ p h a s e 2 \} } = \ T _ { \{ p h a s e 2 b _ { e n d } \} } - \ T _ { \{ p h a s e 2 a _ { s t a r t } \} }
$$
where $T _ { \{ p h a s e 2 a _ { s t a r t } \} }$ is the timestamp at the start of Phase 2a and $T _ { \{ p h a s e 2 b _ { e n d } \} }$ marks completion of the Phase 2b phase. These durations are added as input to an exponential moving average (EMA) and standard deviation calculation, updated online using Welford’s algorithm. We store these statistics in the proposed value itself, ensuring consistency across distributed nodes.
When a proposer receives a NAK, it applies a delay calculated by:
$$
\tau _ { \{ N A K \} } = \left( \mu _ { \{ E M A \} } + \ \sigma \right) \cdot R a n d o m U n i f o r m \left( 0 , 2 ^ { \{ a t t e m p t - 1 \} } \right)
$$
where $\boldsymbol { u } _ { \{ E M A \} }$ is the EMA of successful phase 2 durations, $\sigma$ is the standard deviation, and 𝑎𝑡𝑡𝑒𝑚𝑝𝑡 is the retry attempt count. This statistically informed delay allows sufficient time for ongoing Phase 2 operations to conclude, reducing dueling conflicts.
In addition to adaptive backoff, we introduced time-division multiplexing to the Failover Manager’s scheduling logic. Rather than adding a random jitter to every proposer’s schedule, each proposer references the duration of the most recent successful proposal (excluding conflicts) and shifts its next proposal start by:
$$
\begin{array} { l l } { { { \cal D } _ { \{ p r o p o s a l \} } = { \cal T } _ { \{ p r o p o s a l _ { e n d } \} } - { \cal T } _ { \{ p h a s e 1 a _ { s t a r t } \} } } } \\ { { \tau _ { \{ n e x t \} } = { \cal T } _ { \{ i n t e r v a l \} } - { \cal D } _ { \{ p r o p o s a l \} } } } \end{array}
$$
where $D _ { \{ p r o p o s a l \} }$ is the observed time from Phase 1 start to proposal completion, and $T _ { \{ i n t e r v a l \} }$ is the fixed proposer run interval. By adaptively spacing proposals, each proposer reduces its likelihood of colliding with an ongoing round, improving efficiency and success rates.
# 6.2.3 Results
To ensure a comprehensive evaluation, we conducted 10,000 simulations, each representing one hour of operational time. To capture realistic network conditions, we introduced randomly assigned latencies and heterogeneous network characteristics, ensuring a broad range of failure scenarios and proposer contention levels.
The evaluation compared the improved approach, described in the previous section, against our initial implementation. In the initial implementation, conflict handling was based on an exponential backoff with a statically configured base delay, while state update scheduling relied on random jitter to reduce contention. However, as shown in the results, this approach often proved inefficient, as proposers could still initiate conflicting rounds, leading to increased failure rates and prolonged consensus times.
Each simulation included seven acceptors, mirroring production system characteristics. The lease enforcer timeout was set to 45 seconds, and proposers attempted state updates every 30 seconds. To assess system performance under varying contention levels, we tested configurations with 3, 5, 7, and 9 proposers, representing different geographical user regions.
A proposer successfully updates its state and renews its lease at time T₀. At $\mathrm { T } _ { 1 } \approx \mathrm { T } _ { 0 } + 3 0 \mathrm { s }$ , it attempts another update. If conflicts prevent completion of Phase 2 of Paxos, the proposer retries. A failure occurs when no successful update is performed within the lease enforcement window, meaning that at $\mathrm { T } _ { 2 }$ , where $\mathrm { T } _ { 2 } \mathrm { ~ - ~ } \mathrm { T } _ { 0 } \geq 4 5 \mathrm { s }$ , the lease is lost.
While additional system metrics were collected, the primary focus of this evaluation is the failure rate, as it directly reflects the system’s ability to mitigate proposer conflicts.
Figure 9: Failure Rate Reduction improvements with adaptive scheduling and time-division multiplexing
The failure rate comparison between the initial and improved implementations demonstrates a substantial reduction in proposer conflicts, particularly as the number of proposers increases. In the initial implementation, failure rates increased sharply with higher proposer counts, reaching $6 . 4 9 5 0 \%$ at nine proposers.
In contrast, the improved approach maintained consistently low failure rates across all test cases, with a maximum of $0 . 0 0 2 8 \%$ at nine proposers. The adaptive statistics-based conflict handling and structured state update scheduling significantly reduced proposer contention, allowing state updates to complete with minimal interference.
Our findings confirm that the improved approach substantially increases system reliability by reducing proposer conflicts and failure rates. By dynamically managing retries and structuring state updates, the system remains stable even under high contention. These optimizations ensure consistent performance, making this method a viable solution for large-scale distributed environments.
# 7 FUTURE WORK
The Per-Partition Automatic Failover feature is exciting and new, and as with all new features there is more work to do after the initial launch.
# 7.1 Optimizations
Our adaptive backoff strategy currently uses a single, combined exponential moving average calculated for all proposers in a partition-set. We are planning on separating this into using separate per-partition state, where each proposer's statistics are maintained separately. Initial experiments indicate that this strategy further decreases conflicts, especially in scenarios where there are pronounced latency divergence across the regions in a partition-set.
The heartbeating mechanism we implemented to detect failures consumes Cosmos DB resources. It is possible to elide heartbeat updates to the central state when every replica in a partition-set can be assured that all replicas are functioning correctly. This replaces an expensive CAS Paxos heartbeat with a small set of network packets. There is a danger to this optimization: under normal operating conditions, the Failover Manager’s acceptor state stores would be under small amounts of load, which would increase when there has been an error. This dual modality behavior introduces risk compared to a model where the load on the acceptor stores stays constant over time.
As discussed earlier, we considered a model using the user’s partitions as the state store for the Failover Manager. This model would modestly conserve resources, as most user accounts have a small number of regions and state updates would be limited to these regions. This model also eliminates a dependency on the globally selected regions that back the state stores: as implemented, failure to update a majority of the globally selected regions will prevent the correct operation of a partition-set. These models face significant complexity in the face of adding and removing regions.
For our control plane integration, we took a conservative approach in our first iteration by (mostly) cancelling an in-flight control plane operation if “interrupted” by a failover. Going forward, we will take a deeper look into control plane workflows to identify more scenarios where a control plane operation can be resumed following a failover instead of having to be rolled back. For example, there is no reason in theory for an “Add New Region” operation to be rolled back because we failed over the write-region for one or more partitions.
# 7.2 More external signals
The Failover Manager state machine can admit arbitrary external signals to trigger failovers; we merely need to implement them. Signals indicating a lack of successful traffic from the user’s application might be used to trigger failovers. This is common in network outage or misconfiguration scenarios. Integration of application-level monitoring and triggering failover is an area to be explored.
# REFERENCES
[1] D. Rystsov, "CASPaxos: Replicated State Machines without logs," 2018. [Online]. Available: https://arxiv.org/abs/1802.07000.
[2] Microsoft, "Azure Cosmos DB," [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmos-db/. [Accessed 2025].
[3] Microsoft, "Consistency levels in Azure Cosmos DB," 2024. [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels. [Accessed 2025].
[4] Microsoft, "Partitioning and horizontal scaling in Azure Cosmos DB," 11 2024. [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmosdb/partitioning-overview. [Accessed 2025].
[5] Microsoft, "Commonly asked Service Fabric questions," 2024. [Online]. Available: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabriccommon-questions#can-i-create-a-cluster-that-spans-multiple-azure-regions-ormy-own-datacenters. [Accessed 2025].
[6] Microsoft, "Windows Workflow Foundation Programming - Dynamic update," [Online]. Available: https://learn.microsoft.com/enus/dotnet/framework/windows-workflow-foundation/dynamic-update. [Accessed 2025].
[7] L. Lamport, "The Paxos Algorithm," October 2024. [Online]. Available: https://lamport.azurewebsites.net/tla/paxos-algorithm.html. [Accessed 2025].
[8] T. Grieger, "CASPaxos-tla," [Online]. Available: https://github.com/tbg/caspaxos-tla. [Accessed 2025].
[9] Microsoft, "Transactions and optimistic concurrency control," [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/databasetransactions-optimistic-concurrency.
[10] Microsoft, "Deploy cloud witness for a failover cluster," February 2025. [Online]. Available: https://learn.microsoft.com/en-us/windows-server/failoverclustering/deploy-cloud-witness. [Accessed 2025].
[11] L. Lamport, "The TLA+ Home Page," [Online]. Available: https://lamport.azurewebsites.net/tla/tla.html.
[12] Microsoft, "Understanding cluster and pool quorum," February 2025. [Online]. Available: https://learn.microsoft.com/en-us/windows-server/storage/storagespaces/quorum. [Accessed 2025].
[13] D. Lawrence, W. Kumari and P. Sood, "Serving Stale Data to Improve DNS Resiliency," March 2020. [Online]. Available: https://www.rfceditor.org/info/rfc8767. [Accessed 2025].
[14] L. Lamport, "Paxos made simple.," ACM SIGACT News (Distributed Computing Column), vol. 121, pp. 51-58, December 2001.
[15] Microsoft, "Azure network round-trip latency statistics," [Online]. Available: https://learn.microsoft.com/en-us/azure/networking/azure-networklatency?tabs=Americas%2CWestUS. [Accessed 2025].
[16] Microsoft, “Global distribution with Azure Cosmos DB- under the hood Microsoft Learn” [Online]. Available Global distribution with Azure Cosmos DB- under the hood | Microsoft Learn. [Accessed 2025].Conference Short Name:WOODSTOCK’18
[17] L. Lamport, D. Malkhi, and L. Zhou. Vertical Paxos and primary-backup replication. Technical report, Microsoft Research, 2009.
[18] L. Lamport, D. Malkhi, and L. Zhou. Reconfiguring a state machine. SIGACT News, 41(1), Mar. 2010.
[19] B. Liskov and J. Cowling. Viewstamped replication revis ited. Technical Report MIT-CSAIL-TR-2012-021, MIT Computer
[20] Iulian Moraru, David G. Andersen, and Michael Kaminsky. 2013. There is more consensus in Egalitarian parliaments. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles (SOSP '13). Association for Computing Machinery, New York, NY, USA, 358–372. https://doi.org/10.1145/2517349.2517350
[21] Microsoft, “Consistency level choices - Azure Cosmos DB | Microsoft Learn” [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmosdb/consistency-levels
[22] Lamport, Leslie. Specifying Systems : the TLA $^ +$ Language and Tools for Hardware and Software Engineers. Boston :Addison-Wesley, 2003
[23] Fazlali, Mohammad Reza et al. “Raft Consensus Algorithm: an Effective Substitute for Paxos in High Throughput P2P-based Systems.” ArXiv abs/1911.01231 (2019)
[24] Microsoft, “Azure Cosmos DB SQL SDK connectivity modes,” [Online]. Available: https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/sdkconnection-modes
[25] Leslie Lamport. 1998. The part-time parliament. ACM Trans. Comput. Syst. 16, 2 (May 1998), 133–169. https://doi.org/10.1145/279227.279229 | Azure Cosmos DB is a cloud-native distributed database, operating at a
massive scale, powering Microsoft Cloud. Think 10s of millions of database
partitions (replica-sets), 100+ PBs of data under management, 20M+ vCores.
Failovers are an integral part of distributed databases to provide data
availability during outages (partial or full regional outages). While failovers
within a replica-set within a single region are well understood and commonly
exercised, geo failovers in databases across regions are not as common and
usually left as a disaster recovery scenario. An upcoming release of Azure
Cosmos DB introduces a fine grained (partition-level) automatic failover
solution for geo failovers that minimizes the Recovery Time Objective (RTO) and
honors customer-chosen consistency level and Recovery Point Objective (RPO) at
any scale. This is achieved thanks to a decentralized architecture which offers
seamless horizontal scaling to allow us to handle outages ranging from
node-level faults to full-scale regional outages. Our solution is designed to
handle a broad spectrum of hardware and software faults, including node
failures, crashes, power events and most network partitions, that span beyond
the scope of a single fault domain or an availability zone. | [
"cs.DB",
"H.2.4; H.2.7"
] |
# 1 Introduction
Inference-time scaling has emerged as a critical technique for enhancing the reasoning capabilities of LLMs [2, 18, 20, 21]. Methods such as Chain-of-Thought (CoT) [20] explicitly guide models through intermediate reasoning steps, while recent models like DeepSeek-R1 [8] and QWQ [17] incorporate reasoning capabilities implicitly during training. However, recent studies have shown that these models often suffer from excessive reasoning behaviors, such as frequent shifts in reasoning strategies or redundant processing, which can lead to substantial computational overhead [6, 19]. These inefficiencies introduce a novel security risk: the attacker can craft adversarial inputs to exploit them, significantly inflating inference-time resource usage.
Prior work has explored related threats in both language models and vision-language models (VLMs). For example, Sponge examples [16] increase computational costs by maximizing activation norms, while the NICGSlowdown attack [5] manipulates token logits to delay output generation. Similarly, Gao et al. [10] introduce verbose images to impose high inference latency and computational burden specifically on VLMs. More recently, Kumar et al. [13] propose the Overthink attack, which adopts an indirect prompt injection strategy and inserts a decoy to external resources. This compels the model to allocate additional reasoning resources toward solving an intermediary task before addressing the primary query. In contrast, our method directly perturbs the input to elicit excessive reasoning behavior, increasing computational overhead without degrading task performance or requiring external content. Moreover, our attack aligns with the Model Denial of Service (MDoS) threat as defined by OWASP, wherein adversarial inputs lead to resource exhaustion, degrading system responsiveness and service availability for other users.1
In this work, we introduce the first adversarial attack designed to exploit the reasoning inefficiencies in reasoning LLMs, thus inducing excessive computation during inference. Our approach constructs adversarial suffixes that prompt the model to engage in extended reasoning without compromising model utility. To optimize these suffixes, we propose three novel loss functions that encourage such reasoning behavior:
• Priority Cross-Entropy Loss prioritizes key tokens while masking less informative ones to enhance optimization efficiency. This loss leverages the autoregressive nature of LM to enable more targeted and effective gradient updates.
• Excessive Reasoning Loss increases the likelihood of branched or recursive reasoning, leading to greater computational overhead.
• Delayed Termination Loss encourages the model to defer the termination of reasoning and answer generation.
We optimize and evaluate our attacks for the GSM8K [7] and ORCA [14] datasets on DeepSeek-R1-Distill-Llama and DeepSeek-R1-Distill-Qwen. Our attacks consistently increase the reasoning length by over $3 \mathrm { x }$ to $9 \mathrm { x }$ using only 10 crafted adversarial tokens. Moreover, our attack demonstrates strong transferability across models on commercial platforms, including OpenAI o1-mini and o3-mini [11], DeepSeek-R1, and QWQ, suggesting a broader vulnerability among reasoning-optimized LLMs. These findings expose an underexplored issue. While such models are proficient in reasoning, they remain susceptible to targeted manipulations that exploit their reasoning mechanisms to induce significant computational overhead. Our results underscore the urgent need for inference-time defenses that can detect and mitigate excessive reasoning triggered by adversarial prompts, particularly in real-world deployments.
# 2 Methodology
This section introduces our adversarial attack framework, which aims to increase the computational overhead of reasoning LLMs by inducing excessive reasoning behavior. We first formalize the threat model, then describe the procedure for generating target outputs. Finally, we detail the loss functions that guide the optimization.
# 2.1 Threat Model
The primary objective of our attack is to generate inputs that compel the model to extend the reasoning processes as long as possible, thus significantly increasing the computational cost at inference time. Also, this manipulation should preserve the model’s utility to avoid suspicion. Similar to prior work [3, 4, 10, 22], we assume a white-box scenario in which the attacker has complete access to the model’s architecture, parameters, and gradients.
Following [4], we consider two primary use cases for our attack. In the first use case, a malicious user intentionally induces excessive computational load, degrading overall system performance and diminishing service quality for other users, akin to a DoS attack. In the second use case, a benign user queries the model within an autonomous system that processes untrusted third-party data (e.g., crafted adversarial data), resulting in significantly higher costs (e.g., money) than expected. As we later demonstrate, these crafted adversarial inputs exhibit strong transferability across different models.
# 2.2 Target Output Generation
Constructing an effective adversarial suffix requires defining a target output that can guide the optimization process. Prior work has adopted similar strategies in various contexts. For instance, ATA [9] uses the fixed string “Sorry, I’m unable to answer the question” to mislead the model into generating incorrect answers, while Zou et al. [22] target phrases such as “Sure, I can...” to bypass safety mechanisms and elicit unsafe behavior.
For our attack, a straightforward strategy is to sample multiple outputs from the target model and select the longest one as the optimization target. However, we find that this approach often fails to produce outputs of sufficient length. Another option is to use reasoning-inducing prompts such as CoT, which are designed to elicit a step-by-step reasoning path. Although promising, our experiments show that CoT prompts do not consistently generate longer outputs across various models and datasets.
To further increase target output length, we adopt DSPy [12], a recent prompt optimization framework that iteratively refines instructions to better satisfy a given objective. Specifically, we use a DSPy optimizer to refine CoT prompts on a small dataset with the goal of maximizing output length. The resulting optimized CoT prompts elicit substantially longer responses from the target model and serve as effective targets for crafting adversarial examples. We include the optimized prompt in App. A.1 and report output length statistics for different prompting strategies in Table 6.
# 2.3 Loss Design
To craft adversarial suffixes that trigger excessive reasoning behavior, we propose a composite loss function consisting of three components: Priority Cross-Entropy (PCE) Loss, Excessive Reasoning (ER) Loss, and Delayed Termination (DT) Loss. PCE Loss is designed to reduce optimization difficulty, while ER Loss and DT Loss target distinct aspects of the reasoning process. We detail each component below.
Priority Cross-Entropy Loss. Traditional adversarial attacks on LMs typically optimize a cross-entropy loss to maximize the likelihood of generating a specific target sequence (e.g., “Sure, I can ...”). Formally, given an input token sequence $x = \{ w _ { 1 } , w _ { 2 } , . . . , w _ { n } \}$ , the probability of generating the next token $w _ { n + 1 }$ is defined as:
$$
p ( w _ { n + 1 } ) = p ( w _ { n + 1 } \mid w _ { 1 } , w _ { 2 } , \ldots , w _ { n } ) .
$$
Accordingly, the standard cross-entropy loss used to maximize the likelihood of a target sequence $y$ , conditioned on a base input $x$ and an adversarial suffix $x ^ { \prime }$ , is defined as:
$$
\mathcal { L } _ { \mathrm { C E } } = - \frac { 1 } { | y | } \sum _ { t = 1 } ^ { | y | } \log p ( y _ { t } \mid \{ x , x ^ { \prime } \} , y _ { < t } ) .
$$
In typical adversarial settings, the target sequence $y$ is relatively short, often fewer than 10 tokens. However, to trigger excessive reasoning behavior, we must construct much longer targets (e.g., over 1,000 tokens). Uniformly optimizing over such long sequences is computationally inefficient, as many tokens (e.g., “the” and “I”) can be accurately generated even without the prompt, due to statistical priors learned during pretraining. To investigate this effect, we analyze the loss distribution of a target sequence with and without the input prompt. As shown in Fig. 1, our analysis reveals that only a small subset of tokens exhibits a significant increase in loss when the prompt is removed. This observation supports our hypothesis that informativeness is not uniformly distributed across tokens and that only a subset is highly dependent on the input prompt.
Building on this insight, we introduce a token-level importance mask that emphasizes tokens the model considers informative, thereby improving optimization efficiency. Specifically, for each target token $y _ { t }$ , we compute an importance score as the difference in log-probabilities with and
Figure 1: The perplexity of a reasoning sample in the output with and without the prompt. Bold tokens are assigned 1 in the mask.
without the input prompt:
$$
\operatorname { S } _ { t } = \log p ( y _ { t } \mid y _ { < t } ) - \log p ( y _ { t } \mid x , y _ { < t } ) .
$$
This score captures the degree to which each token’s prediction depends on the presence of the prompt. We then construct a binary mask $\mathcal { M }$ by selecting the top $K \%$ of tokens with the highest importance scores and assigning them a value of 1, while masking out the remaining tokens (assigned 0). The resulting PCE Loss is defined as:
$$
\mathcal { L } _ { \mathrm { P C E } } = - \frac { 1 } { | y | } \sum _ { t = 1 } ^ { | y | } \mathcal { M } _ { t } \cdot \log p ( y _ { t } \mid \{ x , x ^ { \prime } \} , y _ { < t } ) .
$$
By selectively focusing on prompt-sensitive tokens, this loss function enhances optimization efficiency and more effectively encourages the model to generate extended reasoning sequences during inference.
Excessive Reasoning Loss. Prior work [6, 19] has shown that LLMs trained for explicit reasoning often produce extended reasoning. In such cases, certain tokens, such as “Wait” and “Alternative”, frequently occur in these sequences, signaling branching or recursive reasoning steps. To exploit this behavior, we aim to increase the likelihood of generating such tokens during the reasoning. While constructing a manual list of indicative tokens is feasible, it is limited in scalability and generalization. Instead, we adopt a data-driven approach to automatically identify reasoningassociated tokens. As demonstrated in our ablation study (Sec. 3.3), this approach uncovers influential tokens that would likely be overlooked through manual inspection, underscoring the efficacy of our method.
Concretely, we extract the top $n$ most frequent tokens that appear in the first two positions of sentences generated during the Target Output Generation phase. These tokens are hypothesized to play a critical role in initiating new reasoning trajectories. Let $\mathcal { T }$ denote the resulting set of high-impact tokens. To promote their occurrence during generation, we define the Excessive Reasoning (ER) Loss as:
$$
\mathcal { L } _ { \mathrm { E R } } = - \frac { 1 } { \vert y \vert } \sum _ { t = 1 } ^ { \vert y \vert } \sum _ { \nu \in \mathcal { T } } \log p ( y _ { t } = \nu \mid \{ x , x ^ { \prime } \} , y _ { < t } ) .
$$
This objective increases the likelihood of generating tokens associated with recursive or exploratory reasoning, thereby inducing longer and more computationally intensive reasoning sequences.
Delayed Termination Loss. In many reasoning LLMs, the generation process typically begins with intermediate reasoning steps, which conclude with a designated end-of-thinking (EOT) token (e.g., </think>). Then, the model would generate an answer conclusion terminated by an end-of-sequence (EOS) token (e.g., $\mathsf { < e o s > }$ ). To prolong both the reasoning and answer conclusion phases, we aim to reduce the model’s tendency to emit these termination tokens during decoding. However, due to the stochastic nature of autoregressive generation, the precise timestep at which these tokens appear is not fixed. To address this, we adopt a strategy from prior work [5, 10], which minimizes the likelihood of generating termination tokens across all positions in the output sequence:
$$
\begin{array} { l } { \displaystyle \mathcal { L } _ { \mathrm { D T } } = \frac { 1 } { | y | } \sum _ { t = 1 } ^ { | y | } \left[ p ( y _ { t } = \mathrm { E O S } \mid \{ x , x ^ { \prime } \} , y _ { < t } ) \right. } \\ { \displaystyle + \left. p ( y _ { t } = \mathrm { E O T } \mid \{ x , x ^ { \prime } \} , y _ { < t } ) \right] . } \end{array}
$$
This objective discourages premature termination, encouraging the model to continue generating extended reasoning and answer conclusions before finalizing its output.
# 2.4 Optimization
Optimizing adversarial suffixes in the text domain presents a unique challenge due to the discrete nature of language. Unlike continuous domains (e.g., images), where gradients can be directly applied to pixel values, LMs operate on sequences of discrete tokens drawn from a fixed vocabulary. As a result, standard gradient-based optimization techniques cannot be directly applied to manipulate individual tokens.
To address this, we adopt the Greedy Coordinate Gradientbased Search (GCG) framework [22], which has demonstrated strong performance in adversarial text generation. GCG linearizes the loss landscape by computing gradients with respect to input embeddings and identifying substitutions that are most likely to improve the loss. Specifically, for a given token position $i$ in the suffix, we compute the gradient of the loss with respect to its embedding and search for the token $x _ { i } ^ { \prime }$ that maximally improves the objective. Formally:
$$
x _ { i } ^ { \prime } = \arg \operatorname* { m a x } _ { w \in V } \left. \nabla _ { e ( x _ { i } ) } \mathcal { L } , e ( w ) - e ( x _ { i } ) \right. ,
$$
where $\nabla _ { e ( x _ { i } ) } \mathcal { L }$ denotes the gradient of the loss with respect to the embedding of token $x _ { i }$ , and $e ( w )$ is the embedding of candidate token $w$ . This inner product quantifies the expected gain from substituting $x _ { i }$ with $w$ , and the best candidate is selected greedily. Our overall training objective combines the three loss components introduced previously:
$$
\begin{array} { r } { \mathcal { L } = \alpha \cdot \mathcal { L } _ { \mathrm { P C E } } + \beta \cdot \mathcal { L } _ { \mathrm { E R } } + \gamma \cdot \mathcal { L } _ { \mathrm { D T } } . } \end{array}
$$
In this work, we adopt a fixed-length suffix-based strategy in which a predetermined number of tokens are appended to the end of the original prompt. Each token in the suffix is iteratively updated using GCG to minimize the combined loss. Although this paper focuses on crafting adversarial suffixes, it is important to note that our approach is method-agnostic and can be adapted to various adversarial paradigms. For instance, alternative strategies such as character-level perturbations (e.g., typos) can also be incorporated, as shown in prior work [9]. This flexible framework facilitates the efficient generation of adversarial inputs tailored to different attack objectives and constraints.
# 3 Experiments
# 3.1 Experimental Setups
Models and Datasets. We optimize adversarial suffixes and evaluate them on two reasoning LLMs: DeepSeek-R1-distillLLaMA-8B and DeepSeek-R1-distill-Qwen-7B.2 Both models are distilled variants of DeepSeek-R1 and demonstrate strong performance on complex reasoning tasks. We report results under two decoding strategies: greedy decoding and sampling decoding. For sampling, we set the temperature to 0.6 and apply nucleus sampling with top- $p = 0 . 9 5$ . To assess cross-model transferability, we additionally evaluate the attack on larger-scale models, including o1-mini, o3-mini, DeepSeek-R1, and QWQ-32B, using their respective default decoding settings. Specifically, we interact with o1-mini and o3-mini via the OpenAI API, and with DeepSeek-R1 and QWQ-32B via the Baidu Cloud API, to simulate realworld deployment conditions. Our evaluation is conducted on two widely used mathematical reasoning benchmarks: GSM8K [7] and ORCA [14]. For each dataset, we randomly sample 50 examples for both optimization and evaluation.
Attack Setup. For target output generation, we employ the COPRO optimizer to construct prompts that induce extended reasoning trajectories. Specifically, we use 10 training examples from the GSM8K dataset for prompt optimization and evaluate the resulting prompt on a separate set of 10 randomly selected test samples. Due to computational constraints, we restrict target outputs to a maximum length of 3,000 tokens. For the PCE Loss, we set the token selection threshold $K = 1$ , and for the ER Loss, we use $n = 5$ . The overall loss function combines the three components using the following weighting coefficients: ${ \mathfrak { a } } = 1$ , $\beta = 5 0$ , and $\begin{array} { r } { \gamma = 1 } \end{array}$ . We fix the length of the adversarial suffix to 10 tokens. During optimization, we apply the GCG algorithm for 1,000 steps per input. The candidate pool size is set to 64, and at each step, the top 64 candidate tokens are retained.
Evaluation Metrics. To evaluate the effectiveness of our adversarial attack, we consider three primary metrics: (1) output sequence length (in tokens), (2) inference latency (in seconds), and (3) energy consumption (in Joules). Energy usage is measured using the NVIDIA Management Library (NVML), following the methodology introduced by Shumailov et al. [16]. To ensure consistency and fair comparison, all inference is performed using the HuggingFace pipeline [1] on a single hardware (NVIDIA A100 80GB). Each inference is repeated three times to reduce the impact of runtime variability. To assess model utility, we extract final answers from the generated outputs using “Meta-Llama3.1-8B-Instruct”, and compute accuracy by comparing the extracted answers against ground-truth labels. The exact prompt used for extraction is provided in App. A.1.
Baselines. To evaluate the effectiveness of our attack, we compare it against several baseline prompting strategies:
• Random: A suffix composed of 10 randomly sampled tokens. • Standard CoT [20]: A widely used CoT prompt that appends the phrase “Let’s think step by step.” • CatAttack [15]: A prompt-based adversarial strategy that appends the distractor statement “Interesting fact: cats sleep most of their lives,” which has been shown to induce incorrect reasoning outputs.
# 3.2 Main Results
Performance. We evaluate the effectiveness of our attack using six metrics: reasoning token length (Rea), answer token length (Ans), total output length (Full), inference latency (Lat), energy consumption (Ene), and task accuracy (Acc). Evaluations are conducted under both greedy and samplingbased decoding strategies. As shown in Table 1 and Table 2, our adversarial suffix substantially increases computational overhead while preserving task accuracy across all settings. For example, our attack causes LLaMA to generate significantly longer outputs on the GSM8K dataset with greedy decoding, increasing the average reasoning length by 3x from 574 to 1,914 tokens. This is accompanied by a corresponding increase in energy consumption (from 4,712J to 12,827J) and latency (from 22.2s to 54.9s). A similar trend is observed for Qwen, where the average reasoning length increases by $9 \mathrm { x }$ , demonstrating the effectiveness of our attack across different model architectures. Under sampling-based decoding, the attack remains robust. The reasoning length increases by $3 \mathrm { x }$ for LLaMA and $8 \mathrm { x }$ for Qwen on GSM8K, with similar results observed on the ORCA dataset.
In comparison, baseline prompting methods generally induce relatively short reasoning. For example, CoT prompts produce shorter outputs than our adversarial prompt on LLaMA for GSM8K under greedy decoding (711 vs. 2,074 tokens), indicating the limitations of standard methods in eliciting excessive reasoning behavior. More broadly, our results suggest that reasoning LLMs are resistant to short prompting, as neither CoT nor CatAttack reliably trigger long reasoning. Interestingly, we observe a consistent inverse correlation between the lengths of reasoning and answer segments. We hypothesize that as the model allocates more capacity to the reasoning phase, the corresponding answer portion becomes shorter. Importantly, the increase in reasoning length does not degrade task accuracy; in many cases, it correlates with improved performance. This suggests that excessive reasoning may enhance the model’s problem-solving capabilities. Thus, our attack exhibits a dual effect: it exposes a vulnerability in inference-time efficiency while potentially enhancing the model’s reasoning capabilities. As a result, such adversarial behaviors may evade detection by standard evaluation metrics that focus solely on output correctness, highlighting the need for more comprehensive evaluation frameworks.
Table 1: The token length for reasoning (Rea), answer (Ans), and full output (Full); inference latency (Lat, in seconds); energy consumption (Ene, in joules); and task accuracy (Acc). Experimental results across methods under greedy decoding. Bold indicates the best result.
Table 2: Experimental results across methods under sampling decoding.
Analysis. To determine whether our adversarial suffixes truly elicit excessive reasoning rather than merely increasing output length, we conduct an analysis of the generated outputs. First, we observe a substantial increase in the average number of reasoning sentences. For LLaMA, the average rises from 31 to 88, and for Qwen, from 7 to 74, when comparing outputs generated from original prompts to those generated with adversarial suffixes. This pronounced increase suggests that our attack significantly extends the number of reasoning paths, rather than merely inflating output length.
Second, we analyze the distribution of the first two tokens in each reasoning sentence, comparing outputs generated with and without adversarial suffixes as shown in Fig.2. The results reveal distinct lexical patterns between the two models. For example, LLaMA more frequently uses deliberative tokens such as “Alternatively” and “Wait”, which are often associated with recursive reasoning. In contrast, Qwen shows lower sensitivity to “Alternatively”, suggesting that the expression of excessive reasoning may manifest differently across architectures. Moreover, Qwen does not exhibit the same degree of excessive reasoning as LLaMA under standard conditions. However, it remains vulnerable to such behavior when exposed to adversarial suffixes. Furthermore, the presence of tokens such as “Let”, “Maybe”, and “Hmm”, which are difficult to detect through manual inspection, highlights the effectiveness of our ER Loss when combined with automated token selection. This approach effectively surfaces subtle prompts capable of inducing excessive reasoning behavior.
Transferability. We evaluate the transferability of our adversarial suffixes to larger commercial language models, including o1-mini, o3-mini, DeepSeek-R1, and QWQ. Specifically, we test adversarial suffixes optimized on the LLaMA and Qwen models for the GSM8K dataset, with results summarized in Table 3. Our findings show that these adversarial suffixes generalize effectively, consistently promoting longer output sequences without degrading task accuracy. For the OpenAI model family, both LLaMA- and Qwen-optimized suffixes successfully increase output length. For example, suffixes optimized on LLaMA lead to a 245-token increase in total output length for o3-mini, and Qwen-optimized suffixes yield a 596-token increase.
Table 3: Transferability analysis of adversarial suffixes originally optimized for LLaMA and Qwen.
Figure 2: Token counts between the generated outputs from the original and adversarial prompts.
In contrast, transferability to DeepSeek-R1 appears to depend on the source model. Qwen-optimized suffixes result in a 260-token increase, whereas LLaMA-optimized suffixes fail to induce longer outputs. We hypothesize that this discrepancy is due to tokenizer compatibility, as DeepSeek-R1 shares the same tokenizer with Qwen but not with LLaMA. A similar pattern is observed for QWQ, which also uses the Qwen tokenizer and shows greater sensitivity to Qwenoptimized suffixes. These results suggest that while architectural differences influence the degree of computational overhead, tokenizer alignment plays a critical role in the transferability of adversarial prompts. Notably, Qwen appears to be a more effective proxy than LLaMA for crafting transferable adversarial suffixes across commercial systems.
Figure 3: Impact of varying the top-K most informative tokens on LLaMA under greedy decoding.
# 3.3 Ablation Studies
We conduct a series of ablation studies to assess the impact of different experimental configurations, including the introduction of the PCE Loss, the individual contribution of each loss component, and the effect of alternative target construction strategies.
PCE Loss. We begin by evaluating the effectiveness of the proposed PCE Loss by varying the proportion of top- $K$ tokens from $100 \%$ to $1 \%$ , as shown in Fig. 3. The results show that focusing optimization on the top $5 \%$ , and particularly the top $1 \%$ of tokens, consistently outperforms applying loss uniformly across all tokens. Peak performance is observed when focusing solely on the top $1 \%$ , with the number of reasoning tokens increasing from 660 to 1,100. This pattern suggests that selectively emphasizing a small subset of salient, prompt-dependent tokens can more effectively induce extended reasoning behavior. Additionally, we observe an inverse relationship between the number of reasoning and answer tokens, implying a redistribution of the model’s generative capacity toward reasoning content. These findings underscore the value of targeted token optimization and demonstrate that prioritizing high-impact tokens is more effective than uniformly distributing the loss across the entire sequence.
Loss Objectives. Second, we evaluate the individual contributions of each loss function and assess their collective impact as presented in Table 4. The results show that optimizing each loss independently leads to an increase in output sequence length, and the combination of all three loss functions yields the most substantial gains in both sequence length and computational burden. For instance, the full composite loss achieves the longest average output (up to 2,074 tokens), the highest inference latency (54.9 seconds), and the greatest energy consumption (12,827J). These results underscore the synergistic effect of combining all three objectives.
Table 4: Ablation study of loss objectives combinations on LLaMA under greedy decoding.
Table 5: Ablation study of different target constructions with our proposed loss function on LLaMA under greedy decoding.
To further analyze the behavior encouraged by the ER Loss, we visualize a word cloud of the most frequently prioritized tokens in Fig. 4. Common deliberative tokens identified in prior work, such as “Alternatively” and “Wait”, are prominently featured. In addition, our method surfaces previously underexplored tokens such as “Maybe” and “Hmm”, which act as effective triggers for extended reasoning. These findings confirm that the joint loss formulation effectively amplifies reasoning behavior while preserving task accuracy, and that the ER Loss successfully uncovers subtle lexical cues indicative of recursive reasoning.
Target Output Construction. Finally, we evaluate several strategies for constructing target outputs to guide adversarial optimization, as summarized in Table 5. The comparison includes a raw baseline (no additional prompt), a standard CoT prompt, and a DSPy-optimized CoT prompt. Interestingly, we find that the standard CoT prompt does not consistently produce longer reasoning sequences; in some cases, it even results in shorter outputs than raw prompting, highlighting its limitations in eliciting extended reasoning. In contrast, DSPy-optimized CoT prompts increase the average output length from 1,283 to 2,074 tokens under greedy decoding compared to CoT prompts, with corresponding increases in both energy consumption and task accuracy. These results highlight the critical role of target output quality in guiding adversarial optimization. Longer reasoning sequences, especially those produced via DSPy, serve as more effective targets for inducing excessive computation. This reinforces the importance of target construction in maximizing the efficacy of our attack. | Recent reasoning large language models (LLMs), such as OpenAI o1 and
DeepSeek-R1, exhibit strong performance on complex tasks through test-time
inference scaling. However, prior studies have shown that these models often
incur significant computational costs due to excessive reasoning, such as
frequent switching between reasoning trajectories (e.g., underthinking) or
redundant reasoning on simple questions (e.g., overthinking). In this work, we
expose a novel threat: adversarial inputs can be crafted to exploit excessive
reasoning behaviors and substantially increase computational overhead without
compromising model utility. Therefore, we propose a novel loss framework
consisting of three components: (1) Priority Cross-Entropy Loss, a modification
of the standard cross-entropy objective that emphasizes key tokens by
leveraging the autoregressive nature of LMs; (2) Excessive Reasoning Loss,
which encourages the model to initiate additional reasoning paths during
inference; and (3) Delayed Termination Loss, which is designed to extend the
reasoning process and defer the generation of final outputs. We optimize and
evaluate our attack for the GSM8K and ORCA datasets on
DeepSeek-R1-Distill-LLaMA and DeepSeek-R1-Distill-Qwen. Empirical results
demonstrate a 3x to 9x increase in reasoning length with comparable utility
performance. Furthermore, our crafted adversarial inputs exhibit
transferability, inducing computational overhead in o3-mini, o1-mini,
DeepSeek-R1, and QWQ models. | [
"cs.CR",
"cs.LG"
] |
# I. INTRODUCTION
Code review (CR) is a cornerstone of modern software engineering, ensuring code quality, defect detection, and team knowledge sharing. Nevertheless, as software systems scale and development cycles accelerate, traditional manual code review practices struggle with inefficiencies, reviewer fatigue, and inconsistent outcomes. The rapid development of Large Language Models (LLMs) [1]–[3] has enabled AI to demonstrate strong capabilities across various software engineering tasks, including code generation and bug detection. While their integration into the coding activities has been smooth, the full potential of LLMs in code review remains underexplored.
Through a combination of observational research and field experiments conducted at WirelessCar Sweden $\mathbf { A } \mathbf { B } ^ { 1 }$ (where some but not all teams are permitted to utilize AI in development), this study investigates how LLMs can be meaningfully integrated into modern code review workflows to improve developer experience and potentially support review efficiency. The research is structured around two core questions: one diagnostic and one exploratory. The first question (RQ1) focuses on understanding current code review practices within the company and identifying opportunities for AI assistance, while the second question (RQ2) explores how developers perceive and interact with two variations of LLM-based tools during review tasks.
To address these research questions, we conducted a twophase empirical study (Sec. IV) at WirelessCar Sweden AB.
Phase 1 involved a field study through semi-structured interviews to uncover practical challenges in existing code review workflows and to identify potential areas where AI assistance could be beneficial. Importantly, based on these findings, we designed and implemented two variations of LLM-assisted review tools, one being an AI-led co-reviewer and the other an interactive assistant, integrated with retrieval-augmented generation (RAG) to provide contextual support.
In Phase 2, we evaluated these tools in a real-world field experiment involving practicing developers. The results show that while the AI-led mode was generally preferred, especially for large or unfamiliar pull requests, preferences were context-dependent. Participants valued the tool’s ability to provide faster understanding, improved thoroughness, and helpful contextual insights, though issues such as trust, false positives, and interface limitations were noted.
Altogether, our findings (Sec. V) suggest that LLMs can meaningfully augment, rather than replace, human reviewers, and highlight the importance of adaptive integration strategies that respect developers’ workflow preferences and domain knowledge. Furthermore, our results point to clear design directions for future tools: AI assistance should be seamlessly embedded into existing developer environments (e.g., GitHub, IDEs, Slack), offer concise and structured feedback with minimal latency, and support both proactive and reactive interaction modes. For maximal utility, the assistant must be contextaware and capable of leveraging relevant project artifacts like code diffs, source files, and requirement tickets. Notably, participants also envisioned the assistant being valuable during the pre-review phase, helping authors improve pull requests before submission. These practical insights inform how LLMs can be responsibly and effectively integrated into real-world software development workflows.
# II. RELATED WORK
Since the emergence of ChatGPT, large language models have seen widespread adoption in all software engineering tasks, such as code generation, test case creation, or documentation. We refer readers to a review paper [4] published in 2024 for a summary of recently conducted research activities. In particular, recent works [5]–[10] have explored the application of AI, in specific LLMs, to automate or support code review tasks. The early result from Tufano et al. [10] presents a deep learning model trained to replicate reviewer-suggested code changes, aiming to partially automate the code review process. In the era of LLMs, the work from Tufano et al. [5] evaluates the measurable impact of AI-generated code reviews in controlled experiments over code with injected issues and code smells. It focuses on the performance of the LLM itself and the quantifiable effects on issue detection, time spent, and reviewer behavior. The work from Alami et al. [9] performs qualitative, interview-based exploration of developers’ emotional and cognitive responses to AI-provided feedback compared to human-provided feedback in code reviews. The work from Vijayvergiya et al. [7] showcases the deployment and evaluation of a large-scale automated system that enforces coding standards and code smells using LLMs in code reviews. The work from Lin et al. [6] studies how to fine-tune LLM for better issue detection in code reviews. Similarly, the work from Rasheed et al. [8] develops a multi-agent LLM-based system for performing autonomous code reviews, focused on technical accuracy, issue detection, and actionable suggestions.
Summarizing the above results, one of the critical gaps is the study of the preferred interaction for an engineer assigned with the code-review task in collaboration with an LLM-enabled review assistant, which leads to the primary focus of our work (RQ2). We focus on understanding how LLMs can support, not replace, reviewers via practical integration strategies. We do not consider LLMs as human replacements mainly due to the fundamental concern that LLMs are still prone to hallucinations [11]. Another differentiator is the introduction of semi-structured interviews and thematic analysis to understand the real potential of LLMs in code reviews, reflecting the pain points of an organization developing complex software (RQ1). Understanding the pain points (RQ1) leads to the design of experiments and prototypical LLM-assisted code review tools to assess the preferred interaction (RQ2).
# III. RESEARCH QUESTIONS
This study investigates how LLMs can be meaningfully integrated into modern code review workflows to improve developer experience and potentially support review efficiency. The research is structured around two core questions: one diagnostic and one exploratory. The first question focuses on understanding current code review practices and identifying opportunities for AI assistance, while the second question explores how developers perceive and interact with LLMbased tools during review tasks.
RQ1: What practices, challenges, and expectations characterize modern code review processes, and where do developers see potential for AI-based assistance?
This question aims to identify how code reviews are currently performed within the company, what challenges developers face, and how AI can be introduced in the code review process, what tasks it can effectively support, and the optimal balance between automation and human involvement.
RQ2: How do developers perceive LLM-assisted code review tools, and what is the preferred interaction?
This question explores the qualitative aspects of AI usage, including developer trust, satisfaction, and potential barriers to adoption, by considering different interaction modes. By answering this question, insights are provided into the usability, limitations, and acceptance of AI-assisted code reviews, helping to inform design decisions and future development of such tools.
# IV. METHODOLOGY
Towards the above research questions, grounded in the literature review, we adopt a two-phase research design that is aimed at understanding both the current code review processes in a natural environment and then intervening with an AI-based solution to observe the effects (see Fig. 1 for the research flowchart). Phase 1 involves an exploratory case study at WirelessCar to thoroughly understand existing manual code review practices, identify specific pain points, and potential AI application areas. The goal is to identify typical workflow steps, uncover key pain points, clarify why certain inefficiencies arise, how success is evaluated, capture the technical and organizational environment, and where AI might offer possible improvements. Phase 2 aims to assess how software developers at WirelessCar experience and evaluate the integration of a software artifact featuring LLM-assisted code review tools into their review process, thereby identifying design implications and best practices.
Importantly, rather than focusing on performance metrics or direct comparisons to existing tools as conducted in prior works, our emphasis on Phase 2 is on capturing qualitative insights into how developers interact with the AI assistant, what kinds of support they find valuable, and how such a tool might be ideally integrated into real-world code review practices. To make such an assessment possible, we created two variations of LLM-enabled review, namely the AI-led mode (co-reviewer) and the interactive mode. The design of the two modes is based on challenges and improvement opportunities identified during Phase 1 as well as insights collected from the literature.
Due to our research being qualitative in nature, in both phases, semi-structured interviews were conducted to receive feedback, and thematic analysis [12] has been applied to analyze the interview scripts being collected.
Fig. 1. Flowchart detailing the research workflow
A. Phase 1: Understanding the existing manual code review practices (field study)
In our field study, semi-structured interviews were conducted. Such a format balances consistency where all participants receive a core set of similar questions with flexibility where researchers can ask follow-up or clarifying questions as fits the flow of the interview [13]. The interview questions were developed around domains such as code review processes, challenges in code reviews, measuring code review success, and current and potential AI use cases to cover the previously mentioned objectives. The interviews were designed to fit comfortably within a 30-minute timeframe. However, in practice, the interviews ranged from 15 to 40 minutes each, with most lasting approximately half an hour.
We used convenience sampling [14] by interviewing people within the WirelessCar who were able and willing to participate. Participants were recruited via announcements on Slack channels and informal Slack messages that announced the study’s purpose and form of the interview. Interviewees who handled different parts of the code and were reviewers at varying seniorities were sought. Additionally, some interviewees were from the same development team to gain multiple perspectives from the same team. Despite being limited in randomness, we believe our sampling still enables insights into a relatively wide range of review practices.
A total of seven participants were interviewed and are listed in Table I. The interviewees varied in gender and age, and the sample included software engineers, developers, security engineers, and quality assurance specialists across four development teams. Teams ranged in size from under five members to around sixteen members. The interviewees had different areas of expertise, with varying levels of experience and skill. The goal was to include individuals who could discuss both strategic and day-to-day review practices. Data saturation was reached after the seventh interview, as no new information or themes were emerging. This decision is grounded in the principle that qualitative data collection can be considered sufficient when additional interviews fail to yield new insights [15]. Data saturation is defined as the point at which no new themes are observed in the data, and studies show that although saturation often occurs within twelve interviews, the basic elements of major themes are frequently present as early as six [15]. The current sample is somewhat heterogeneous in terms of role and team, and the consistency of responses across participants suggested that the core aspects had been adequately captured. Although an eighth interview had been scheduled, the participant canceled, but given that saturation had already been achieved, it was decided not to reschedule the interview.
TABLE I OVERVIEW OF INTERVIEW PARTICIPANTS IN PHASE 1
All interviews were held in English and conducted in the participant’s active work environment, either on-site at WirelessCar’s office or remotely via Microsoft Teams (two participants were interviewed remotely). This alignment with genuine working conditions is consistent with a field study approach since it allows developers to reference real pull requests, team communication channels, and examples of ongoing tasks. They could describe, for example, how a large refactoring pull request (PR) or an urgent bug fix impacted their review process in real-time. Conducting interviews in this natural setting helps reinforce the authenticity of participant responses and results in findings consistent with everyday workflows at WirelessCar.
# B. Phase 2: Evaluating the LLM-enabled code review (field experiment)
1) Data Collection: To explore developer experience with the AI assistant during code reviews, two data collection methods were employed to capture user perspectives and behavioral interaction patterns. The primary data source consisted of post-interaction interviews with each participant. These were supported by a secondary data source consisting of researcher observation notes recorded during review sessions.
TABLE II OVERVIEW OF EXPERIMENT AND INTERVIEW PARTICIPANTS IN PHASE 2
Participants were recruited again through convenience sampling, where individuals who participated in the earlier phase of the study were invited to return for Phase 2. However, only five were available and agreed to participate again. To reach a total of ten participants, five additional individuals were recruited via internal Slack channels, where the study’s purpose and structure were briefly described. Of the ten participants, four belonged to the team responsible for the pull requests used in the experiment, while the remaining six were from other teams, allowing comparison across varying levels of codebase familiarity. Table II provides an overview of the participants, including their roles and team affiliations. The code used in the experiment originated from Team A, which is the team associated with the familiar participants.
2) Experiment Setup: The field experiment is designed to evaluate the developers’ experiences with LLM-based code review assistance under two distinct interaction modes that were selected based on the findings from the Phase 1 field study. As indicated by the interview data in that earlier phase, developers most frequently identified a need for clearer up-front summaries of code changes as well as on-demand explanations for specific architectural or contextual details.
In each iteration of the experiment, a single participant engaged in a traditional code review scenario, conducted within their familiar development environment using their usual tools and platforms. In addition to these standard resources, the participant was provided access to our created LLM code review assistant as illustrated in Fig. 2. The task assigned to the participant was to perform two code reviews, each on a different pull request from two separate repositories within the company. The participant followed a different interaction mode with the AI assistant for each review. All sessions took place in conditions that mirrored each participant’s normal working situation as closely as possible.
The two interaction modes are described below:
Mode A: Co-Reviewer – In this mode, the AI assistant automatically generated a summary of the code under review, highlighting major changes and any potential points of interest or concern, before the reviewer started their own examination. The reviewer could then use this information to guide their review, optionally asking the AI-assistant follow-up questions. The participant could query the AI for clarification or more details about the summarized areas. This mode directly targeted the challenge previously identified as lacking immediate context, particularly for large or complex PRs.
Fig. 2. Screenshot of the LLM-assisted code review interface in Mode B, reviewing a pull request from an open-source project available at https://github.com/ogen-go/ogen/pull/1440.
Mode B: Interactive Assistant – In this mode, the reviewer reviewed the code in their typical manner. The AI assistant did not proactively provide a summary or suggest issues upfront. Instead, the reviewer was free to consult the AI on demand, requesting clarifications about specific parts of the code or asking higher-level architectural questions. Reviewers were encouraged to ask targeted queries rather than requesting an overall summary of the changes. Here, the AI assistant only responded when explicitly prompted. This design choice not only reflected the Phase 1 feedback on needing a lightweight, on-demand tool but also addressed an identified challenge in related studies [5], where automatically highlighted lines can cause reviewers to miss other important areas. By making the AI passive, reviewers maintained their usual workflow and examined the entire codebase without unconsciously depending on the AI’s initial hints.
To minimize review variability while still enabling comparison across interaction types, two specific pull requests of similar size and complexity were selected from the WirelessCar codebase. The selected pull requests each involved a moderate amount of change, requiring genuine reviewer effort without being excessively large or trivial. Before the first experiment session, a pilot run was conducted with two internal developers who were not part of the main participant pool. The pilot involved a walkthrough of the tool, the review task, and the interaction modes, followed by an informal test of the tool using the selected pull requests. The pilot was used to assess the suitability of the selected PRs, detect any major usability or technical issues in the tool, and evaluate whether the introduction and guidance provided to participants were sufficiently clear and effective. Each participant in the actual experiment then reviewed both PRs, using Mode A for one PR and Mode B for the other. The assignment of modes to PRs was rotated across participants to mitigate ordering effects. By having all participants conduct the same two code reviews with alternating modes, this approach allowed for a more controlled comparison of interaction styles while ensuring the tasks remained realistic and relevant to actual code review practices.
To investigate how varying levels of codebase familiarity may influence the use of the AI assistant, both developers belonging to the team that owns the selected PRs and developers from unrelated teams were invited to participate. Phase 1 interviews indicated that limited contextual knowledge can have negative effects on the review quality, and therefore, measuring differences in AI reliance across these two participant groups was expected to produce further insights.
Before starting the code review sessions, each participant was given a short onboarding briefing to explain the tool’s features as well as the structure and setup of both the study and the experiment. The two different interaction modes were introduced, along with guidance on how to effectively prompt the AI assistant to obtain useful and relevant responses. The participant then completed the two review sessions consecutively. While no in-depth feedback or direct assistance was provided during the sessions, limited guidance was offered when needed, for example, if participants inquired about specific ways of interacting with the assistant. Participants were also encouraged to think aloud during the experiment, often verbalizing their reasoning, confirming the AI’s suggestions, or commenting on its usefulness. If the assistant’s response did not meet expectations, participants were occasionally guided to try rephrasing or retrying the query. Researchers were present to observe the sessions, record notes, and perform the postsession data collection. Following the review sessions, short semi-structured interviews were conducted with the participant to reflect on their experience across the two modes and in comparison to their regular code review workflow.
3) Artifact Implementation: The software artifact developed for this study was a web-based chat interface designed to explore and evaluate different interaction styles in AI-assisted code reviews. Its primary purpose was to enable structured experimentation by allowing researchers to observe how developers interact with AI assistance in different contexts. The source code for the artifact is freely available (under GPLv3)
Fig. 3. Agentic tool structure in Co-Reviewer mode.
on GitHub: frontend2 and backend3. While not intended to be a production-level system, the artifact was designed to be realistic and usable enough to engage developers meaningfully. It supported live interaction through a chat interface, maintained session-specific context, and allowed researchers to configure session parameters such as interaction mode and PR data source.
The artifact consisted of a chat interface connected to a backend system that integrated OpenAI’s $\mathsf { o } 4 - \mathsf { m i n i } ^ { 4 }$ language model via API. The artifact was also supported by a Retrieval Augmented Generation (RAG) infrastructure built with LlamaIndex5. This setup enabled the AI assistant to produce more informed and context-aware responses by using project data such as code diffs, related source code files, and associated feature requirements (Jira tickets). The RAG index had to be manually prepared and indexed before experiments, ensuring complete control over the data available to the model in each experimental session. As highlighted by the literature and the Phase 1 interview results (see Sec. V-A2), lacking a broader repository context can lead to superficial AI feedback that overlooks critical design or architectural concerns. This RAG setup ensured that the LLM can reference deeper project-level information on demand. This setup not only enhanced the AI assistant’s capacity to generate context-aware suggestions but also directly targeted the gap in existing tools and recent studies on the subject.
Internally, the assistant interacts with three core semantic tools:
search_pr: accesses PR diffs and metadata. search_code: provides the full, unmodified source code of the repository.
search_requirements: contains the feature requirement (the Jira ticket) motivating the PR.
In Mode A (Co-Reviewer), a fourth tool, start_review, was added. This tool contained a sub-agent that was designed to perform an initial, structured code review based on the full PR data and guided by a detailed review-specific prompt. Unlike the main agent, this sub-agent did not use search_pr, as all PR data was injected into its initial context via a prompt. This ensured that the agent considers everything in the PR data and examines each file change. By retrieving the PR data via a query engine, the agent might not consider all the data as required when generating a complete code review of all changes. The agentic structure for this setup is shown in Fig. 3.
# V. RESULTS
# A. Results from Phase 1
Six themes emerged from the thematic analysis of the qualitative data and can be seen in Table III.
1) Observed Code Review Process: When reviewing the informal review process at WirelessCar and their practices, it appears to be similar to the typical asynchronous, toolsupported nature of modern code review (MCR) practices. However, the reliance on informal assignment and communication sometimes through Slack introduces variability, which might contrast with the structured practice of assigning a specific reviewer to the code patch commonly found in MCR practices. Additionally, WirelessCar places significant emphasis on contextual understanding and relies heavily on team members with domain expertise to assess the critical components. This contrasts with the broader responsibilitysharing model commonly found in large organizations that apply MCR.6
2) Common Challenges in Code Reviews: One of the most frequently mentioned challenges across the developer interviews was the issue of delayed reviews. Situations were described where PRs remained unreviewed for extended periods, often requiring repeated reminders for a reviewer to take action. One interviewee described it as:
“Sometimes you need to ping people more often, and sometimes the PR is very big, so people don’t dare to pick it up” [P2]
Several interviewees reported difficulties regarding reviewing large or complex PRs. PRs that combine new features, refactoring, and changes to infrastructure often become overwhelming. They note that this can result in more superficial reviews or longer delays.
Context switching was also noted as another major challenge. Developers mentioned that the cognitive burden of pausing ongoing work to review code, not at all related to their current work, to be challenging. This requires additional time to regain focus, and one interviewee highlighted the time lost cause of this:
“As soon as you need to context switch, even if it’s just a three-minute thing, it’s 20 minutes of lost time.” [P7]
Several interviewees explained that they sometimes lack sufficient context when reviewing the code. Sometimes, important details about why the change was made or its expected impact are missing from the PR description. This increases the time it takes for the reviewer to comprehend the PR and effectively point out defects or problems.
3) Current Use of AI in Software Development: The interviews revealed that common AI tools like GitHub Copilot7 and ChatGPT8 are commonly used for development tasks. Interviewees mentioned tasks like writing boilerplate code, assisting with syntax, and quickly generating documentation. One example mentioned was:
“[...] as to help to create the boilerplate stuff, it’s outstanding, right? I mean, you do it in 30 seconds instead of a couple of hours. So I try to use it as much as possible during the development process.” [P7]
However, not all teams are allowed to use AI-generated code or share information (such as source code) with AI tools. Furthermore, the teams that are allowed to use AI tools only utilize them via an enterprise subscription, where the data provided is not used for training the models utilized by the tools. This ensures that no data is leaked from the organization via the use of AI tools.
All interviewees who use AI tools reported positive experiences in their development work. None of the interviewees reported using AI tools as part of the formal code review process. One interviewee did mention that some reviewers might occasionally paste code into tools like ChatGPT for clarification or explanation during reviews. However, beyond such informal use, AI has not been formally integrated into the code review workflow in any of the interviewed teams.
4) Potential AI Use Cases in Code Review: The interviewees expressed interest in potential AI integrations with the code review processes. They mentioned features like summarizing PR changes and accompanying descriptions where AI could generate a concise summary and help reviewers to quickly understand the intent of a code change. One interviewee mentioned:
“When you create the pull request, an AI bot could say, ’Hey, you’re trying to achieve this—do you want this as your summary or description?” [P5]
Interviewees also mention possibilities such as AI assisting in validating whether code meets stated requirements. Additionally, interviewees highlighted the potential for AI to detect hidden bugs or vulnerabilities, such as race conditions, dependency issues, or other subtle defects that human reviewers might overlook. As one developer put it:
TABLE III IDENTIFIED THEMES AND THEIR DESCRIPTIONS FROM ANALYSIS OF INTERVIEW DATA IN PHASE 1.
TABLE IV IDENTIFIED THEMES AND THEIR DESCRIPTIONS FROM ANALYSIS OF DATA FROM PHASE 2.
“An AI would probably be able to identify a race condition, for instance, which, as I said, is an example that’s borderline impossible to catch on the fly. In three minutes, you’re never going to find that.” [P7]
Finally, some interviewees mentioned potential drawbacks of integrating AI into the code review process. These were mostly concerns about security risks and false positives, which could reduce the reviewer’s trust in the AI assistant and divert their attention from real issues. As one interviewee put it:
“The problem with those kinds of checks is that, if they’re not good enough, you stop reading them. We see that all the time, you get flooded with false positives, and then you miss the real issues because you start ignoring the feedback.” [P7]
# B. Results from Phase 2
Table IV provides an overview of the themes along with descriptions of the themes.
1) Accuracy, Reliability, and Trust: Participants frequently commented on the accuracy of the assistant’s feedback and how this influenced their trust in the tool. Several interviewees described the assistant as generally accurate and capable of identifying relevant issues. For example, one participant
stated:
“As I see it, they were quite accurate [...] It was quite nice, not all of them, but a lot of them.” [P9]
In multiple cases, the assistant’s output was described as confirming the developer’s own thoughts or surfacing something they might not have otherwise caught. Observation notes also reflect this, with one session noting that “the user said the AI caught exactly what he was looking for in a certain file.” In another session, the reviewer remarked that “the summary gave something that he would not have seen.”
Several participants also reported instances where the assistant produced incorrect or unclear suggestions. One participant questioned whether the tool was even “doing what it was asked,” while another described the assistant incorrectly flagging a missing import. One participant noted:
“Sometimes it says some slightly strange things.”
When asked about concerns, participants expressed different perspectives. Several participants warned of the risk of overrelying on the assistant, especially in Mode A, where the assistant led the review:
“It feels like I might get a bit colored by getting the improvements from the LLM [...] I feel like maybe I could miss something else, because I would focus on those improvements a lot.” [P3]
While some view using AI tools as risky, not all participants saw the assistant as risky. Some viewed it as a low-stakes addition to the workflow, especially when its use remained optional, even if it is not always accurate:
“As long as there’s an opt-out option, there’s no real harm in it.” [P5] “If we miss 10 [issues] today, we might miss two with a tool like this.” [P9]
The role of trust emerged as a key factor in shaping how participants viewed the tool. A few emphasized that for the tool to be useful or even used in general, it must be trusted, but not blindly. Misplaced trust in the tool could lead to wasted effort if the tool isn’t accurate:
“So I think that worked really well, especially with larger [PRs] [...] you can at least use it if you trust it.” [P7] “I could go on for hours, just to realize I can never do this. Then I’ve just lost a few hours trying to pursue something that wasn’t possible.” [P5]
2) Efficiency and Thoroughness: A strong theme was the AI’s impact on code review efficiency and the thoroughness or quality of reviews. Developers reported that the AI assistant could speed up the review process, reduce reviewers’ workload on tedious tasks, and potentially catch more issues, although some participants remarked that it sometimes focuses on low-priority findings, introducing noise.
Beyond efficiency, participants also pointed to improvements in review quality. Some noted that the assistant could identify issues that might otherwise go unnoticed:
“There are probably findings you get in the report that you don’t find when you do it manually.” [P10]
Additionally, interviewers felt that the assistant would be particularly helpful on large pull requests where it would be difficult for a human reviewer to catch everything:
“It’s taken someone two weeks to write it [...] giving it 15 minutes, you won’t have a chance to understand it, at least not well enough to find the hard stuff. I would imagine that the tool would actually raise a flag for a potential deadlock or race condition as well.” [P7]
For some participants, the assistant reduced the effort involved in reviewing by removing the need to search through the codebase or external documentation:
“But I really like the integration with the requirements part, because if I open up [a PR] and I don’t know what it’s about. [...] The first thing I do every time is I open the [Jira] ticket anyway, because I need to see what is supposed to have been achieved here. So I think that’s a really nice functionality to have” [P3]
However, some participants noted that the assistant occasionally surfaced low-priority or unclear findings. This was also observed during review sessions, where participants sometimes expressed difficulty distinguishing important findings from minor ones, especially in lengthy summaries generated by the assistant.
3) Design Expectations and Limitations: Many participants shared expectations about how an AI assistant for code review should behave and be integrated. A recurring theme was the desire for seamless integration into existing workflows and tools. Rather than switching to a new interface, several participants expressed that it would be preferable to access the assistant directly from familiar environments like GitHub, Slack, or their IDEs. As one participant put it:
“I think most of the developers don’t want to use something new, like a [new] UI, but rather have an integration to what exists.” [P11]
Building on this, another participant described a preference for having the assistant’s comments embedded directly into GitHub’s interface, with expandable in-line comment boxes, while also having the ability to further ask questions in a chat interface. Yet another participant described how having it in the pipeline through a Slack bot could be useful.
“Maybe a Slack auto bot could even be triggered on each message [...] and a full review could be dropped as a message under that thread.” [P11]
Beyond integration, participants also critiqued how the LLM assistant’s output was presented. Several interviewees felt that the LLM feedback was overly long or difficult to scan. One participant noted:
“I mean, what’s important is that it very clearly lists the file. I’d rather have it list the file and the line number and be very specific, in a short way.” [P5]
Response time was another point of friction. While some delays were tolerated, long wait times were cited as a major barrier to adopting the tool in real development workflows:
“I think the speed and accuracy are mainly what need to be improved. [...] I wouldn’t use this if it took, I don’t know, how many minutes it took for it to respond.” [P3]
One participant reflected on whether the quality of interaction also depended on their own ability to ask good questions:
“Maybe my way of asking questions was also wrong. I did not always feel like I got the response that I was asking for. So, I might need to learn how to be more detailed in my questions.” [P10]
Many limitations were traced to a lack of access to broader context, such as architectural documentation, internal conventions, or metadata:
“Ideally, you want to inject as much relevant information as possible [...] like the JIRA ticket, relevant [documentation] pages, the codebase itself, the README, and any similarly named repositories that might be connected to the same service.” [P5]
Some participants also highlighted that the LLM-assistant’s usefulness depended in part on how good the documentation and PR descriptions are to begin with. During the testing, one reviewer reflected that the tool could help keep the documentation up-to-date by suggesting updates that align with PR changes.
4) Usage Contexts and Interaction Patterns: Participants expressed a range of preferences, strategies, and situational factors that shaped how they interacted with the AI assistant, often shaped by the review context. These patterns included both the predefined interaction modes: Mode A (Co-Reviewer) and Mode B (Interactive Assistant), as well as emergent workflows that blended or extended beyond them.
Many participants found Mode A (Co-Reviewer) especially helpful for getting oriented in a pull request. They described the high-level summaries and suggestions provided at the beginning of the review as useful for gaining quick context, particularly in unfamiliar or complex codebases:
“I prefer this one [Mode A] where you actually get the overview directly [...] it had a lot of good pointers, that it already found.” [P12] “The first engine [Mode A] that gave me a breakdown of everything [...] that was quite clever, and I would gladly use that.” [P7]
Mode A was also described as particularly useful for lowrisk PRs:
“Let’s say the change is relatively small and it’s not causing any risk, then I would definitely go with the first one [Mode A], where I let AI do most of the work.” [P11]
Many participants saw the assistant, especially mode A, as valuable for newcomers:
“I think if I were in a new team, and I am unsure what is happening, then it could be really good to start with a summary.” [P8] “Yeah, if you can write questions like ’What is this?’ or ’What is this really about?’, it could also be a very good tool to get to know the codebase and to learn as a new guy.” [P9]
Additionally, some saw mode A as useful in teams where code review standards are lower or in teams that prefer other methods for reviewing code, such as pair programming. In such cases, the assistant would serve as a fallback mechanism.
“I think it would be great for those who usually just skim through and say, ’It looks good to me’. [...] I think the biggest effect would be for those developers, I guess, and those teams.” [P5]
Mode B was less preferred in general, but some participants remarked that in cases where they are already familiar with the codebase or if they wanted to maintain full control over the review process, they would prefer Mode B:
“But if it’s in some codebase I already know, some codebase where we have a lot of experience and have worked in it a lot. It could probably be nice to have [Mode B].” [P8]
In some cases, participants preferred a combination of both modes or expressed that the preferred interaction mode depended on the situation:
“I think I’m 50/50 [...] both are useful. One is on demand, the other one is on its own.” [P1]
Several participants proposed additional usage patterns that were not strictly defined by the study design. For instance, some saw mode A as useful for the author before submitting the PR, rather than during review:
“I feel it might not be as much of a review help. I think it might be a pre-review help.” [P10]
Others proposed an alternative interaction mode where engineers conducted a human-led review first and then used the assistant to validate or catch anything they might have missed:
“You start out with it just to sum up what the code is doing. Then I look for issues, and then I can ask, ’Are there any further issues?” [P3]
Participants also frequently noted that Mode A was especially helpful for large PRs:
“Especially for large PRs, it’s nice to get the breakdown on what’s happening [...] because usually, you always have to do that sort of manually anyway.” [P3]
However, there was also some uncertainty about how effective the assistant would be at scale. One participant expressed concerns about the assistant’s ability to handle very large codebases or complex business logic:
“I think it’s going to be a bottleneck for such things, because there will be so many moving parts in it, so much business logic going around.” [P1]
# VI. IMPLICATIONS
Our study yields several practical implications for integrating LLMs into code review workflows. First, AI assistance should be embedded within developers’ existing tools (such as GitHub, GitLab, IDEs, or Slack) to minimize friction and support natural adoption. Second, output must be concise, well-structured, and actionable, prioritizing critical findings with precise references to affected files and lines. Fast response times are essential to preserve reviewer flow, although more comprehensive agentic reviews may be acceptable when integrated into automated pipelines. Supporting both proactive (AI-led summaries) and reactive (on-demand Q&A) modes of interaction is key, with a general developer preference for AI-led summaries in large or unfamiliar pull requests. For meaningful support, the assistant must be context-aware and have access to relevant information such as code diffs, source files, and requirement documents. While our tool addressed this via a retrieval-augmented setup, participants still highlighted the need for deeper contextual integration. Finally, an LLM-enabled assistant also shows promise as a pre-review aid, helping authors catch simple issues before submitting a pull request, thus improving code quality upstream in the development lifecycle.
# VII. CONCLUDING REMARKS
This paper presented a field study and field experiment conducted at WirelessCar to explore the integration of Large Language Models (LLMs) into real-world code review workflows. The study surfaces persistent challenges in current review practices, such as context switching, reviewer fatigue, inconsistent review depth, and developer perceptions of how LLMs can augment the process. By evaluating two interaction modes (AI-led reviews and on-demand assistance), we found that developers generally value AI-generated summaries and contextual clarifications, particularly in large or unfamiliar pull requests. However, concerns around trust, false positives, response latency, and integration friction remain. While most participants preferred the AI-led mode in unfamiliar or lowrisk scenarios, preferences were context-dependent, with some favoring human-led reviews when code familiarity or criticality increased.
This study contributes practical insights into how LLMs can complement human reviewers, rather than replace them. Our findings suggest a promising path forward: integrating AI assistance more tightly into existing development environments, improving response quality and speed, and offering adaptive interaction modes tailored to developer needs.
# REFERENCES
[1] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat et al., “Gpt-4 technical report,” arXiv preprint arXiv:2303.08774, 2023.
[2] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei et al., “Qwen2.5 technical report,” arXiv preprint arXiv:2412.15115, 2024.
[3] T. Mesnard, C. Hardin, R. Dadashi, S. Bhupatiraju, S. Pathak, L. Sifre, M. Rivie\`re, M. S. Kale, J. Love et al., “Gemma: Open models based on gemini research and technology,” arXiv preprint arXiv:2403.08295, 2024.
[4] X. Hou, Y. Zhao, Y. Liu, Z. Yang, K. Wang, L. Li, X. Luo, D. Lo, J. Grundy, and H. Wang, “Large language models for software engineering: A systematic literature review,” ACM Transactions on Software Engineering and Methodology, vol. 33, no. 8, pp. 1–79, 2024.
[5] R. Tufano, A. Martin-Lopez, A. Tayeb, S. Haiduc, G. Bavota et al., “Deep learning-based code reviews: A paradigm shift or a double-edged sword?” arXiv preprint arXiv:2411.11401, 2024.
[6] H. Y. Lin, P. Thongtanunam, C. Treude, and W. Charoenwet, “Improving automated code reviews: Learning from experience,” in International Conference on Mining Software Repositories (MSR). ACM, 4 2024, pp. 278–283.
[7] M. Vijayvergiya, M. Salawa, I. Budiselic´, D. Zheng, P. Lamblin, M. Ivankovi´c, J. Carin, M. Lewko, J. Andonov, G. Petrovi´c, D. Tarlow, P. Maniatis, and R. Just, “AI-assisted assessment of coding practices in modern code review,” in Proceedings of the 1st ACM International Conference on AI-Powered Software. Association for Computing Machinery, 2024, pp. 85–93.
[8] Z. Rasheed, M. A. Sami, M. Waseem, K.-K. Kemell, X. Wang, A. Nguyen, K. Systa¨, and P. Abrahamsson, “AI-powered code review with LLMs: Early results,” arXiv preprint arXiv:2404.18496, 2024.
[9] A. Alami and N. A. Ernst, “Human and machine: How software engineers perceive and engage with AI-assisted code reviews compared to their peers,” arXiv preprint arXiv:2501.02092, 2025.
[10] R. Tufano, L. Pascarella, M. Tufano, D. Poshyvanyk, and G. Bavota, “Towards automating code review activities,” in 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 2021, pp. 163–174.
[11] L. Huang, W. Yu, W. Ma, W. Zhong, Z. Feng, H. Wang, Q. Chen, W. Peng, X. Feng, B. Qin et al., “A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions,” ACM Transactions on Information Systems, vol. 43, no. 2, pp. 1–55, 2025.
[12] V. Braun and V. C. and, “Using thematic analysis in psychology,” Qualitative Research in Psychology, vol. 3, no. 2, pp. 77–101, 2006.
[13] O. A. Adeoye-Olatunde and N. L. Olenik, “Research and scholarly methods: Semi-structured interviews,” JACCP, vol. 4, no. 10, pp. 1358– 1367, 2021.
[14] I. Etikan, S. A. Musa, R. S. Alkassim et al., “Comparison of convenience sampling and purposive sampling,” American journal of theoretical and applied statistics, vol. 5, no. 1, pp. 1–4, 2016.
[15] G. Guest, A. Bunce, and L. Johnson, “How many interviews are enough?: An experiment with data saturation and variability,” Field Methods, vol. 18, no. 1, pp. 59–82, 2006. | Code reviews are a critical yet time-consuming aspect of modern software
development, increasingly challenged by growing system complexity and the
demand for faster delivery. This paper presents a study conducted at
WirelessCar Sweden AB, combining an exploratory field study of current code
review practices with a field experiment involving two variations of an
LLM-assisted code review tool. The field study identifies key challenges in
traditional code reviews, including frequent context switching, insufficient
contextual information, and highlights both opportunities (e.g., automatic
summarization of complex pull requests) and concerns (e.g., false positives and
trust issues) in using LLMs. In the field experiment, we developed two
prototype variations: one offering LLM-generated reviews upfront and the other
enabling on-demand interaction. Both utilize a semantic search pipeline based
on retrieval-augmented generation to assemble relevant contextual information
for the review, thereby tackling the uncovered challenges. Developers evaluated
both variations in real-world settings: AI-led reviews are overall more
preferred, while still being conditional on the reviewers' familiarity with the
code base, as well as on the severity of the pull request. | [
"cs.SE"
] |
# I. INTRODUCTION
Point Clouds (PC), as one of the most representative forms of data in immersive media, are experiencing increasing demand in various fields, such as autonomous driving [1] and medical imaging [2]. A point cloud consists of a collection of discrete points, each described by its coordinates in 3D space, along with additional attributes such as color and normal vectors. Given the inevitable distortions introduced to point clouds in practical applications, which impact the perceptual quality, research on PCQA has become a hotspot. PCQA can be roughly classified into subjective and objective quality assessment. Subjective quality assessment is considered the most reliable method, involving the invitation of viewers to evaluate the quality of distorted point clouds in a controlled testing environment. Objective quality assessment explores metrics that are strongly correlated with human perceptual quality, aiming to replace subjective evaluations in practical applications and thereby reduce time and costs.
In recent years, advancements in 3D acquisition devices have made VR and AR more accessible than ever. To provide users with more interactive and immersive experiences, DPC has gained significant attention. Unlike static point clouds, DPCs incorporate a temporal dimension, which enables a more realistic representation of 3D environments, simulating the dynamic nature of the real world. However, due to the large volume of data they contain, DPCs require more efficient compression and transmission techniques before practical utilizations. Similar to static point clouds, these processes incur distortion and impact perceived quality. Consequently, Dynamic Point Cloud Quality Assessment (DPCQA) has become an increasingly important research focus in both industry and academia.
TABLE I: PCQA database survey.
Currently, significant progress has been made in Static Point Cloud Quality Assessment (SPCQA), but research on DPCQA remains limited. For comparison, we list existing PCQA databases in Table I. Previous studies have conducted DPCQA evaluation by proposing new benchmarks. For example, vsenseVVDB [7] and vsenseVVDB2 [8] investigate the impact of compression on point clouds. However, these databases have two main drawbacks. 1) Limited Scale. Compared to SPCQA databases, existing DPCQA databases are typically small in scale, regardless of reference or distorted samples. 2) Lack of Distortion Types. These databases focus solely on traditional compression algorithms, overlooking emerging learning-based compression techniques and distortions from other scenarios. The above weaknesses limit the generalizability of these databases, and also hinder the development and validation of objective DPCQA metrics. Especially, the Call for Proposals (CfP) for learning-based DPC compression technology within Moving Picture Experts Group (MPEG) - WG 2 [9] highlights the need for reliable objective DPCQA metrics. Besides, although many high-performing objective SPCQA metrics have been developed, whether they are suitable for DPCs is uncertain.
Fig. 1: The projection of reference samples in DPCD.
In view of the above challenges, to effectively promote the development of DPCQA and relevant algorithms such as compression and transmission of DPCs, we create a largescale DPCQA database named DPCD, which contains rich contents and multiple types of distortion. 15 high-quality reference DPC sequences are selected and seven types of distortion are injected at different levels, resulting in a total of 525 distorted DPCs. To conduct subjective experiments, all samples are rendered into Processed Video Sequences (PVS) and participants are invited to score them in a lab environment to collect MOSs. The diversity of source content, the accuracy of the MOSs, and the influence of different types of distortion are demonstrated. Finally, we evaluate the performance of multiple objective metrics and conduct detailed analysis of results to provide useful insight for future DPCQA research.
# II. DATABASE CONSTRUCTION
# A. Reference Selection and Pre-processing
Reference Selection. Given that the primary application of DPCs is social communication within extended reality, we choose human DPCs for this study. To effectively advance the development of standard compression algorithms, we utilize DPC sequences [10], [11] provided by the MPEG and convert some dynamic meshes [12], [13] from MPEG into point clouds using grid sampling with a resolution of 1024, as proposed in [14]. Moreover, we select some DPC sequences from other DPCQA database [8].
In total, 15 human DPCs are chosen as reference samples. Specifically, “longdress”, “loot”, “soldier”, and “redandblack” are from 8i Voxelized Full Body Dataset [10], “dancer”, “model”, “basketball-player”, and “exercise” are from Owlii Dataset [11], “AxeGuy”, “Matis”, and “Rafa2” are from vsenseVVDB2 [8], “mitch”, “thomas”, and “football” are from [12], and “levi” is from [13]. Figure 1 shows snapshots of all the reference DPCs, we list the number of points in the first frame of each reference sample in Table II.
Pre-processing. To ensure consistency in format and eliminate any potential factors that may introduce distortion, the DPCs are preprocessed. For the four DPCs in the Owlii Dataset, each sequence contains 600 frames, from which we select the first 300 frames. For Levi with only 150 frames, we index the reverse order of the original 150 frames as frames 151 through 300 as proposed in [15]. As a result, all processed DPC sequences consist of 300 frames. Additionally, all samples are converted to the UTF-8 encoding format.
TABLE II: Point count in the first frame of the references.
# B. Distortion Generation
To investigate the impact of typical distortion types on perceptual quality during application, we consider seven distortion types. First, we include traditional compression algorithms, as they are standardized by MPEG. Specifically, we choose two patterns of Geometry-based Point Cloud Compression (G-PCC) [16] and one pattern of Video-based Point Cloud Compression (V-PCC) [16]. Additionally, we select DDPCC [17], a learning-based DPC compression method. To simulate distortions arising from factors such as acquisition noise and resampling, we select Color Noise (CN), DownSampling (DS), and Geometry Gaussian Noise (GGN). For each distortion type, we apply three to six different levels, with processing details as follows:
•G-PCC: G-PCC encode point clouds in 3D space using octree or trisoup (triangle soup) methods. Color attributes can be encoded using either Region Adaptive Hierarchical Transform (RAHT) or Predicting/Lifting (PredLift) transform. We employ Octree-RAHT and Trisoup-RAHT for lossy compression. Octree-RAHT applies six distortion levels with Quantization Parameters (QP) of 51, 46, 40, 34, 28, and 22, while TrisoupRAHT uses four levels with QPs of 40, 34, 28, and 22.
V-PCC-C2RA: The V-PCC using Category 2 Random Access (C2RA) mode is applied with five distortion levels, with QPs and occupancy map precision set as described in [16]. Both geometric and color attributes are lossily compressed.
D-DPCC: D-DPCC utilizes sparse convolution for compression. Following previous studies [18], [19], we adjust the Lagrange multiplier $( \lambda )$ to control bitrate. To avoid information leakage, models trained on 8i Dataset are tested on other samples, while those trained on Owlii Dataset are tested on 8i Dataset. Three distortion levels are set with $\lambda$ of 0.1, 1, 10.
•CN: Color noise affects the RGB values of points. We randomly modifies the RGB values of each point according to varying probabilities. Specifically, we randomly select $1 0 \%$ , $3 0 \%$ , $4 0 \%$ , $5 0 \%$ , $6 0 \%$ , and $7 0 \%$ of points in each frame, and noise values of $\pm 1 0$ , $\pm 3 0$ , $\pm 4 0$ , $\pm 5 0$ , $\pm 6 0$ , $\pm 7 0$ are added equally across the RGB channels. For example, for the first distortion level, we randomly select $1 0 \%$ points and modify their RGB values by a value within $\pm 1 0$ . If the modified color value (denoted as $c$ ) exceeds the valid range, we apply clipping: if $c < 0$ , set $c = 0$ ; if $c > 2 5 5$ , set $c = 2 5 5$ .
Fig. 2: The visual effects caused by different distortion types.
DS: Down-sampling is a simple yet effective method to reduce data complexity. We use the Matlab function pcdownsample() to apply six distortion levels, with sampling rates set to 0.85, 0.7, 0.55, 0.4, 0.25, and 0.1.
•GGN: GGN applys a geometry shift to each point in the point cloud. We set six levels, using random gaussian distribution with a mean of 0 and standard deviations of $0 . 0 5 \%$ , $0 . 1 \%$ , $0 . 2 \%$ , $0 . 5 \%$ , $0 . 7 \%$ , and $1 . 2 \%$ of the Bounding Box (BB) coordinates.
We present local results for each distortion type at the maximum distortion level in Figure 2, which indicates that the visual effects caused by different distortion types are distinct.
# C. PVS Generation
DPC sequences can be rendered as 2D videos or presented immersively in 3D scenes using VR devices. While ITU-R BT.500 [20] and ITU-T P.910 [21] provide detailed guidelines for video-based methods, no authoritative standards exist for VR-based subjective experiments, and the interactive nature of VR introduces variability in participants’ viewing experiences. Therefore, we adopt a video-based approach.
With regard to the camera path, research [22] shows that participants focus mainly on human faces, with camera path variations having minimal impact as long as facial features are clearly visible. Therefore, we use a simple rendering method, fixing the camera distance and orienting it toward the front of the human model (Figure 1). DPCs are rendered into $1 0 2 4 \times 1 0 2 4$ images using Open3D and converted into a smooth PVS at 30 fps with FFMPEG’s libx264 standard.
# D. Subjective Experiment
1) Training and Test Session: To ensure the reliability of the collected subjective scores, we divide the database into two parts: training set and test set. The training set consists of 14 samples, which vary in point cloud content, distortion types and levels. The test set, including remaining 511 samples, is further divided into 7 subgroups of 73 samples to avoid the effects of visual fatigue.
Fig. 3: Diversity of the source content and MOS distribution.
The subjective evaluation is carried out using the Double Stimulus Impairment Scale (DSIS), and the 11-grade voting scale proposed by ITU-T P.910 [21] is used. The experiments are conducted on an AOC U27N3G6R4B monitor with resolution of $3 8 4 0 \times 2 1 6 0$ , in an indoor environment under normal lighting conditions. During the test session, each distorted sample is displayed for 10 seconds, followed by 5 seconds for scoring. For participants involved in multiple testing units, sufficient rest time is provided between adjacent units.
A total of 26 participants are recruited for the subjective experiment. For each basic test unit, scores are collected from 22 different participants.
2) Outlier Removal: After collecting the subjective scores, we filter outliers from the raw scores to ensure data accuracy and reliability. Specifically, two consecutive steps are used. In the first step, we exclude outliers based on two “trap” samples in the test set [23]. First, we randomly select one sample from each subgroup to repeat, ensuring that the two PVSs are not played consecutively. Secondly, we insert one sample of extremely poor quality into each subgroup. If the score difference of the duplicated PVSs or the score of the very low quality PVS is higher than 2, the scores collected from this viewer are considered incorrect. In the second step, we apply the outlier detection method recommended in ITUR BT.500 [20]. As a result, five viewers are identified and removed from the subjective scores.
# III. DATABASE VALIDATION
# A. Diversity of Content
To validate the diversity of content, we calculate spatial perceptual information (SI) and temporal perceptual information (TI) [24] to measure geometry and dynamism complexities, respectively. More specifically, 15 reference PVSs are used to calculate SI and TI, and the results are shown in Figure 3a. The relatively uniform distribution of the scatter points indicates the diversity of the source content in DPCD.
# B. MOS Analysis
We present the MOS distribution in Figure 3b, where for each score segment, DPCD has at least 50 samples, indicating that the proposed database covers a wide range of quality scores. It is worth noting that the overall MOS of our database is relatively high, with the majority of scores falling within the range of 6 to 10. This phenomenon may be attributed to several factors. On the one hand, the dynamic properties of the samples may mask the distortions, making them less noticeable. On the other hand, the vividness and realism of the human samples may draw participants’ attention primarily to the human themselves or their motion, rather than the distortion details.
Fig. 4: The MOS distribution under different distortion types.
To validate the accuracy of the MOSs and analyze the impact of different distortions on subjective perception, line graphs of MOSs against distortion types and levels are shown in Figure 4. And from these graphs, we can draw the following conclusions: 1) Each graph reveals a general negative correlation between MOS and distortion level, where individual reversals do not affect the overall trend. 2) The lowest score range for Trisoup-RAHT distortion is higher than that for Octree-RAHT. This is because at higher distortion levels, Octree-RAHT significantly decreases the point number and leads to severe information loss, whereas Trisoup-RAHT retains basic geometry and texture information, albeit with distorted quality. 3) The distortion introduced by V-PCC has a relatively small impact on perceived quality. It is worth noting that, the movement of the “football” cause some points to exceed the coordinate range, resulting in unnatural appearance and lower MOSs. 4) The distortion introduced by D-DPCC compression is limited. In our experiments, we found that when the lambda is reduced to around 0.1, the bitrate stabilizes and no longer decreases, indicating that this is the maximum achievable distortion. And at the highest distortion level, the MOSs for all samples remain relatively high, which shows the potential of learning-based DPC compression methods. 5) Samples disturbed by CN provide higher MOSs. This is mainly because CN does not deform the geometry structure of point clouds, and complex textures usually mask tiny noise [25]. 6) For DS, samples with fewer points than other references (e.g., “AxeGuy,” “Matis,” and “Rafa2”) exhibit a sharp decline in MOS as the distortion level increases. In contrast, slight
DS has little impact on the other samples with a sufficiently high original point count, as they remain relatively dense even after downsampling. This states that as point clouds become absolutely sparse, perceptual quality is significantly affected. 7) All the lines of GGN are closely aligned. The MOSs for all samples decrease significantly as the distortion level increase, indicating that the human visual perception is highly sensitive to geometric shifts.
# IV. OBJECTIVE METRICS TESTING
# A. Metric Selection and Indicators
Considering the lack of research on objective DPCQA, we test the performance of existing objective SPCQA metrics on DPCD, which can be primarily divided into three categories: point-based, image-based and video-based metrics.
We select 9 point-based metrics adopted by MPEG, 10 widely used image-based metrics and 1 video-based metric. For both point-based and image-based metrics, we average the scores across 300 frames for each DPC. Three common indicators are employed to quantify the efficiency of the objective metrics: Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), and Root Mean Square Error (RMSE). To ensure consistency between the value ranges of the predicted scores and MOSs, a nonlinear four-parameter logistic fitting function is used to align their ranges following [26].
# B. Overall Performance
The performance of the metrics on the entire database are shown in the “Overall” columns of Table III. Based on these results, the following conclusions can be drawn: 1) Among the point-based metrics, the two MSE-based P2point approaches yield the best. In comparison, P2plane underperforms, likely due to the errors introduced during estimation of normal vectors. Additionally, normalizing the computation results using bounding boxes and converting them to the corresponding
TABLE III: The performance results of objective metrics. “P” stands for point-based metrics, “I” represents image-based metrics and “V” represents video-based metrics. The symbol ‘–’ indicates that the results of the metric for samples with this kind of distortion are meaningless. Best and second performance results are marked in RED and BLUE.
PSNR values improves performance by standardizing the scale. 2) Among the image-based metrics, DISTS and LPIPS achieve the highest performance. By leveraging networks pretrained on large-scale image datasets, these metrics effectively capture representative features, thereby enhancing their generalizability. 3) The video-based metric VMAF, despite considering temporal information, does not yield superior results. This may be because VMAF primarily focuses on temporal variations in natural scenes, while our database comprises individual human point cloud samples. 4) Despite the inherent information loss in image-based metrics, their performance can rival that of point-based metrics. This can be attributed primarily to the fact that image-based metrics excel at extracting texture information, while point-based metrics tend to focus more on geometry and may not fully exploit multimodal data. 5) All the no-reference metrics report noticeably poorer performance compared to full-reference metrics. The lack of reference samples as a benchmark prevents accurate assessment of distortions, thus limiting the evaluation accuracy.
# C. Analysis by Type of Distortion
For a more comprehensive analysis, we further provide the SRCC results for different types of distortion in Table III. The following conclusions can be derived from these results: 1) The two MSE-based P2point approaches demonstrate the best performance on G-PCC. Since G-PCC typically introduces geometric distortions, P2point metrics, which directly measure the Euclidean distance between corresponding points in the distorted and reference point clouds, are more sensitive to such distortions. 2) P2plane MSE PSNR performs the best on V-PCC, while P2plane MSE performs the best on DDPCC. MSE-based metrics outperform Hausdorff distancebased metrics, as the latter involve maximum pooling, which may cause outliers with large coordinate values in the point cloud to negatively impact the final result. 3) DISTS demonstrates robustness across various distortions and achieves the best results on CN, DS, and GGN, with SRCC values of approximately 0.929, 0.879, and 0.955, respectively, due to its ability to effectively capture both local and global information.
# D. Weakness of Current Metrics
Overall, current metrics exhibit several limitations, which are summarized as follows: 1) For point-based metrics, while MSE-based P2point metrics perform well, they still have room for improvement. Additionally, the high computational complexity makes them impractical for real-world applications. 2) Image and video-based metrics may suffer from information loss during projection, potentially masking original distortions. Moreover, their performance can be influenced by background information, leading to unstable scores across different contents. 3) No approach consistently performs well across all distortion types. Specifically, while P2point is sensitive to traditional compression, it struggles with measuring color distortions. LPIPS and DISTS are effective for CN but perform poorly on traditional compression methods. Moreover, most metrics exhibit inferior performance on the learning-based DPC compression. Traditional point-based metrics, as well as existing image-based and video-based metrics, may overlook the unique characteristics and distortions of DPCs, leading to inaccurate quality prediction on specified distortions. Therefore, there is a strong need for effective objective metrics specifically tailored to DPCs. And our proposed database may facilitate the design of such metrics. | Recently, the advancements in Virtual/Augmented Reality (VR/AR) have driven
the demand for Dynamic Point Clouds (DPC). Unlike static point clouds, DPCs are
capable of capturing temporal changes within objects or scenes, offering a more
accurate simulation of the real world. While significant progress has been made
in the quality assessment research of static point cloud, little study has been
done on Dynamic Point Cloud Quality Assessment (DPCQA), which hinders the
development of quality-oriented applications, such as interframe compression
and transmission in practical scenarios. In this paper, we introduce a
large-scale DPCQA database, named DPCD, which includes 15 reference DPCs and
525 distorted DPCs from seven types of lossy compression and noise distortion.
By rendering these samples to Processed Video Sequences (PVS), a comprehensive
subjective experiment is conducted to obtain Mean Opinion Scores (MOS) from 21
viewers for analysis. The characteristic of contents, impact of various
distortions, and accuracy of MOSs are presented to validate the heterogeneity
and reliability of the proposed database. Furthermore, we evaluate the
performance of several objective metrics on DPCD. The experiment results show
that DPCQA is more challenge than that of static point cloud. The DPCD, which
serves as a catalyst for new research endeavors on DPCQA, is publicly available
at https://huggingface.co/datasets/Olivialyt/DPCD. | [
"cs.CV",
"cs.DB"
] |
# I. INTRODUCTION
IVEN the vast amounts of satellite data coming from multiple constellations, such as Copernicus Sentinel-2, the focus has recently been shifted to Self-Supervised Learning (SSL). Masked Autoencoders [1] and Contrastive Learning [2] aim at reducing the need for large labeled datasets and have led to a growing interest in Foundation Models (FM) for Earth Observation (EO) [4], [5]. FMs are pretrained on satellite data to learn robust feature representations and, then, are fine-tuned on smaller labeled datasets for various downstream tasks. FMs use different architectures for the encoding (and decoding) of the data and features, which lead to different performance results for the downstream tasks, e.g. pixel-wise regression.
In this work, we focus not only on scaling the volume of the pretraining data, but also on scaling the number of parameters and the architecture, studying the transition from CNN-based models to Vision Transformer (ViT) backbones combined with UPerNet decoder. The main contributions of this paper are:
We scale the PhilEO pretraining from the PhilEO Globe 0.5TB dataset to the full MajorTOM 23TB dataset, as well as to the specialized FastTOM 2TB subset (i.e. without oceans and ice), with different number of model parameters and architectures1. We demonstrate that larger pretraining data reduces the Root Mean Squared Error (RMSE), resulting in a percentage improvement $1 2 . 0 \%$ for road density regression for the $n$ -shot $n = 5 0 0$ .
In addition to dataset scaling, we study the scaling of the model parameters and the architecture, comparing CNNand ViT-based models. We evaluate the effectiveness of the scaling-up on the PhilEO Bench, covering road and building density estimation and land cover mapping.
We develop and compare different architectures for the EO Foundation Models in order to select which one to scale-up. For road density regression, we show that the PhilEO ViT UPerNet outperforms the PhilEO ViT with a convolutional decoder in RMSE, with a percentage improvement of $3 4 . 2 \%$ for the $n$ -shot $n = 1 0 0$ . On the same task, the PhilEO Geo-Aware U-Net model achieves the same performance as the PhilEO ViT UPerNet.
The rest of the paper is organized as follows. Section II provides background on the PhilEO Geospatial Foundation Model (GFM) and the PhilEO Bench evaluation, as well as the datasets MajorTOM and FastTOM. Section III describes the proposed PhilEO MajorTOM 23TB and FastTOM 2TB models. Section IV presents the related work. We evaluate our models on the PhilEO Bench in Sec. V for road and building density estimation and land cover mapping. We present the results of the PhilEO MajorTOM 23TB and FastTOM 2TB models, the PhilEO 200M FastTOM, the PhilEO Geo-Aware U-Net, and the PhilEO ViT UPerNet 100M FastTOM.
# II. BACKGROUND
PhilEO GFM. The PhilEO model [3] employs a combination of masked reconstruction and geo-location estimation as pretext tasks to train on the PhilEO Globe 0.5TB S2L2A dataset, which contains only land, excluding oceans and ice. PhilEO Globe also includes four time steps: 4 seasons from the same year. The architecture is a modified U-Net CNN model.
PhilEO Bench. To enable fair evaluation and comparison of GFMs, [3] introduced the PhilEO Bench which includes a global labeled Sentinel-2 dataset for the standardized downstream tasks: road density estimation, building density regression, and land cover mapping. The bench also incorporates several existing GFMs, such as [6] and [7], allowing for consistent comparisons across models.
1We release the code of our model in: http://github.com/ESA-PhiLab/ PhilEO-MajorTOM.
Fig. 1. High-level overview of ViT-UPerNet. The dimensions of the feature map $C _ { 4 }$ are maintained. The other feature maps are upsampled, respectively downsampled. The resulting feature map is used for downstream processing.
MajorTOM and FastTOM datasets. The MajorTOM dataset [15] comprises approximately 60TB of unlabeled global data, covering most of the Earth, including oceans and ice. In this work, we focus on the MajorTOM Core-S2L2A subset, 23TB. FastTOM is a 2TB specialized subset of MajorTOM CoreS2L2A, containing only land and excluding oceans and ice, making it more task-specific for terrestrial downstream tasks.
# III. PROPOSED MODEL
# A. PhilEO MajorTOM 23TB and PhilEO FastTOM 2TB
We extend our previously proposed PhilEO model by pretraining it on the MajorTOM Core-S2L2A dataset [15], which contains 23TB data covering the vast majority of the Earth’s surface, including land, oceans, and ice. To the best of the authors’ knowledge, this is the first instance where the full 23TB MajorTOM dataset is used to pretrain a GFM. The PhilEO MajorTOM model was trained using the Leonardo Davinci-1 Supercomputer, using between four and eight compute nodes, each equipped with four NVIDIA A100 GPUs (40GB VRAM), resulting in a total of 16-32 GPUs. This HPC configuration provided approximately 2.5–5 petaFLOPS of compute performance. Without such infrastructure, training on a single GPU would have taken several months of continuous computation, highlighting the critical role of HPC resources in enabling this work. To handle the scale of the dataset and model, we used PyTorch’s Distributed Data Parallel (DDP) to accelerate training. For larger ViT-based models still ongoing training on MajorTOM, we are transitioning to Fully Sharded
TABLE I COMPARISON OF THE PHILEO MAJORTOM MODEL TO OTHER GFMS WITH RESPECT TO THE FEATURES OF THE PRETRAINING DATASET.
HLS $\mathbf { \Sigma } = \mathbf { \Sigma }$ Harmonised Landsat S-2; fMoW $\mathbf { \Sigma } = \mathbf { \Sigma }$ Functional Map of the World. Here, the MajorTOM dataset includes oceans and ice, while FastTOM and PhilEO Globe do not.
Data Parallel (FSDP) to improve memory efficiency and support larger model scales. In addition, mixed-precision training (BF16/BF32) was used for improved throughput and reduced memory usage. Alongside the MajorTOM pretraining, we also trained a model variant on the FastTOM 2TB subset, which contains only land. FastTOM serves as a proxy for MajorTOM: strong performance on FastTOM is expected to translate into good performance when scaling to the full MajorTOM 23TB dataset. This strategy allows for efficient experimentation and architecture selection before undertaking full-scale pretraining.
# B. PhilEO ViT UPerNet
PhilEO Bench [3] standardizes model comparison by using a common convolutional decoder based on a U-Net-like architecture [8]. In this work, we introduce an alternative decoder strategy by implementing the UPerNet decoder [10], shown in Fig. 1, in the PhilEO Bench, in order to compare models with a ViT backbone. The UPerNet design [10] draws inspiration from human visual perception, combining hierarchical feature extraction through a Feature Pyramid Network (FPN) [21] with global context aggregation via a Pyramid Pooling Module (PPM). The FPN aggregates features across multiple scales through a top-down pathway with lateral connections, while the PPM captures context at various spatial resolutions. ViTUPerNet architectures have been shown to achieve state-ofthe-art performance in various computer vision tasks [11]- [14], including segmentation and classification. For models using a ViT backbone, we extract intermediate feature maps, $\{ C _ { 2 } , C _ { 3 } , C _ { 4 } , C _ { 5 } \}$ in Fig. 1, across different layers. These feature maps are then appropriately resized (upsampled or downsampled) to align spatial dimensions. The highest-level feature map $C _ { 5 }$ is processed by the PPM, which applies pooling operations at multiple scales to capture contextual information at different resolutions. The outputs of the PPM and the FPN are then fused through a series of $1 \times 1$ convolutions and element-wise additions, producing the final set of multi-scale feature maps, $\{ P _ { 2 } , P _ { 3 } , P _ { 4 } , P _ { 5 } \}$ in Fig. 1. These aggregated features are then used for downstream pixel-wise prediction tasks, including regression and segmentation.
Fig. 2. Road density estimation: Evaluation in RMSE at different $n$ -shots, where the best performing model is PhilEO Geo-Aware U-Net. The PhilEO ViT UPerNet black line is below the Geo-Aware U-Net red line. PhilEO ViT UPerNet outperforms ViT CNN and ViT CNN Grouped Channels (GC) [3].
# IV. RELATED WORK
Recent studies on GFMs, summarized in Table I, highlight the increasing potential of learning robust feature representations from large-scale Earth Observation (EO) data, later used for fine-tuning on various downstream tasks. However, most existing studies have been limited by relatively modest dataset sizes and have not explored training GFMs at the scale of more than 20TB. This motivates our work, where we develop and pretrain the PhilEO MajorTOM model on the full 23TB MajorTOM Core-S2L2A dataset. In addition, few-shot learning has gained attention within the EO community, as it enables models to generalize effectively from limited labeled data. Prior work [17], [18] demonstrated the advantages of few-shot learning for land cover classification, showing that models can maintain strong performance even with a small number of examples.
The synergy between Foundation Models and few-shot learning was further explored in [19], showcasing improved segmentation performance through combined training strategies. Additionally, frameworks such as [20] have proposed systematic approaches to low- and few-shot adaptation in the context of Foundation Models.
In addition, to this day, U-Net models are still considered the default benchmark for semantic segmentation, often outperforming new architectures. [16] benchmarked the performance of Swin-UPerNet against a U-Net on semantic segmentation use cases. It was shown that the Swin-UPerNet model is a good competitor for the U-Net.
# V. EVALUATION AND RESULTS
We evaluate the proposed models, comparing different architectures and analyzing the impact of scaling-up pretraining. Our evaluation focuses on the three PhilEO Bench downstream tasks: road density estimation, building density regression, and land cover mapping. Before training the models directly on MajorTOM, we first train them on the $0 . 5 \mathrm { T B }$ PhilEO Global dataset in order to select the best architecture to scale-up.
Fig. 3. Building density regression: Evaluation in RMSE at various $n$ -shots. The PhilEO ViT UPerNet black line is below the Geo-Aware U-Net red line.
Fig. 4. Land cover mapping: $^ n$ -shot evaluation of models in accuracy.
# A. PhilEO ViT UPerNet pretrained on PhilEO Globe 0.5TB
As shown in Fig. 2, the PhilEO ViT UPerNet model consistently outperforms the PhilEO ViT model with a convolutional decoder across all $n$ -shot settings for road density estimation. Notably, the PhilEO ViT UPerNet achieves comparable performance to the PhilEO Geo-Aware U-Net model [3], highlighting the effectiveness of the UPerNet decoder for pixel-wise regression tasks.
In Fig. 3, the PhilEO ViT UPerNet demonstrates improved RMSE performance for building density estimation compared to the convolutional decoder ViT baseline. For instance, in Fig. 3, at $n = 5 0$ , the PhilEO ViT UPerNet achieves a RMSE of 0.08944, compared to 0.2 for the convolutional decoder ViT model, corresponding to an improvement of $5 5 . 2 8 \%$ . At $\textit { n } = \ 1 0 0$ , the ViT UPerNet model further improves to a RMSE of 0.08367, versus 0.1 for the convolutional decoder ViT baseline, leading to a $1 6 . 3 3 \%$ relative gain.
In Fig. 4, the PhilEO ViT UPerNet also outperforms the ViT CNN model across all $n$ -shot experiments for land cover mapping, in accuracy. Moreover, both the PhilEO ViT UPerNet and the Geo-Aware U-Net outperform a fully supervised U-Net model (without pretraining), particularly in low-shot regimes. Overall, the PhilEO Geo-Aware U-Net achieves slightly better performance on average than the PhilEO ViT UPerNet across the three downstream tasks, as shown in Figs. 2-4.
Fig. 5. Road density estimation in RMSE, comparing the models: PhilEO Globe 44M, PhilEO MajorTOM and FastTOM, PhilEO 200M FastTOM, ViT UPerNet 100M FastTOM and PhilEO Globe ViT UPerNet 300M. The models PhilEO MajorTOM and FastTOM outperform at most $n$ -shots PhilEO Globe.
# B. Scaling-up: Evaluation of PhilEO MajorTOM 23TB GFM
According to the results in Section V-A, we scale-up the bestperforming architecture, the PhilEO Geo-Aware U-Net, and we pretrain it on the full 23TB MajorTOM dataset, as well as on the FastTOM 2TB specialized subset. We study the scaling-up to 23TB (and 2TB) data, as well as the number of parameters 200M (versus 44M), and the architecture, i.e. ViT UPerNet (versus Geo-Aware U-Net [3]).
Road density estimation. Fig. 5 shows the RMSE performance for road density estimation across different $n$ -shot settings. For all $n$ -shots, the scaled-up PhilEO MajorTOM 23TB model outperforms the Geo-Aware U-Net model pretrained on the PhilEO Globe $0 . 5 \mathrm { T B }$ dataset. At $n = 1 0 0$ , the PhilEO MajorTOM 23TB model achieves a RMSE of 0.06724, compared to 0.07070 for the model pretrained on the PhilEO Globe $0 . 5 \mathrm { T B }$ dataset. This corresponds to an improvement of $4 . 8 9 \%$ . For $n = 5 0$ and 100, the PhilEO 44M MajorTOM model outperforms the FastTOM and PhilEO Globe pretrained baselines, demonstrating the benefits of large-scale pretraining even when fine-tuning with limited labeled data.
Additionally, we develop and evaluate two larger models, as shown in Fig. 5:
PhilEO 200M FastTOM: a 200M-parameter Geo-Aware U-Net variant pretrained on FastTOM, • ViT 100M FastTOM: a Vision Transformer model (approximately 100M parameters, with depth 32, number of heads 16, and embedding dimension 512) pretrained on FastTOM with a UPerNet decoder utilizing intermediate layers 5, 15, 23, and 31.
Both larger models show strong performance, with PhilEO 200M FastTOM outperforming the others at several $n$ -shots.
Building density regression. As shown in Fig. 6, similar trends are observed for building density estimation. At $n =$ 100 and 500, both the PhilEO 44M MajorTOM and FastTOM models outperform in RMSE the PhilEO Globe 44M baseline.
Fig. 6. Building density in RMSE for PhilEO Globe, PhilEO MajorTOM and FastTOM, PhilEO 200M FastTOM, ViT FastTOM and PhilEO Globe ViT.
Fig. 7. Land cover in accuracy for the different models, at various $n$ -shots.
In addition, the PhilEO 200M FastTOM model again performs best across most $n$ -shot settings.
Land cover mapping. Fig. 7 presents accuracy results for land cover semantic segmentation. Across all $n$ -shot settings, the PhilEO Geo-Aware U-Net, pretrained on the smaller PhilEO Globe $0 . 5 \mathrm { T B }$ dataset, consistently outperforms models pretrained on the PhilEO MajorTOM and FastTOM datasets. A likely explanation for the superiority of PhilEO Globe in land cover is that it includes seasonal variations, i.e. 4 seasons, 3 months separation over 1 year, 4 time steps. For land cover, seasonality is beneficial, i.e. seasonality is more correlated with land cover than with roads or buildings: PhilEO Globe includes 4 time steps (i.e. 1 year), rather than MajorTOM, one random time step [15]. Also, it is important to note that PhilEO Globe and FastTOM contain only land, while MajorTOM includes a broader variety of scenes, such as land, oceans, ice, forests, and deserts. Since our downstream tasks focus solely on land, the models pretrained on PhilEO Globe and FastTOM benefit from prior knowledge that aligns with the evaluation setting, while MajorTOM-pretrained models do not.
To better understand the impact of specialized versus general-purpose pretraining, we compare the performance of models trained on specialized land-only datasets (PhilEO Globe $0 . 5 \mathrm { T B }$ and FastTOM 2TB) with the performance of models trained on a large general-purpose dataset (MajorTOM 23TB) which includes oceans and ice. Given that all models share the same Geo-Aware U-Net architecture (approximately 44M parameters) and that the smaller datasets use a land mask to exclude oceans and ice, we expect models trained on
Land Cover Classification on MajorTOM,in Precision per Class
Land Cover Classification on FastTOM,in Precision per Class
Fig. 8. Land cover mapping: PhilEO MajorTOM model precision per class.
Fig. 9. Land cover mapping: PhilEO FastTOM model precision per class.
PhilEO Globe and FastTOM to perform better in downstream land-focused tasks. Indeed, in most cases, this expectation holds. However, for the road density estimation task under low-shot (few-shot) learning settings, the model pretrained on MajorTOM surprisingly outperforms the specialized models in terms of RMSE. This suggests that despite the broader and noisier pretraining data, large-scale general pretraining can be advantageous under certain conditions.
This study allows us to analyze two key factors:
Specialization effect: Does pretraining on land-only images give an advantage when downstream tasks are also land-focused? • Scaling effect: How does increasing the pretraining dataset size from $0 . 5 \mathrm { T B }$ and 2TB (specialized) to 23TB (general-purpose) impact downstream performance?
We study how GFMs scale and explore how the PhilEO models behave, their scaling behavior. Pretraining on MajorTOM represents a truly general-purpose approach, aligning with the concept of a GFM for Earth observation, rather than a Land-only model. GFMs pretrained on general-purpose satellite data, and not only on land, are Earth Observation models and not only Land Observation models (or Ocean Observation models). In contrast, models like SeCo [24] focus on land observations, while others, like HydroFM [25] (Table I), specialize in oceans. Our results show that although specialized pretraining offers an initial advantage, scalingup the pretraining on diverse general-purpose datasets, when scaling-up is performed appropriately, can eventually lead to GFMs that outperform specialized models, even in tasks where prior domain-specific knowledge initially seemed crucial.
Per-class results for land cover mapping. We evaluate and examine the precision metrics for each class across different $n$ -shots settings in Fig. 8, focusing on the PhilEO 44M MajorTOM 23TB model’s performance on the PhilEO Bench land cover semantic segmentation task, which includes 11 classes based on the ESA WorldCover classification scheme. Similarly, Fig. 9 presents the class-wise precision metrics for the PhilEO 44M FastTOM 2TB model on the same land cover semantic segmentation task. The results from both figures demonstrate that precision generally improves across most classes as the number of $n$ -shot examples increases. | Today, Earth Observation (EO) satellites generate massive volumes of data,
with the Copernicus Sentinel-2 constellation alone producing approximately
1.6TB per day. To fully exploit this information, it is essential to pretrain
EO Foundation Models (FMs) on large unlabeled datasets, enabling efficient
fine-tuning for several different downstream tasks with minimal labeled data.
In this work, we present the scaling-up of our recently proposed EO Foundation
Model, PhilEO Geo-Aware U-Net, on the unlabeled 23TB dataset MajorTOM, which
covers the vast majority of the Earth's surface, as well as on the specialized
subset FastTOM 2TB that does not include oceans and ice. We develop and study
various PhilEO model variants with different numbers of parameters and
architectures. Finally, we fine-tune the models on the PhilEO Bench for road
density estimation, building density pixel-wise regression, and land cover
semantic segmentation, and we evaluate the performance. Our results demonstrate
that for all n-shots for road density regression, the PhilEO 44M MajorTOM 23TB
model outperforms PhilEO Globe 0.5TB 44M. We also show that for most n-shots
for road density estimation and building density regression, PhilEO 200M
FastTOM outperforms all the other models. The effectiveness of both dataset and
model scaling is validated using the PhilEO Bench. We also study the impact of
architecture scaling, transitioning from U-Net Convolutional Neural Networks
(CNN) to Vision Transformers (ViT). | [
"cs.CV"
] |
# 1 Introduction
360 cameras, a.k.a, panoramic cameras, provide an ultra-large field of view (FoV) of $3 6 0 \times 1 8 0$ for the surrounding environment. Therefore, 360 video enables comprehensive situational awareness that surpasses the FoV limitations of perspective 2D cameras. This makes 360 camera-based scene understanding popular and crucial for applications such as autonomous driving [1, 2, 3, 4, 5], robotics [6, 7], and virtual reality [8, 9]. A commonly used representation for 360 videos is the equirectangular projection (ERP), which maps the spherical content onto a 2D rectangular plane to ensure compatibility with the standard imaging pipeline. However, ERP poses several challenges specific to 360 video, including projection distortions in polar regions, and horizontal discontinuities [5] that break content continuity across the left and right borders. These challenges significantly increase the cost and complexity of manual annotation for 360 videos.
Although several 360 video benchmarks [5, 10, 11] have been proposed for scene understanding tasks, such as segmentation and tracking, the scale and diversity of these datasets remain limited by far in the community, especially with the recent emergence of foundation models [12]. For segmentation, 360VOS [10] contains 290 panoramic sequences annotated across 62 categories, while PanoVOS [5] provides 150 high-resolution videos with instance masks. For the tracking task, 360VOT [13] focuses on single-object tracking with 120 omnidirectional videos covering 32 object types, and QuadTrack [11] introduces a small-scale multi-object tracking benchmark under non-uniform motion. However, the modest size and task-specific design of these datasets limit their ability to support large-scale, generalizable model training. In contrast, recent 2D video benchmarks such as YouTubeVOS [14], with over 3,400 videos and 540K segmentation annotations, and SA-V [12], with 50.9K videos and over 640K masklets, have demonstrated the value of dense, large-scale annotation for training foundation models.
This disparity raises a key scientific question: Can we build a large-scale 360 video dataset with rich annotations for both segmentation and tracking tasks, while substantially reducing the human labeling cost? In this paper, we present Leader360V (Sec. 3.1), the first largescale $( 1 0 \mathrm { K } + )$ , real-world 360 video dataset with dense, frame-level annotations for scene understanding tasks across segmentation and tracking. Leader360V covers 198 object types and covers a wide variety of scenes, including both indoor and outdoor environments, as shown in Fig. 2. Leader360V is constructed by integrating existing public datasets with our self-collected 360 videos captured in diverse real-world environments, yielding a scalable and representative benchmark for panoramic understanding.
To enable the construction of Leader360V, we also propose $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ (Automatic Annotate Any 360 Video)(Sec. 3.2), a novel annotation pipeline tailored for 360 videos. $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ is designed to reduce manual labeling burden while
Figure 2: Samples from different scenarios of the Leader360V dataset.
maintaining high annotation quality through a three-phase pipeline: Initial Annotation Phase(Sec. 3.2.1): We first extract keyframes and use multiple 2D segmentors (e.g., CropFormer [15], OneFormer [16]) to generate semantic and instance segmentation proposals. These outputs are unified and aligned via LLM-based semantic matching, then refined through a Semantic- and Distortionaware Refinement (SDR) Module that leverages SAM2 to produce high-quality panoramic masks. Auto-Refine Annotation Phase(Sec. 3.2.2): For subsequent keyframes in the video, we iteratively propagate annotations and identify low-quality regions based on mask coverage. Frames failing coverage thresholds are reprocessed using a GPT-guided Motion-Continuity Refinement (MCR)
Table 1: Comparison of 360 video datasets on segmentation (VOS) and tracking (VOT). “Mobile”: videos shot with motion. “Still”: videos shot without any motion. “Vehicle”: videos shot on vehicles. “Human”: videos shot by humans while walking or running. “Attr”: characteristics of tracking (Single-Object and Multi-Object Tracking) and segmentation (Partial Frame and Whole Frame Segmentation). “Auto”: no human involvement except revise. “Manu”: no assistant model involvement.
module, which resolves annotation inconsistencies across left-right ERP and recovers missing masks caused by occlusion or distortion. Manual Revise Phase(Sec. 3.2.3): Finally, human annotators validate and correct the outputs from the previous stages. Multi-annotator review ensures consistency and completeness across frames, producing the final high-quality annotations.
Extensive validation confirms the effectiveness of our pipeline. User studies show that $\mathbf { A } ^ { 3 } 3 6 0 \mathbf { V }$ significantly reduces annotator workload while preserving annotation quality. Experiments on standard 360 video segmentation and tracking benchmarks demonstrate that Leader360V enhances model performance, paving the way for robust, scalable, and generalizable 360 video understanding.
In summary, our contributions are three-fold: (I) We propose Leader360V, the first large-scale $( 1 0 \mathrm { K } + )$ , labeled real-world 360 video dataset specifically designed for instance segmentation and tracking in diverse and dynamic environments. $\mathbf { \Pi } ( \mathbf { I I } )$ We also propose $\mathbf { A } ^ { 3 } \mathbf { 3 6 0 V }$ (Automatic Annotate Any 360 Video) pipeline, which integrates pre-trained 2D segmentors with large language models to automate the annotation process and significantly reduce human effort without compromising label quality. (III) Extensive user studies and experimental results validate the effectiveness of our Lead360V and proposed pipeline and highlight the potential of Leader360V to advance robust 360 video understanding.
# 2 Related Works
Video-based panoramic datasets for object tracking and segmentation. 360 video, with its omnidirectional coverage, offers advantages over conventional 2D video, such as a broader field of view, richer spatial context, and greater understanding of the continuous scene. These benefits have led to the development of various 360 video datasets across different tasks, including object tracking [13, 11, 17], and segmentation [5, 10, 18]. Object tracking in 360 videos has been explored through single-object and multi-object tracking benchmarks. For instance, 360VOT [13] provides the first dataset for omnidirectional single-object tracking, while QuadTrack [11] captures non-uniform motion using a quadruped robot to establish a multi-object tracking challenge. Segmentation, which is more annotation-intensive, demands pixel-level masks and is mainly represented by datasets focused on instance and panoptic segmentation, such as PanoVOS [5] and 360VOS [10]. These datasets help address 360-specific challenges such as distortion and content continuity. However, most existing datasets remain limited in scale and task diversity, restricting their ability to support robust and generalizable learning. To this end, we introduce Leader360V, a large-scale 360 video dataset constructed by integrating publicly available resources and newly self-collected videos, enhanced by an automatic annotation pipeline for multi-task learning. Detailed comparison is shown in Tab. 1.
Automated Annotation Frameworks for Scalable Dataset Construction. As large-scale video datasets continue to grow, the demand for efficient annotation has led to the emergence of semiautomatic and automatic pipelines aimed at reducing manual labeling costs. For the annotation of 360 video, it is a complex task that necessitates specialized attention due to its unique characteristics, such as severe distortion, a wide field of view, and discontinuous context across panoramic borders. 360 video annotation methods such as 360Rank [19] and PanoVOS [5] adopt semi-supervised pipelines using pre-trained segmentors and keyframe propagation, but still rely heavily on manual mask drawing and semantic labeling, limiting scalability in 360 settings. However, these methods are not essentially different from 2D video annotation strategies [12, 20, 21] and do not take into account the special characteristics of 360 videos. To address these gaps, we learn from recent automatic annotation systems [22, 23], which have incorporated large language models (LLMs) to further reduce human involvement. We propose $A ^ { 3 } 3 6 0 \dot { V } ,$ a unified annotation framework tailored for 360 videos. By integrating LLMs for semantic role assignment and pre-trained 2D segmentors for initial mask generation, $A ^ { 3 } 3 6 0 V$ enables scalable segmentation and tracking from keyframes to full video sequences under omnidirectional conditions.
Large-Scale 2D Video Segmentation and Tracking Datasets. Compared to the challenges faced in constructing large-scale 360 video datasets, the field of 2D video understanding has witnessed the emergence of numerous large-scale datasets for segmentation and tracking tasks. YouTube-VOS [14], LVOS [20], MeViS [24], VIPSeg [21], SA-V [12] for segmentation task and TrackingNet [25], LaSOT [26], TAO [27] for tracking task have a large base or long shots of a single video, the largest of which can exceed 50K, and a single video can exceed 7 hours. Motivated by the gap between the rapid expansion of 2D video resources and the limited availability of large-scale 360 video datasets, we introduce Leader360V, a richly annotated 360 video dataset designed for segmentation and tracking tasks.
Higher Diversity in Scenarios, Especially Cityscape. Existing 360 video datasets provide limited coverage of cityscape scenarios. This gap hinders the development of practical 360VOTS applications in real-world urban environments. Therefore, we prioritize the inclusion of a wide range of urban environments in Leader360V, capturing variations in architectural styles, traffic conditions, and other dynamic elements. Our Leader360V is also rich in categories, as shown in Fig. 3
Table 2: Our Data Source. “Pct”: percentage of selected data. “Sel”: specific number of selected data. VG: Video Generation. VC: Video Caption
# 3 Methodology
# 3.1 The Leader360V Dataset
To address the scarcity of large-scale 360 video datasets, we present Leader360V—the first realworld dataset of this scale with diverse scene dynamics and comprehensive annotations for instance segmentation (Leader360V-S) and tracking (Leader360V-T).
# 3.1.1 Date Source Analysis
The Leader360V dataset includes videos collected from existing 360 video datasets, such as 360VOTS [10], PanoVOS [5], etc. The specific information is shown in Tab. 10. To address the limited scene diversity in prior datasets, we additionally collect new videos and relabel existing videos. Departing from the collection protocols used in previous 360 video datasets, our self-collected videos exhibit three distinctive properties, as described below.
Richer Data Acquisition Methods. We employ a variety of recording techniques to capture diverse camera motion patterns, including static camera setups, handheld recordings by moving photographer, and vehicle-based capture. In contrast to previous datasets that rely on limited recording methods, our approach enriches the diversity of 360 videos by simulating a wider range of real-world camera movements, as shown in Tab. 1.
More Various Perspectives. We collect data from multiple viewpoints and perspectives within each scenario. For example, in vehicle-based videos, we include footage from both the roof and the side of the car. This multi-perspective collection is often ignored by previous works.
Figure 3: Category distribution of Leader360V dataset.
360 Video "" EntitySegmentor Entities Results %\$ VOT Model & SAM ! 大 SAM2 Cropformer )\$% + EdgeTAM Initial Annotation Phase \$\$ Padding frame \$% E-SAM SAM2MOT … O PanopticSegmentor &\$ Semantic- '\$ Sliding window patches Mask2Former and 0 p OneFormer &&" RefMinoedumlent Distortion- Video labeling 益品 anno(tation irnectohgenpizreovoibdjedc timofagbeo.uYnoduirng box Q tahnes fwoellroswhionugldtecmopmlpaltet:ely match Label|Thing/Stuff|” Panoptic Results &'" + Text prompt Semantic label checker & Motion-continuity RefineLmeftn/tRiMghotdule , Entity T Auto-Refine Annotation Phase border mask” )\*.,\$ × 峰 \$\* \$- (/\*01 Cover rate > ! 一 ("\*(,"\$ ("(" ("( “Existing L VOT 1 mask” “No, it’s a new Model :p“lAesasaeptreollfemssei ownhaeltmhearskthaencnuortraetnotr, ("\*(" )") Object retriever mask” blank mask appears on the left or tohbejetc-t1tfhratmhea,soralarenaedwy aopbjpecatr?e”d in ("\*(+"\$ “New mask” D DiSsetomratinotinc--aawnadre #) Blank area checker )") \$\* Refinement Module Refined (/01 annotation +\$#( FiFrsatmfrea tmae ,ot#atEiontni $F _ { t } ^ { l }$ mSatsitk -e3d p3artdcph n,o!ptBilcarneksumltassk 炸虾所 \*2 $\boldsymbol { M } _ { t - 1 } ^ { a }$ Existing mask in t-1 frame True False $Y _ { r e f }$ Annotation checker Modification Annotator Final video (3"(.- - Pick $\cdot$ Iteration $\textcircled{5}$ Stitch $\approx$ Cut On-off Comments Refine annotation
# 3.1.2 Pre-Processing
All videos in our Leader360V, whether sourced from existing
datasets or self-collected, underwent a standardized pre-processing stage to ensure consistency and quality within the Leader360V dataset. This process included video resizing $( 2 0 4 8 \times 1 0 2 4 )$ , video clipping, face anonymization, and other privacy-preserving operations. Additionally, we removed biased videos and balanced the distribution of different scenarios. Videos shorter than 5 seconds were excluded, and videos exceeding 30 seconds were clipped to a maximum duration of 30 seconds. The resulting videos range from 10 to 20 seconds in length, with an average duration of 15 seconds. To protect the privacy of both camera operators and passersby, we anonymized the videos by detecting faces in each frame and applying blurring filters, following a procedure similar to that described in [8].
# 3.2 Automatic Annotation Pipeline
Due to the inherently large field of view (FoV), severe geometric distortion, and content discontinuities, annotating 360 video becomes particularly challenging and labor-intensive. To alleviate the burden on human annotators, we propose the Automatic Annotate Any 360 Video $( \mathrm { A } ^ { 3 } 3 6 0 \mathrm { V } )$ pipeline, as shown in Fig. 4, which efficiently integrates pre-trained 2D segmentors and large language models (LLMs) to streamline the labeling process. $\mathrm { A } ^ { \mathrm { 3 } } 3 6 0 \mathrm { V }$ operates through a three-stage pipeline: Initial Annotation Phase (Sec. 3.2.1), Auto-Refine Annotation Phase (Sec. 3.2.2), and Manual Revise Phase (Sec. 3.2.3), which will be introduced in detail below.
# 3.2.1 Initial Annotation Phase
In the Initial Annotation Phase, given a 360 video $\nu _ { i }$ , $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ begins by selecting the first frame $F _ { 1 }$ as the starting point for annotation. To mitigate the issue of horizontal content discontinuity caused by ERP—particularly at the left and right image borders, we first apply horizontal padding to $F _ { 1 }$ , resulting in an extended frame denoted as $F _ { 1 } ^ { p }$ . The $F _ { 1 } ^ { p }$ is then divided into a series of overlapping patches using a horizontal sliding window. Each of these patches is subsequently processed by a diverse set of pre-trained segmentors, which we categorize into two groups: the first group comprises entity segmentors (e.g., SAM [30], CropFormer [15], E-SAM [31]), which produce class-agnostic instance-level masks capturing perceptual entities without relying on predefined taxonomies. We denote their output on frame $F _ { 1 }$ as $\bar { \mathcal { E } } _ { 1 } ^ { i }$ . The second group consists of panoptic segmentors (e.g., Mask2Former [32, 33], OneFormer [16], OMG-Seg [34]), each trained on different datasets to generate class-aware predictions. We denote one model’s output on frame $F _ { 1 }$ as $\mathcal { P } _ { 1 } ^ { i }$ . These models produce segmentation results aligned with various large label spaces (e.g., COCO [35], ADE20K [36], and Cityscapes [37]), enriching the annotation pool with complementary semantic categories.
Figure 5: Illustration of the process of SDR Module.
To address the object distortion int :r“Youdareuachelepfudl sbistayn toERP and unify t chateg rysfreommtheathrneetciurrcenst across heterogeneous predictions from multiple segmentors, we propose the Semanticand Distortion-aware Refinement (SDR) Module, as illustrated in Fig. 5. This module plays a central role in the Initial Annotation Phase by consolidating out
puts from both entity and panoptic segmentors into a coherent, distortion-aware annotation for the first frame $F _ { 1 }$ . While the framework supports an arbitrary number of panoptic segmentors, we illustrate our approach using three representative models in this paper. Based on the overlapping patch divisions, the SDR module first aggregates patch-wise predictions from the 2D segmentors into full-frame segmentation maps, denoted as $\mathbf { \bar { \mathcal { E } } } _ { 1 } ^ { i }$ , $\bar { \mathcal { P } } _ { 1 } ^ { i }$ , $\mathcal { P } _ { 2 } ^ { i }$ , and $\bar { \mathcal { P } } _ { 3 } ^ { i }$ , using a window-stitching operation defined in Eq. (1).
$$
\mathrm { M a t c h } ( M _ { k } , M _ { l } ) = \left\{ \begin{array} { l l } { { 1 , } } & { { \mathrm { i f } \mathrm { I o U } ( M _ { k } , M _ { l } ) > \tau , } } \\ { { 0 , } } & { { \mathrm { o t h e r w i s e } } } \end{array} \right.
$$
where $M _ { k }$ and $M _ { l }$ denote instance masks predicted from overlapping regions of different patches, and $\tau$ is a predefined threshold to determine whether two masks represent the same object.
To resolve class labeling inconsistency, we incorporate a large language model (LLM)-based semantic label checker within SDR. For each entity mask proposal from $\mathcal { E } _ { 1 } ^ { i }$ , the pipeline retrieves corresponding label candidates from $\mathcal { P } _ { 1 } ^ { i }$ , $\mathcal { P } _ { 2 } ^ { i }$ , and $\mathcal { P } _ { 3 } ^ { i }$ , and feeds them into the semantic label checker via a structured prompt $T _ { s }$ . The semantic label checker selects the most semantically appropriate label, yielding a harmonized set of final semantic labels for all entities. To obtain distortion-aware masks, we leverage the robustness of video foundation models by feeding each $M _ { 1 } ^ { p }$ as a mask prompt into the model in an iterative manner. Taking SAM2 [12] as an example, we first input $M _ { 1 } ^ { p }$ to obtain a coarse prediction $\boldsymbol { M } _ { T } ^ { p }$ . To improve its reliability under 360 distortion, we perform Mask Prompt Shifting by applying spatial shifts to $M _ { T } ^ { p }$ and refeeding the shifted masks into SAM2. Since SAM2 returns a single mask per query, this process yields a set of candidate masks $\mathcal { M } _ { T } ^ { p } = \{ M _ { T } ^ { p , \delta } \mid \delta \in \mathcal { D } \}$ . We then select the most frequently returned result as the final refined mask $\bar { M } _ { T } ^ { p }$ :
$$
\bar { M } _ { T } ^ { p } = \underset { M \in \mathcal { M } _ { T } ^ { p } } { \arg \operatorname* { m a x } } \ \sum _ { \delta \in \mathcal { D } } \mathbb { I } \left[ \mathrm { I o U } \left( M , \mathrm { S A M 2 } ( \mathrm { S h i f t } ( M _ { 1 } ^ { p } , \delta ) ) \right) > \tau \right] ,
$$
$\mathcal { D }$ denotes the set of shift directions, $\mathbb { I } [ \cdot ]$ is the indicator function, and $\tau$ is the IoU threshold for mask consistency. The ${ \bar { M } } _ { T } ^ { p }$ is used to track the entity across subsequent frames, generating a sequence of annotations $\mathcal { L } _ { 1 } ^ { p }$ , which, combined with LLM-verified labels, form the initial video annotation $Y _ { \mathrm { i n i } }$ .
# 3.2.2 Auto-Refine Annotation Phase
Given the initial annotated frame $Y _ { \mathrm { i n i } }$ produced in the previous stage, the Auto-Refine Annotation Phase aims to propagate and correct annotations across the remaining frames in the 360 video $\nu _ { i }$ . This stage iteratively processes each frame $F _ { t }$ using a coverage-guided strategy and performs dynamic refinement for missing or misaligned regions. At each timestamp $t$ , we evaluate the coverage rate of the current annotation $Y _ { \mathrm { i n i } } ^ { t }$ against a predefined threshold $\rho$ . If the coverage is sufficient (i.e., coverage rate $> \rho ,$ ), we accept $\bar { Y } _ { \mathrm { i n i } } ^ { t }$ and use it to generate the initial annotation for the next frame, $Y _ { \mathrm { i n i } } ^ { t + 1 }$ . Otherwise, the Motion-Continuity Refinement (MCR) Module is triggered to improve the annotation quality before propagation.
8
You‘re a panoramic image segmentation check. Please evaluate the segmentation mask I provide.
Here is a panoramic image and its colored masks.
Please score the segmentation quality $( 1 \tilde { \mathbf { \Gamma } } ^ { 1 0 ) }$ and tell me where the bad segmentation and unsegment parts are located (e.g. equator, south/north pole).
I would rate the segmentation quality at a 7 out of 10. Here's the location of bad segmentation:
The left and right portions of the image where the hands are located are not well-segmented, with some overlap and unclear delineation of the objects. The segmentation around the bag and hand in the bottom left and right is not accurate, missing some details. The segmentation for the grass in the bottom right is not accurate. The transition areas between different segments, such as between the grass and water, have some inaccuracies and could be more precise. Overall, the segmentation captures the general layout of the scene but misses finer details and accuracy in the peripheral areas.
Figure 6: Visualization of an example of the feedback provided by the annotation checker, who scores the annotation and points out where bad annotations lie.
To identify unannotated areas in the current frame, we employ an LLM-based agent, referred to as the Blank Area Checker, which is guided by a task-specific text prompt $T _ { B }$ . The prompt instructs the agent to infer the nature of each blank region. Based on this semantic inquiry, the blank region is classified into one of three types: 1. Left/Right Border Mask: If the blank region lies near the left or right boundary of $Y _ { \mathrm { i n i } } ^ { t }$ , we crop and horizontally stitch the current frame $F _ { t }$ to form a complete view of the context $F _ { t } ^ { l }$ . This operation addresses the content discontinuities inherent in ERP, allowing entity segmentors to reprocess the region with improved spatial continuity. 2. Existing Mask: If the blank mask $M _ { B } ^ { i }$ corresponds to a previously annotated object $M _ { t - 1 } ^ { a }$ , we invoke an LLM-based agent, referred to as the Object Retriever, to search for a matching mask within the prior frame’s annotation $Y _ { \mathrm { i n i } } ^ { t - 1 }$ . If a match is successfully retrieved, the blank region inherits the same semantic label. Otherwise, it is reclassified as a new mask. 3. New Mask: If the area represents a newly emerged object not seen in earlier frames, we treat the $M _ { B } ^ { i }$ as a novel instance and re-enter it, together with the current frame $F _ { t }$ , into the SDR module. This process yields a refined entity segmentation and assigns an accurate semantic label. After resolving all incomplete regions, the annotated frame is updated to $Y _ { \mathrm { r e f } } ^ { t }$ . This refined annotation is then passed to a VOT Model (e.g., SAM2 [12], EdgeTAM [38], SAM2MOT[39]) for temporal smoothing and consistency adjustment. The final result is appended to the refined annotation set $Y _ { \mathrm { r e f } }$ , which accumulates high-quality annotations.
# 3.2.3 Manual Revise Phase
Although the Auto-Refine phase significantly reduces the need for human intervention, ensuring high-quality and consistent annotations across the entire video $\nu _ { i }$ still requires a final verification step. In this stage, we introduce an LLM-based agent, referred to as Annotation Checker, which analyzes the refined annotation $Y _ { \mathrm { r e f } }$ and generates natural language modification suggestions, denoted as $C _ { M }$ . These comments highlight potential issues in spatial consistency, class accuracy, or temporal coherence, as shown in Fig. 6. A group of human annotators then reviews and edits $Y _ { \mathrm { r e f } }$ based on the LLM-generated feedback $C _ { M }$ , making targeted refinements rather than re-annotating from scratch. This human-in-the-loop revision process results in the final high-fidelity annotation set, denoted as Yfinal.
# 4 Experiment
# 4.1 Implementation details
Auto-Annotation Settings. During the dataset construction, we employ CropFormer [15] as the entity segmentation model and OneFormer [16] as the panoptic segmentation model. Furthermore, GPT-4o [40] is incorporated as an LLM to function as a checker for semantic labels, blank areas, and annotations.
Table 3: Evaluation on samples of Leader360V for SAM [30] -based methods.
Evaluation Subset. We selected 500 videos as our sample dataset, ensuring that the distribution of scenarios and categories was similar to that of the entire dataset. Inspired by 360VOTS [10] and Pano ${ \mathrm { V O S } } ^ { * }$ [5], we divided the 500 videos into a training set (250), a validation set (125), and a test set (125). For the validation set and test set, $66 \%$ of the clips are clipped from the original train set videos as val and test sets, and the rest are used as the train set. The remaining clips in the validation and test sets were selected from new and unseen scenarios.
Evaluation Metric. For VOS task, we choose region accuracy $( \mathcal { I } )$ , boundary accuracy $( \mathcal { F } )$ , and combined average $( \mathcal { I } \& \mathcal { F } )$ as evaluation metrics, following the standard protocol [43, 5]. For the VOT task, we utilize metrics of dual success $( S _ { d u a l } )$ and dual precision $( P _ { d u a l } )$ , following 360VOTS [10].
# 4.2 Comparison Result Analysis
Results via SAM-based Model. Inspired by [5], we assess various SAM [30] versions on our Leader-360V test set, as shown in Tab. 3. Due to the domain gap between 2D and 360 images, PerSAM [41] shows poor performance. Similarly, SAM-PT [42], a SAM-based VOS model, also delivers unsatisfactory results. Additionally, GoodSAM [4], a 360 image segmentation model, is evaluated and yields disappointing outcomes. These results highlight the need for further exploration to bridge the domain gap and improve tracking performance for 360 videos.
Results of VOS Task. We demonstrate the effectiveness of the Leader360V dataset for the VOS task in Tab. 4. While traditional 2D models show unsatisfactory performance on 360 video (e.g., XMem at 42.4 in terms of $\mathcal { I } \& \mathcal { F } )$ , PSCFormer, trained specifically on our train subset, exhibits significant improvement ( $\mathbf { + 3 6 . 3 }$ for $\mathcal { I } \& \mathcal { F } )$ . This highlights the necessity of Leader360V for the 360VOS task.
Table 4: Qualitative comparison between VOS mod-Table 5: Qualitative comparison between VOT models for 2D video and 360 video on samples of the els for 2D video and 360 video on samples of the Leader360V dataset. Leader360V dataset.
Results of VOT Task. Tab. 5 presents comparison results among several trackers for both 2D and 360 tasks. Based on the quantitative results, it is evident that our dataset significantly enhances the performance of the tracker. The performance of 2D tracker SimTrack [49], which is originally designed for 2D video tasks, is obviously improved, $\mathbf { + 1 2 . 6 }$ for $S _ { d u a l }$ and $\mathbf { + 1 1 . 7 }$ for $P _ { d u a l }$ . However, the performance of the popular 360 model [50] on our dataset does not meet our expectations.
Results of domain transfer evaluations. Tab. 6 illustrates the domain transfer results of state-ofthe-art video object segmentation (VOS) models from conventional planar-domain datasets (e.g., YouTubeVOS[51]) to our panoramic Leader360V benchmark. We observe a significant performance degradation across all methods when directly applying models trained on YouTubeVOS to 360 video content. Specifically, the combined region and boundary accuracy metric $\mathcal { I } \& \mathcal { F }$ consistently drops by over 10–18 points. This highlights the substantial domain gap between conventional narrow field-of-view videos and equirectangular panoramic inputs, which introduce geometric distortion, boundary discontinuities, and changes in object appearance and motion patterns. However, after fine-tuning these models on the training split of Leader360V, we observe a notable recovery in performance. The $\mathcal { I } \& \mathcal { F }$ scores increase by 10–14 points across models, demonstrating that VOS methods can adapt effectively to the challenges of 360 scenes when provided with appropriate training data. Notably, methods like XMem [44] and $\scriptstyle \mathrm { X M e m + + }$ [54] exhibit strong adaptability, suggesting that memory-based and transformer-based architectures may be better suited for handling panoramic temporal dynamics. These results emphasize the importance of domain-specific training and the role of a diverse and semantically rich dataset like Leader360V in bridging the generalization gap. Additionally, the improvement indicates that label distribution mismatch—where many rare or scene-specific categories are underrepresented in planar datasets—is another contributing factor, and Leader360V’s curated taxonomy helps alleviate this issue.
Table 6: Evaluation of domain transfer (from YouTubeVOS [51] to our Leader360V Test).
Table 7: Evaluation of domain transfer (from TrackingNet [55] to our Leader360V Test).
Tab. 7 presents the domain adaptation analysis for visual object tracking (VOT) models under similar transfer settings. Interestingly, while VOS models demonstrate reasonable recovery after fine-tuning, VOT models suffer even more pronounced performance degradation when evaluated directly on Leader360V. Performance increases of 8–14 points post-finetuning still fall short of the baseline accuracy on planar video benchmarks. This suggests that object tracking in 360 video is inherently more challenging due to compounded issues such as viewpoint wrap-around, severe distortion at the poles, and the loss of consistent object appearance across time.
These findings underscore one key insight: Leader360V serves as not only a benchmark but also an effective training resource that improves model generalization and robustness under 360 conditions. Together, these results validate our motivation to construct Leader360V and demonstrate its value in promoting research into panoramic video understanding tasks.
# 4.3 Ablation Study
Effectiveness of Phase I. In Tab. 8, we sampled 100 frames from the dataset that necessitate auto-refinement, using the final annotations as the ground truth benchmark.
Figure 7: Example visualizations of the sequential application of entity segmentor, 2D segmentor, and semantic labSAeMl2cPrhopeacgaktioenr in SDR Module.
Figure 8: Example visualizations for the ablation of Auto- of Manual Modification. Figure 9: Example visualizations for the ablation Refine Annotation Phase.
Table 8: Comparison of results from different components.
The outputs from the 2D segmentor and the semantic label checker in Phase I, are evaluated. The initial performance of the 2D segmentor is hindered by the exclusion of masks with uncertain labels, resulting in relatively low accuracy scores. Nevertheless, the SDR Module facilitates the assignment of suitable labels to previously unlabeled masks, which substantially enhances the $\mathcal { I } \& \mathcal { F }$ metric by $+ 4 2 . 2$ . An example of mask results at various stages is depicted in
Fig. 7. Upon comparison, it is evident that the 2D segmenter encounters difficulties in annotating novel objects that fall outside its distribution, and some pre-existing labels exhibit low accuracy, especially those located at a distance. The semantic label checker adeptly addresses these challenges by supplementing new labels and unifying existing labels from our category spaces, thereby enhancing overall accuracy. As demonstrated in Fig. 7, instances initially labeled ambiguously as "tree" and "vegetation" are ultimately unified as "tree." Additionally, the label "signboard, sign," which was overlooked by the 2D segmenter, is successfully added by the semantic label checker.
Effectiveness of Phase II. Tab. 8 presents a comparison of SAM2 outputs and Phase II’s autorefinement process against final annotations. SAM2 results in large blank areas for updated frames, leading to low $\mathcal { I } \& \mathcal { F }$ scores. Phase II’s process increases these scores by $\mathbf { + 4 0 . 1 }$ , refining existing masks and adding new ones via the MCR Module. Fig. 8 illustrates three cases. The first case involves a deer that exits the right boundary of the frame and reenters from the left boundary, a scenario caused by the panorama effect. In Phase II’s SDR Module, the left part of the deer is successfully segmented and assigned the same object ID as the right part, ensuring continuity. The second case illustrates a person who is obscured in the last frame but appears in the current frame. Here, the MCR segments the person and assigns a new label appropriately. The final case highlights a failure in SAM2’s tracking, caused by panoramic distortion, which introduces a domain gap. The MCR Module corrects the segmentation error and restarts tracking at this frame, effectively restoring consistency.
Effectiveness of Phase III. The Comparison between auto-refinement masks and manual-refined masks is shown in Fig. 9. Although auto-refinement masks from our $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ pipeline demonstrate high quality, we still manually revise these masks to further enhance performance. During the revision, we specifically address issues related to object boundaries, label hallucinations, and incorrect masks.
# 4.4 Discussion
Flexibility of $\mathbf { A } ^ { 3 } \mathbf { 3 6 0 V } .$ $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ ’s flexibility stems from its modular design, allowing users to select from various 2D segmentors for auto-annotation. For entity segmentation, options include SAM [30], CropFormer [15], and E-SAM [31], providing robust object delineation. For panoptic segmentation, models like Mask2Former [32, 33], OneFormer [16], and OMG-Seg [34] can be integrated for comprehensive scene understanding. $\mathrm { A } ^ { 3 } 3 6 0 \mathrm { V }$ also supports the flexible selection of LLMs for label checking, ensuring compatibility with different user needs. This pipeline is versatile, applicable to both 360 and 2D videos, making it suitable for diverse video annotation tasks and adaptable to various datasets and applications. More discussions are in the Appendix. | 360 video captures the complete surrounding scenes with the ultra-large field
of view of 360X180. This makes 360 scene understanding tasks, eg, segmentation
and tracking, crucial for appications, such as autonomous driving, robotics.
With the recent emergence of foundation models, the community is, however,
impeded by the lack of large-scale, labelled real-world datasets. This is
caused by the inherent spherical properties, eg, severe distortion in polar
regions, and content discontinuities, rendering the annotation costly yet
complex. This paper introduces Leader360V, the first large-scale, labeled
real-world 360 video datasets for instance segmentation and tracking. Our
datasets enjoy high scene diversity, ranging from indoor and urban settings to
natural and dynamic outdoor scenes. To automate annotation, we design an
automatic labeling pipeline, which subtly coordinates pre-trained 2D segmentors
and large language models to facilitate the labeling. The pipeline operates in
three novel stages. Specifically, in the Initial Annotation Phase, we introduce
a Semantic- and Distortion-aware Refinement module, which combines object mask
proposals from multiple 2D segmentors with LLM-verified semantic labels. These
are then converted into mask prompts to guide SAM2 in generating
distortion-aware masks for subsequent frames. In the Auto-Refine Annotation
Phase, missing or incomplete regions are corrected either by applying the SDR
again or resolving the discontinuities near the horizontal borders. The Manual
Revision Phase finally incorporates LLMs and human annotators to further refine
and validate the annotations. Extensive user studies and evaluations
demonstrate the effectiveness of our labeling pipeline. Meanwhile, experiments
confirm that Leader360V significantly enhances model performance for 360 video
segmentation and tracking, paving the way for more scalable 360 scene
understanding. | [
"cs.CV"
] |
# 1 Introduction
Time series foundation models (TSFMs) have emerged as a transformative direction within the time series forecasting (TSF) community [2, 42, 8]. By pretraining on extensive time series datasets, these models possess universal knowledge, enabling them to achieve impressive zero-shot performance on various forecasting tasks. Despite significant advancements in TSFM research, current studies predominantly focus on model pretraining and zero-shot evaluation, while paying limited attention to the critical challenge of effectively finetuning these universal models for specific downstream tasks. In contrast, finetuning pretrained models has become the standard pipeline for real-world applications in domains such as natural language processing (NLP) and computer vision (CV). Research in these fields has revealed key challenges in finetuning foundation models, including preserving pretrained knowledge [24], avoiding overfitting [15], and ensuring efficient adaptation [13, 47].
Existing finetuning strategies for TSFMs often rely on naive approaches, such as full finetuning or linear probing [2, 12, 11]. While these methods may offer performance gains, we argue that naive finetuning is suboptimal for TSFMs as it fails to account for the intrinsic multi-scale properties of both time series data and TSFMs. As a data modality generated from continuous real-world processes, time series are inherently entangled and can be decomposed across multiple scales [23, 18]. A time series can exhibit distinct temporal patterns at different sampling scales. For instance, as shown in
Figure 1: (a) Multi-scale property in time series foundation model (TSFM) finetuning. Finetuning TSFMs on the original scale may overlook potential temporal patterns in time series and underutilize their multi-scale forecasting capabilities learned during pretraining. (b) Causal graph for forecasting of TSFMs. Nodes denote the abstract data variables and directed edges denote the causality, i.e. cause $$ effect. Scale $S$ acts as a confounder, influencing both input context series $X$ and model’s activated knowledge $M$ (shown in red).
Figure 1 (a), energy consumption measured at the hour level shows micro-scopic local usage patterns, whereas daily records suppress these finer details, highlighting macro-scopic consumption trends instead. This multi-scale nature poses additional challenges, as naive finetuning tends to overfit the model to patterns at the original scale, overlooking the latent dynamics that prevail at coarser scales. From a modeling perspective, TSFMs pretrained on extensive, multi-scale datasets are inherently equipped with robust multi-scale forecasting capabilities. However, naive finetuning fails to harness this potential, as it restricts learning to the original scale. Consequently, it underutilizes the pretrained knowledge of TSFMs, capturing only partial temporal patterns. Such failure not only limits the generalizability of TSFMs across scales but also leads to suboptimal downstream performance.
To address the aforementioned challenge, we begin by analyzing the finetuning process of TSFMs through a causal lens. The relationship among key variables is shown in Figure 1(b). Specifically, the objective of finetuning is to adapt the model $P ( { Y | } X )$ to capture temporal patterns and better predict the horizon $Y$ given the context $X$ . However, the presence of scale $S$ as a confounder introduces spurious correlations between context $X$ and the knowledge $M$ activated within TSFM, causing the model to rely on correlations that lack causal grounding. Directly forecast with $P ( { Y | } X )$ would mistakenly associate non-causal but positively correlated context $X$ to horizon $Y$ . To overcome this, we propose using the interventional distribution $P ( { Y } | d o ( X ) )$ , which isolates the true causal effect of $X$ on $Y$ by blocking the influence of the confounder $S$ . We will elaborate on how this is achieved through backdoor adjustment [27] in Section 3.
This causal perspective highlights the need for explicitly modeling multiple scales during TSFM finetuning. However, integrating multi-scale modeling in this context remains underexplored and presents several non-trivial challenges—despite its success in standard time series forecasting modeling [34, 41, 40]. First, most TSFMs tokenize time series through patching [25], resulting in tokens at different scales exhibiting varying resolutions and temporal dynamics. This discrepancy complicates the finetuning of the unified input projection and attention weights. Second, applying attention across multi-scale tokens can introduce spurious dependencies due to misaligned time indices, making it difficult to capture true temporal relationships. Thus, the attention mechanism must account for or bypass index-related biases. Finally, since the model produces separate predictions at each scale, effectively aggregating these multi-scale outputs is essential for accurate and robust forecasting.
To close the gap, we propose a novel encoder-based TSFM finetuning framework using multi-scale modeling, namely MSFT. Our contributions are summarized as follows:
1. Building on causal insights, we identify the limitations of naive finetuning for TSFMs and propose a multi-scale modeling approach for TSFM finetuning. To the best of our knowledge, this is the first work to introduce multi-scale modeling into TSFMs.
2. We propose MSFT, a simple yet effective finetuning framework for encoder-based TSFMs. MSFT begins by downsampling time series into multiple scales and independently tokenizing each scale at its own resolution. Scale-specific modules are applied to the input projection and attention layers to activate scale-specific knowledge. Decoupled dependency modeling is then performed on the concatenated multi-scale sequence, enabling the model to capture both within-scale (via
in-scale attention) and cross-scale (via cross-scale aggregator) dependencies. Finally, a learnable weighting strategy is employed to aggregate the multi-scale prediction results.
3. Our extensive evaluation on various datasets for Long Sequence Forecasting [43] and Probabilistic Forecasting [42] demonstrates that MSFT not only significantly improves the fintuning results of TSFMs but also surpasses other state-of-the-art models trained from scratch.
# 2 Preliminaries
Problem Formulation. We first define the TSF task, in which the model predicts a horizon window given a context window. Let $C$ denote the context length and $H$ the horizon length. Context window $\mathbf { \breve { X } } \in \mathbb { R } ^ { C \times D }$ and horizon window $\mathbf { Y } \in \mathbb { R } ^ { H \times D }$ are consecutively extracted from the same time series $\mathbf { x } _ { 1 : T } = ( \mathbf { x } _ { 1 } , \mathbf { x } _ { 2 } , \ldots , \mathbf { x } _ { T } )$ , where $D$ is the feature dimension at each time step. The sample at time step $t$ is denoted as $( \mathbf { X } _ { t } , \mathbf { Y } _ { t } )$ , where $\mathbf { X } _ { t } = \left( \mathbf { x } _ { t - C } , \ldots , \mathbf { x } _ { t - 1 } \right)$ and $\mathbf Y _ { t } = ( \mathbf x _ { t } , \dots , \mathbf x _ { t + H - 1 } )$ . Given a model parameterized by $\theta$ and a training dataset ${ \mathcal { D } } ^ { \mathrm { t r a i n } } = \{ ( { \bf X } _ { t } , { \bf Y } _ { t } ) \} _ { t = 1 } ^ { T _ { o } }$ , the objective is to learn the model parameter $\theta ^ { * }$ to achieve minimum error oDn the test{ing set $\mathcal { D } ^ { \mathrm { t e s t } } \bar { = } \{ ( { \bf X } _ { t } , { \bf Y } _ { t } ) \} _ { t = T _ { o } + 1 } ^ { T }$ .
Multi-Scale Generation. In multi-scale modeling, the standard approach for generating multi-scale sequences is based on average pooling [34, 41]. Given a training sample $( \mathbf { X } , \mathbf { Y } )$ , both context and horizon windows are downsampled into multiple temporal scales using non-overlapping average pooling. Specifically, downsampling factor is commonly set to 2, resulting in a set of scales defined by $1 , 2 , \ldots , 2 ^ { K }$ , where $K$ is the number of downsampled scales. Let $s$ denote the set of multi-scale time series as $\boldsymbol { S } = \{ \mathbf { S } _ { 0 } , \ldots , \mathbf { S } _ { K } \}$ , where $\mathbf { S } _ { i } = ( \mathbf { X } ^ { i } , \dot { \mathbf { Y } } ^ { i } )$ corresponds to the $i$ -th scale series, formed by concatenating the downsampled context $\mathbf { X } ^ { i } \in \mathbb { R } ^ { C _ { i } \times D }$ and downsampled horizon $\mathbf { Y } ^ { i } \in \mathbb { R } ^ { H _ { i } \times D }$ . Here, $\begin{array} { r } { C _ { i } = \lceil \frac { C } { 2 ^ { i } } \rceil } \end{array}$ and $\begin{array} { r } { H _ { i } = \lceil \frac { H } { 2 ^ { i } } \rceil } \end{array}$ . Note that $\mathbf { S } _ { 0 }$ represents the input series at the original scale.
Encoder-based TSFM. We outline the architectural framework of existing encoder-based TSFMs [42, 12, 11] from a high-level perspective. These models adopt an encoder-only Transformer [38] architecture and segment univariate time series into a sequence of patch tokens [25]. While multivariate extensions are supported in some models [42, 11], we focus on the univariate case for illustration $D = 1 { \ddot { } }$ ), without loss of generality. The pretraining is conducted by masked reconstruction [9]. Given a time series $( \mathbf { X } , \mathbf { Y } )$ , the series is segmented into non-overlapping patch tokens of size $P$ , resulting in a sequence of patches $\pmb { x } \in \mathbb { R } ^ { N \times P }$ , where $\begin{array} { r } { N = \lceil \frac { C } { P } \rceil + \lceil \frac { H } { P } \rceil } \end{array}$ . The goal is to forecast the predictive horizon by $\hat { \mathbf { Y } } = f _ { \theta } ( \pmb { x } )$ , where $f _ { \theta }$ is a transformer with the block number $L$ and model dimension $d$ . Specifically, Equation 1 represents the procedure of calculating $\hat { \mathbf { Y } } = f _ { \theta } ( \pmb { x } )$ :
$$
\boldsymbol { h } ^ { 0 } = \mathrm { I n P r o j e c t } ( \boldsymbol { x } ) ; \quad \boldsymbol { h } ^ { l } = \mathrm { A t t n B l o c k } ( \boldsymbol { h } ^ { l - 1 } ) , \boldsymbol { l } = 1 , . . . , L ; \quad \hat { \mathbf { Y } } = \mathrm { O u t P r o j e c t } ( \boldsymbol { h } ^ { L } )
$$
Let $\pmb { h } ^ { l } \in \mathbb { R } ^ { N \times d }$ represent the token embeddings produced by layer $l$ . The input projection InProject embeds patch tokens into input embeddings $\boldsymbol { h } ^ { \mathrm { \bar { 0 } } }$ . Each AttnBlock consists of a multi-head selfattention layer, followed by a feed-forward network (FFN) and normalization layers. The output projection OutProject maps the output embeddings $h ^ { L }$ to the prediction $\hat { \mathbf Y }$ , either directly [12, 11] or indirectly by first producing distributional parameters from which $\hat { \mathbf Y }$ is sampled [42]. We summarize the architectural features and training losses of each model in Appendix B.2.
# 3 Multi-Scale Finetuning of TSFM
# 3.1 Multi-Scale Effect on TSFM: A Causal View
As we discussed in Section 1, both time series data and TSFMs exhibit multi-scale properties. We take scale into account during TSFM finetuning and construct a Structural Causal Model (SCM) [28] as illustrated Figure 1 (b). The nodes denote the abstract data variables, and the directed edges denote the causality, i.e., cause $$ effect. Denoting input context window data as $X$ , scale as $S$ , and prediction horizon window data as $Y$ , we discuss the rationale for each link as follows:
$\ b X s$ . Given an observed recording of the context period, the input context series $X$ is directly influenced by the scale $S$ . Although corresponding to the same temporal range, $X$ exhibits different temporal patterns and resolutions at different sampling rates.
$S \to M \gets X$ . We denote $M$ as the activated knowledge within the pretrained TSFM’s knowledge space, conditioned on input context. $S \to M$ indicates that the scale of data activates the corresponding scale-specific knowledge in the TSFM. Meanwhile, $X M$ reflects that the TSFM activates context-specific knowledge with the input data $X$ .
Figure 2: (a): The intervened Structural Causal Models (SCM) and overall MultiScale FineTuning (MSFT) framework, which directly model $P ( { Y } | d o ( X ) )$ ; (b): Challenges in directly applying the framework. Left: Downsampling and patching process for constructing multi-scale sequences. Patch tokens at different scales have varying resolution and schematics. Right: Directly applying selfattention over multi-scale embeddings leads to biased cross-scale attention due to misaligned time id.
$X \to Y M$ . This link represents that the model utilizes the activated knowledge $M$ to generate predictions $Y$ based on the lookback context data $X$ .
It is evident that scale $S$ is a confounder that induces spurious correlations between input context series (via $S X$ ) and activated knowledge of TSFM (via $S M$ ). The former captures the multi-scale properties of time series, while the latter corresponds to the multi-scale capabilities of TSFM. Scale $S$ ultimately affects the forecasting of the prediction horizon via the backdoor path $X S M Y$ . Naive finetuned forecaster for $P ( { \cal Y } | { \cal \bar { X } } )$ overlooks the impact of this backdoor path, learning forecasting only at the original scale. This oversight would mistakenly associate non-causal but positively correlated input context to forecast horizon in the original scale, resulting in problematic forecasting.
# 3.2 Causal Intervention via Backdoor Adjustment
Given this, we propose using $P ( { Y } | d o ( { X } ) )$ as the new finetuned forecaster, which eliminates the confounding effect of $S$ and captures the true causal relationship from $X$ to $Y$ . As the “physical" intervention is impossible, we apply the backdoor adjustment [27] to “virtually" realize $P ( { \bar { Y } } { | } { \dot { d } } o ( X ) )$ by (1) blocking the link $S \to X$ and (2) S. As illustrated in Figure 2 (a, left), we have:
$$
P ( Y | d o ( X ) ) = \sum _ { s } P ( Y | X , S = s , M = g ( X , s ) ) P ( s )
$$
where $g$ is a function to activate scale-specific knowledge of input. Grounded in this causal formulation, we design the MultiScale FineTuning (MSFT) framework to instantiate the intervention-based forecasting process shown in Equation 2. As shown in the right panel of Figure 2(a), the framework stratifies the confounder $S$ by down-sampling the original time series into multiple scales. Each scale captures distinct statistical properties of the series and corresponds to a specific value $s \in S$ .
Specifically, multi-scale series $\pmb { S } = \{ \mathbf { S } _ { 0 } , \ldots , \mathbf { S } _ { K } \}$ is generated through the process described in Section 2. Each scale series $\mathbf { S } _ { i }$ is segmented into scale-specific patch tokens $\mathbf { \Psi } _ { { \pmb x } _ { i } } ^ { \star } \in \mathbb { R } ^ { N _ { i } \times P }$ , where $N _ { i }$ is the number of patches for scale $i$ . The scale-specific input embeddings are computed by $\pmb { h } _ { i } ^ { 0 } = \mathrm { I n P r o j e c t } ( \pmb { x } _ { i } )$ . Following the design of masked encoder [9], the embeddings falling within the forecast horizon are replaced with the learnable [mask] embedding. The input embeddings from all scales are concatenated into a multi-scale sequence, ${ \pmb h } _ { 0 } = \mathrm { C o n c a t } ( { \pmb h } _ { 0 } ^ { 0 } , { \pmb h } _ { 1 } ^ { 0 } , \cdot \cdot . . , { \pmb h } _ { K } ^ { 0 } )$ , which is then passed to the Transformer for processing.
# 3.3 Challenges
Although the framework of Figure 2(a) can be directly applied without initiation of $M = g ( X , s )$ , we argue that it leaves following challenges unaddressed. First, the token schematics and intra-scale dependencies vary significantly across scales. As shown in the left part of Figure 2(b), patch tokens at different scales exhibit distinct resolution and temporal schematics. When directly finetuning the
Input/Output Projections Self-attention Blocks Decoupled Dependency Modeling 1 □ ② Output Embeddings ×L @Coss-seale Aggregator Linear Lincar Lincar Add & Norm 4 4 Feed Forward Temporal Alignment 1 华 Input Projection CoarsetoFine Fine toCoarse I for Repeat (↑)/Average (↓) ↑ Add & Nom epeat Average □ soM MMMW 1
① Scale0 Scale1 wout 电 中 1
③ Forecasts Repea Average Cross-scale 中 ↑ Up-sample&Mixing ↑ ↑ ① 2.1 自 1 S2 wy > 4 ① In-scalen S0 山 Output Projection 二 回 ? V ①In-scale Attention In-scale Mask Min □ Trainable Pretrained Parameters WLORA WORA WyLORA Min ? Q 图 Frozen Pretrained Parameters 1 Decoupled Dependency Modeling 。 O Multiplication Scale-specific New Trainable Parameters InputEmbeddings Softmax
input projection layer over all scales, each scale inherently tends to learn its own specific intra-token patterns, which can lead to interference across scales and suboptimal performance. Moreover, the resolution discrepancy induces scale-inequivalent inter-token correlation, requiring the attention mechanism to capture scale-specific dynamics rather than assuming uniform interaction patterns.
Second, standard self-attention introduces misleading cross-scale dependencies due to mismatched time (position) indices. Since time indices are independently generated within each scale, tokens with the same index at different scales (shaded in gray in Figure 2(b)) correspond to different temporal ranges. When self-attention is directly applied over over the concatenated multi-scale embedding sequence, attention scores across scales become biased: tokens attend more to others with the same time index, regardless of actual temporal relevance (see the right part of Figure 2(b)). This leads attention to capture spurious temporal correlations and attend to semantically irrelevant tokens.
Finally, the model generates distinct predictions at each scale, and effectively mixing multi-scale predictions remains a non-trivial challenge. Although cross-scale information is partially fused through attention, prior studies [41] have shown that explicitly combining multi-scale predictions improves forecasting performance. However, naively averaging predictions across scales fails to account for their semantic and temporal heterogeneity, potentially leading to suboptimal results.
# 4 Methodology
To address the aforementioned challenges, we propose MSFT to realize the high-level framework in Figure 2(a) as an effective multiscale finetuning strategy. Specifically, to activate scale-specific knowledge, we freeze the pretrained parameters and introduce scale-specific, parameter-efficient modules into the $\textcircled{1}$ input projection and $\textcircled{2}$ attention layers. To eliminate the cross-scale attention bias and correctly capture temporal correlations, we propose a decoupled token dependency modeling mechanism: $\textcircled{1}$ in-scale self-attention captures within-scale dependencies, while $\textcircled{1}$ cross-scale aggregators explicitly fuse information across scales, ensuring correct temporal alignment between tokens. Finally, we apply multi-scale mixing to the $\textcircled{3}$ output projection, combining scale-specific predictions with learned weights. Figure 3 illustrates our MSFT method.
Scale-specific Knowledge Activation. To address the problem of scale-variant token resolution, instead of directly finetuning the unified input projection layer across all scales, we freeze the pretrained input projection and introduce a scale-specific adapter for each scale, implemented as a linear layer $\operatorname { L i n e a r } _ { i }$ . Now, the input embeddings of scale $i$ is computed as $\pmb { h } _ { i } ^ { 0 } = \mathrm { L i n e a r } _ { i } ( \mathrm { I n P r o j e c t } ( \pmb { x } _ { i } ) )$ .
Conditioned on the pretrained embeddings, these adapters independently learn specific representations at variant resolutions, effectively avoiding interference across scales.
Similarly, to enhance the attention mechanism’s ability to capture scale-variant dynamics, we incorporate independent LoRA [13] modules for each scale. Specifically, we freeze the pretrained attention weight matrices, and the FFN block, and introduce a set of LoRA modules for each scale. Since both input embeddings and attention weights reflect scale-activated TSFM knowledge, this design serves as the implementation of $g$ in Equation 2, enabling the activation of scale-specific knowledge $M$ .
Decoupled Token Dependency Modeling. To ensure attention blocks capture the multi-scale embedding sequence’s correct dependencies, we decouple the token dependency modeling into two parts: within-scale and across-scale dependencies. Specifically, for tokens within the same scale, if they share the same resolution—dependencies, they can be directly learned via self-attention. Thus, we only apply an in-scale attention mask ${ \bf { M } } _ { \mathrm { { i n } } }$ to ensure that each token attends only to tokens from the same scale.
To aggregate the knowledge between tokens from different scales, we add a cross-scale aggregator after the attention operation. The aggregator consists of two branches, namely coarse-to-fine andfine-to-coarse, where temporal-aligned token-level information fusion is iteratively conducted between consecutive scales in two directions. First, since tokens at different scales correspond to varying resolutions, it is necessary to map embeddings to a shared space before fusion. To this end, following [29, 30], we adopt a linear mapping $\phi _ { i , j } ^ { l ^ { - } }$ to project token embeddings from scale $i$ to the embedding space of scale $j$ in each layer $l$ , where the mapped embeddings are defined as $\tilde { { \pmb { h } } } _ { i , j } ^ { l } = \phi _ { i , j } ^ { l } ( { \pmb { h } } _ { i } ^ { l } ) = \tilde { \pmb { w } } _ { i , j } ^ { l } { \pmb { h } } _ { i } ^ { l } + { \pmb { b } } _ { i , j } ^ { l }$ .
Based on this mapping, token embeddings from one scale are projected to the adjacent scale and then fused according to their temporal alignment. We define the cross-scale token-wise fusion for the coarse-to-fine (C2F) and fine-to-coarse (F2C) branches as follows:
$$
\begin{array} { r l r } { \mathrm { C 2 F } \colon } & { h _ { i - 1 } ^ { l } = h _ { i - 1 } ^ { l } + \mathrm { R e p e a t } ( \tilde { h } _ { i , i - 1 } ^ { l } ) , } & { \mathrm { f o r ~ } i \in \{ K , . . . , 1 \} } \\ { \mathrm { F 2 C } \colon } & { h _ { i + 1 } ^ { l } = h _ { i + 1 } ^ { l } + \mathrm { A v g P o o l } ( \tilde { h } _ { i , i + 1 } ^ { l } ) , } & { \mathrm { f o r ~ } i \in \{ 0 , . . . , K - 1 \} } \end{array}
$$
where Repeat $( \cdot )$ duplicates each coarse-scale token in $\tilde { h } _ { i , i - 1 } ^ { l }$ along the sequence dimension to match the finer-scale resolution, based on their temporal correspondence. Conversely, $\operatorname { A v g P o o l } ( \cdot )$ aggregates groups of fine-scale tokens in $\tilde { h } _ { i , i + 1 } ^ { l }$ by averaging them according to the downsampling factor, thereby aligning them to the coarser-scale resolution. Finally, the outputs from the two branches are combined by averaging their updated token embeddings. This decoupled two-stage design enables the model to capture temporal dependencies within each scale while effectively fusing complementary information across scales, leading to improved multi-scale temporal understanding.
Multi-scale Mixing. In the output projection, each scale independently predicts a forecasting horizon $\hat { \mathbf { Y } } _ { i }$ based on its scale-specific tokens $ { \boldsymbol { h } } _ { i } ^ { L }$ from the final layer embedding $h ^ { L }$ . The training objective is formulated as a weighted summation of the scale-wise forecasting losses $\mathcal { L } _ { \mathrm { p r e d } , i }$ (e.g., MSE or NLL). Since different scales may exhibit varying forecasting abilities and contribute differently to the final performance, we assign a learnable weight $w _ { i }$ to each scale, corresponding to the prior $P ( s )$ in Equation 2. The weights $w _ { i }$ are obtained by applying a softmax function over a set of learnable parameters during training: $\begin{array} { r } { \mathcal { L } _ { \mathrm { p r e d } } = \sum _ { i = 0 } ^ { K + 1 } w _ { i } \mathcal { L } _ { \mathrm { p r e d } , i } } \end{array}$ . During inference, we upsample the forecasting results from each new scale to the original temporal resolution. The final prediction $\hat { \mathbf Y }$ is computed as the weighted sum of the upsampled forecasts, using the same learned weights $w _ { i }$ . This weighted mixing strategy can be seen as ensembling [26], which helps mitigate overfitting on the original scale. Additional implementation details for different TSFM architectures are provided in Appendix B.2.
# 5 Related Work
# 5.1 Time Series Foundation Model
We focus our discussion solely on transformer-based TSFMs for TSF. Such TSFMs can be broadly categorized according to the backbone architecture. Encoder-only models like Moirai [42], Moment [12] and UniTS [11] use masked reconstruction for pretraining. Decoder-only models, such as TimesFM [8], Lag-Llama [32], Timer [20], and Time-MoE [35] are pretrained by next-token prediction in an auto-regressive manner. Chronos [2], an encoder-decoder model, quantizes scaled time series values into discrete tokens and adopts the training objective originally developed for NLP. Despite the advancement of the field, existing TSFM research predominantly emphasizes pretraining and zero-shot performance. Although some studies [2, 12, 8, 20] mention naive finetuning methods, these attempts are limited compared to the efforts devoted to pretraining and zero-shot evaluation. We include a more detailed discussion in Appendix A.
# 5.2 Multi-scale modeling in time series forecasting
Multi-scale modeling has garnered growing attention in the TSF community. Existing works mostly involves down-sampling, where coarser scales are derived from the original series using pooling or convolution. Models are then designed to capture multi-scale characteristics from these different views. Pyraformer [18] constructs a pyramidal graph of different scales and employs a pyramid attention mechanism to extract multi-resolution representations. MICN [39] processes different scales separately through multiple branches with distinct convolution kernels and subsequently merges the outputs. Inspired by hierarchical forecasting, Scaleformer [34] and GPHT [21] iteratively refine the outputs from coarser to finer scales. TimeMixer [41] and TimeMixer+ $^ { - + }$ [40] decompose each scale into seasonal and trend components, then integrate these components across multiple scales.
# 6 Experiments
We evaluate our proposed finetuning method, MSFT, on two prevalent TSF tasks: long sequence forecasting (LSF) and probabilistic forecasting (PF). For LSF, we experiment with three TSFMs: MOIRAI, MOMENT and UNITS. For PF, we focus solely on MOIRAI, as it is the only model capable of probabilistic forecasting. Our evaluation includes comparisons with both deep learning-based methods and other finetuning approaches applied to TSFMs. Detailed model configurations and experimental setups are provided in the Appendix B.
Table 1: Long sequence forecasting results, which are averaged across prediction lengths $\{ 9 6 , 1 9 2 , 3 3 6 , \hat { 7 } 2 0 \}$ . Each TSFM shows its zero-shot performance (highlighted in gray ) and results with different finetuning methods. The best finetuning results for each TSFM are highlighted in bold, while the global best results across all models are highlighted in red.
Table 2: Probabilistic forecasting results. The best finetuning results for each TSFM are highlighted in bold, while the global best results are highlighted in red. See Table 11 for full results.
# 6.1 Long Sequence Forecasting
Setup. We conduct our experiments on a subset of the widely-used long sequence forecasting benchmark [44]. This subset is identical to the one used in Moirai [42] for LSF experiments and is not included in the pretraining data of TSFMs. Each dataset involves predictions at four different lengths, with the model is finetuned separately for each prediction length. We evaluate the performance using Mean Squared Error (MSE) and Mean Absolute Error (MAE).
Results. As shown in Table 1, MSFT consistently enhances the forecasting performance of TSFMs. Across all models, MSFT outperforms other finetuning methods that use only the original scale, consistently delivering the best finetuned results. For $\mathbf { M O I R A I } _ { \mathrm { S m a l l } }$ and $\mathbf { M O I R A I _ { B a s e } }$ , MSFT further improves their forecasting accuracy over their solid zero-shot performance, achieving competitive results across all datasets, with 10 out of 12 metrics showing the best performance. Notably, MSFT substantially improves MOIRAI’s finetuned performance on minutely-level datasets. Compared to full finetuning, it achieves $6 . 8 \%$ lower MSE in ETTm1, $6 . 3 \%$ lower MSE in ETTm2 and $6 . 7 \%$ lower MSE in Weather. In contrast, the improvement brought by MSFT on hourly datasets are relatively smaller compared to minute-level datasets. This discrepancy can be explained by the richer multi-scale patterns present in minute-level data, which MSFT can effectively leverage. For MOMENT, the improvements brought by MSFT are generally less pronounced compared to MOIRAI and UNITS. This can be attributed its pretraining with fixed context lengths, which limits their ability to extract information from new scales of varying lengths. Despite these differences, MSFT exhibit superior finetuned performance across diverse models and datasets, demonstrating its generalizability.
# 6.2 Probabilistic Forecasting
Setup. We evaluate on six datasets spanning various domains, using the rolling evaluation setup described in Moirai [42]. The test set comprises the final time steps, segmented into multiple non-overlapping evaluation windows. The length of the prediction window and the number of rolling evaluations are tailored for each dataset based on its frequency (see Table 5 for details). For performance evaluation, we report the Continuous Ranked Probability Score (CRPS) and Mean Scaled Interval Score (MSIS) metrics.
Results. Experimental results in Table 2 demonstrate that MSFT consistently delivers superior performance across all datasets. Building upon the strong zero-shot performance, MOIRAIBase achieves the best results for nearly all the datasets. MSFT provides consistent improvements over other finetuning methods, achieving an additional $2 4 . 4 \%$ CPRS relative reduction in Solar and 18.3 $\%$ CPRS relative reduction in Istanbul Traffic compared to full finetuning. A similar trend is also observed in the small model, demonstrating that our multi-scale modeling method can effectively enhance the fine-tuned performance of probabilistic forecasting.
Table 3: Ablation study on three LSF datasets using MOIRAISmall.
# 6.3 Model Analysis
To fully understand MSFT, we conduct model analysis using the MOIRAISmall model on three LSF datasets, selected for its strong zero-shot performance and relatively low training cost. Due to page limits, we present the analysis of down-sampling approaches, down-sampling factors, detailed attention analysis, and visualizations in the Appendix C. We also discuss the potential application of MSFT to decoder-based structures and its limitation in Appendix D.
Ablation Study. Ablations $\textcircled{1}$ to $\textcircled{4}$ examine the effectiveness of scale-specific knowledge activation. For both input projection and attention, either freezing $\textcircled{1} , \textcircled{3} )$ or finetuning shared weights $\textcircled{2} , \textcircled{4} )$ yields inferior performance to using scale-specific modules, with freezing causing larger performance drops. Among the two, attention has a greater impact than input projection, highlighting its critical role in capturing temporal dependencies.
Ablations $\textcircled{5}$ to $\textcircled{8}$ evaluate the effect of each component in decoupled dependency modeling. In $\textcircled{5}$ , we remove cross-scale aggregators and only retain in-scale attention masking. Without cross-scale modeling, the performance suffers a significant decline. In $\textcircled{6}$ and $\textcircled{7}$ , we ablate the coarse-to-fine and fine-to-coarse branches, respectively. Both cases lead to performance drops, with the coarse-to-fine branch showing a stronger impact. In $\textcircled{8}$ , we completely remove decoupled dependency modeling, capturing dependency directly via attention on the concatenated multi-scale sequence. This approach leads to misaligned cross-scale interactions and further degrades performance.
Finally, we assess the impact of multi-scale mixing. In $\textcircled{9}$ , we disable prediction mixing, only using the original scale for prediction. In $\textcircled{10}$ , we aggregate the multi-scale predicitions by averaging. Both approaches result in lower performance compared to our full model.
Effect of Number of New Scales. As shown in Figure 4, increasing the number of new scales $K$ initially reduces errors. However, beyond a certain point, performance plateaus or declines, likely due to overly coarse predictions with few tokens disrupting multi-scale modeling. Our results indicate that setting $K$ to 2 or 3 achieves the best balance.
Attention Analysis. Figure 5 shows the attention score heatmaps of three attention strategies. In (a), direct attention (Ablation $\textcircled{8}$ ) exhibits spurious temporal dependencies, with attention scores biased toward tokens sharing the same time indices. In (b), we align time indices during attention, ensuring that cross-scale tokens corresponding to the same temporal region share identical time indices. While this approach produces "correct" attention patterns, it is limited to RoPE and performs worse than our method (see Appendix C for details). In (c), our in-scale masking strategy eliminates misleading cross-scale attention, focusing on accurate within-scale dependency modeling.
Figure 4: LSF accuracy w.r.t. number of scales
Figure 5: Attention heatmaps of various methods | Time series foundation models (TSFMs) demonstrate impressive zero-shot
performance for time series forecasting. However, an important yet
underexplored challenge is how to effectively finetune TSFMs on specific
downstream tasks. While naive finetuning can yield performance gains, we argue
that it falls short of fully leveraging TSFMs' capabilities, often resulting in
overfitting and suboptimal performance. Given the diverse temporal patterns
across sampling scales and the inherent multi-scale forecasting capabilities of
TSFMs, we adopt a causal perspective to analyze finetuning process, through
which we highlight the critical importance of explicitly modeling multiple
scales and reveal the shortcomings of naive approaches. Focusing on
\textit{encoder-based} TSFMs, we propose \textbf{M}ulti\textbf{\textsc{s}}cale
\textbf{\textsc{f}}ine\textbf{\textsc{t}}uning (\textbf{MSFT}), a simple yet
general framework that explicitly integrates multi-scale modeling into the
finetuning process. Experimental results on three different backbones (\moirai,
\moment\ and \units) demonstrate that TSFMs finetuned with MSFT not only
outperform naive and typical parameter efficient finetuning methods but also
surpass state-of-the-art deep learning methods. | [
"cs.LG"
] |
# 1 Introduction
Autonomous driving systems typically adopt a modular paradigm, decomposing the driving task into different sub-modules, such as perception [1–3], prediction [4–6], and planning [7–9]. While this design enables structured development, it may cause error accumulation and a lack of joint optimization across modules, leading to suboptimal performance [10, 11]. End-to-end autonomous driving has gained prominence with a unified model architecture that maps raw sensor inputs directly to final driving actions. These models are trained on human driving data, enhancing scalability and human-like behavior. Vision-based approaches have garnered significant interest due to their affordability and ease of deployment [12–15].
However, conventional end-to-end methods [16–19] primarily focus on imitating expert trajectories, lacking essential world knowledge for understanding and reasoning about surrounding environments, particularly in long-tail or challenging scenarios. Recent advances in Vision-Language Models (VLMs) [20–22] have gained significant interest by introducing models capable of leveraging extensive world knowledge and powerful reasoning. These models have shown strong potential in
Camera streams V-Vision Space Planning A-Action Space Dual Thinking Mode Adaptation FTionpa-lKTrTaojkeectnory iSnloCowmTplheixnkSicnegnarios iFnaSsitmTplheinSkciengarios Not in Top-K 0.2 p=0.2j Change Left Go Straight Vision Action p=0.3 p=0.1 Encoder p=0.4 p=0.2 Codebook t = 0 p=0.1 t = 3 Vision Token t = t = 2 Instruction L-Language Space Reasoning Token Action Token
Turn Left | Go Straight … 3
Ego states 1. Scene Description Chain of Thought
Velocity | History Action … $\Bumpeq$ TokTenxitzer Auto Vision - eFreognot :veAhcioclnesitsruacptipornoawcohriknegraiscionntshterucetinotnerzloane ,dhuorilndigndgay lSigLhOtW…
System Prompt Language - Lpeafdt:d lTeh,ealnedftal acnoenistrculcetairo,nwviteh iaclfe iws vbelohickliensgathearidg.h…t lane … Back: Light traAic is present at a distance behind the ego vehicle … Action Model
Task & Thinking Guidance Text Token 2. Critical Object Description Construction Worker (center front): Holding a SLOW sign and gesturing, likely managing traAic flow around a construction zone
Supervised Fine-tuning SFT Reinforcement Fine-tuning RFT - Forklift/Loader (front right): Stationary and blocking the right lane… 🔥Vision Encoder 三 TsGlhoOew_elSygTo(RvAeIlGohicHciltTye, b≈wu0at.st4hp4reemdv/irsoe,ulcsotlwyp asatchocipesplbelrdoaatciknoednd)i.sbTnyhobewodtrbhievtighniengncionngmsttmoruamcntodivoiesn GT Reasoning Reward Vision Encoder
Trajectory 🔥 Data Model GRPO vwiaorbkle,r waintdh tehneofuogrkhlisftp.aMce aton wsahfilel,y tmhe rlgefet. lane appears clear and LLM LLM LoRA ? 4. Best Driving Action: change lane to the left with an acceleration.
improving adaptability and scalability across diverse driving scenarios [23–29]. Building upon VLMs, Vision-Language-Action (VLA) models extend this capability to action generation, enabling embodied agents, such as robots [30–32] and autonomous vehicles [33, 34], to produce feasible physical actions based on visual observations and language instructions.
Despite recent progress, existing VLA models face two critical limitations in autonomous driving, as illustrated in Fig. 2. 1) Physically-infeasible or complex structure for action generation. Some models generate textual actions or waypoints directly using VLMs [35–37], but these outputs can be physically infeasible and suffer from mode collapse. To address this, recent approaches introduce intermediate meta-actions [38–40] or latent action tokens [41–43], which are then processed by downstream planners or decoders to produce physically feasible trajectories. However, the intermediate representations either break the end-to-end optimization paradigm or increase model complexity and training overhead. 2) Inflexible and inefficient reasoning across diverse scenarios. Most existing models [44, 45] employ a fixed reasoning strategy, lacking the ability to adaptively switch between direct action outputs for straightforward scenarios and chain-of-thought (CoT) reasoning for complex ones. Although DriveVLM [46] introduces a dual-process paradigm, it relies on separate modules (i.e., a VLM for slow reasoning and a conventional end-to-end model for fast responses), which results in a complicated architecture, increased training overhead, and limited scalability [47].
To overcome these limitations, we propose AutoVLA, an end-to-end autonomous driving framework that directly integrates physical action tokens into a pretrained VLM backbone, enabling direct learning of an autoregressive planning policy, as illustrated in Fig. 1. Our unified architecture seamlessly integrates reasoning and action generation, allowing adaptive switching between direct trajectory generation and CoT reasoning. In supervised fine-tuning (SFT), we leverage both trajectoryonly data and CoT reasoning data to equip the model with dual-process capabilities (fast and slow thinking). Furthermore, we propose reinforcement fine-tuning (RFT) [48], utilizing Group Relative Policy Optimization (GRPO) [49] with verifiable planning reward functions. This enables adaptive reasoning that balances planning accuracy and efficiency. The RFT method not only improves planning performance but also runtime efficiency by minimizing unnecessary reasoning.
We extensively evaluate AutoVLA using real-world datasets, including nuPlan [50, 51], Waymo [52], nuScenes [53], and simulation datasets such as CARLA [54, 55]. Experimental results demonstrate that AutoVLA achieves superior performance across various end-to-end autonomous driving benchmarks under both open-loop and closed-loop tests. Empirical results validate that our RFT approach
(a) Dual VLM End-to-end Model Vision Space Language Space Action Space (c) VLM as End-to-end Model Conventional End-to-end Model $$ Planning iRneSdiumnpdlaentScRenasroionsing ITnrfaejaescitbolreies Text Planning IIImaaggeess Dual System IIImaaggeess Vision Language Model CReoaTsoning → Vision Language Model Explanation
(b) Hybrid VLM End-to-end Model (d) VLA as End-to-end Model Conventional End-to-end Model Planning Action Tokenization Planning IIImaaggeess High-level Decision → Vision Language Model SHiyesrtaermchical IIImaaggeess Vision Language Model CReoaTsoning
markedly improves planning performance, enables adaptive fast and slow thinking capabilities, and reduces runtime by minimizing redundant reasoning. The main contributions of this paper are summarized as follows:
1. We introduce AutoVLA, an end-to-end autonomous driving framework leveraging a pretrained VLM backbone integrated with physical action tokens, enabling direct policy learning and semantic reasoning from raw visual observations and language instructions. 2. We propose an RL-based post-training method using GRPO to enable adaptive reasoning and further enhance the model’s performance on end-to-end driving tasks. 3. We demonstrate that AutoVLA achieves superior performance across multiple autonomous driving benchmarks, including both open-loop and closed-loop testing.
# 2 Related Work
End-to-end Autonomous Driving. End-to-end autonomous driving approaches have made significant advances in recent years [10, 11, 56–64]. Methods such as UniAD [65] and VAD [66] explicitly integrate multiple driving tasks from perception to planning in a unified Transformer architecture, thereby enhancing planning performance. ParaDrive [67] discusses the necessary components within end-to-end driving architectures. Additionally, GenAD [68] and DiffusionDrive [69] adopt generative models to maintain trajectory continuity and produce multi-modal driving trajectories. However, integrating world knowledge into end-to-end driving systems remains challenging due to bottlenecks in semantic reasoning [34] and limited adaptability in complex environments [70].
VLA and VLM for Autonomous Driving. The gap between semantic reasoning and physical actions remains a critical challenge for VLA and VLM in end-to-end autonomous driving. Current research broadly follows three directions. The first directly formulates driving as a language-centric problem, utilizing VLMs for scenario understanding through caption generation [71–73] or question answering [74, 75]. The second direction leverages VLA or VLM to produce high-level meta-actions or driving decisions [17, 38–40], which are used to either supervise [12, 76–78] or guide [39, 79] traditional planners or end-to-end models. Although these approaches facilitate integration, they prevent full end-to-end optimization. Thus, a third direction directly integrates VLMs with action generation into VLA models, enabling the direct prediction of latent action tokens [34–36, 43] or final driving trajectories [37, 44, 80–83]. However, simple trajectory decoders employed in these methods (e.g., MLP [41, 84] or GRU [45]) may produce impractical trajectories and suffer from modal collapse. To address this issue, ORION [42] incorporates generative planners into VLM architectures, enhancing trajectory feasibility but increasing model complexity and computational demands. In our work, we integrate a physical action codebook for vehicle motion into a pretrained VLM to effectively bridge the semantic reasoning and physical action space.
Reinforcement Fine-tuning. RFT [48] has shown considerable promise in enhancing the performance and adaptability of LLMs, as demonstrated in DeepSeek-R1 [22]. In autonomous driving,
KDinsotiwllaetdiogne LangLuarage VMisoidoenl Qwen2.5-VL-72B GT Decision ISnapmutes Reinforcement Fine-tuning PolicGyrOouptpimRiezlatiiovne Sample 1 | Slow Thinking
H FSiunpe-truvinsiendg GT Trajectory → Action Reasoning $\blacktriangleleft$ Reasoning Data Trajectory 1 a<tdhdiintiko>nalThriesasiso niangc…omTplhex esgcoenvaerhiioclereqsuhioriunlgd wait for the vehicle on the right to pass before proceeding straight with caution.</think> <answer><action_0><action_0><action_0><acti Reasoning Token Action Token on_18><action_22><action_26><action_35><ac
g Reasoning Format tion_32><action_42><action_38></answer>
- Scene Analysis 个 Sample 2 | Fast Thinking
- Critical Object Identification Qwen2.5-VL-3B Trajectory 2 <think> This is a straightforward scenario, and a
- Intention Reasoning direct decision can be made...</think>
- Final Action Decision <answer><action_22><action_12><action_26>< action_34><action_31><action_33><action_68> <action_53><action_42><action_58></answer>
System Prompt Text Tokenizer Vision Encoder Sample 3 | Slow Thinking
- Role: You are an advanced full self-driving system. <think> This is a complex scenario requiring
- Task: You will be provided with video observations from Trajectory 3 saldodwit-iomnoavli nregaosropnionsgsi…bl yThsteovpepheidc.leThaeheagdo s
the ego vehicle’s surrounding cameras, along with the vehicle should change to the left lane.</think>
vehicle’s current dynamic states. Your task is to predict <answer><action_12><action_23><action_14><
the most appropriate driving action for the next 5 seconds. action_89><action_17><action_33><action_86>
- Fast/Slow Thinking: If necessary, use step-by-step <action_75><action_67><action_83></answer>
reasoning (Chain-of-Thought) ... Otherwise, you may Multi-view
directly predict the final driving action. 0 Camera Streams Reward
TurnInLsefttr|uGctoioStnraight | Turn Right VeloEcigtyo sAtcactelesration | History Action S21: 0.9020 RMeowdealrd Penalty for slow thinking
Gen-Drive [7] and TrajHF [85] employed the RFT to align the trajectory generation model with safety constraints and human driving preferences. RAD [86] combined 3D Gaussian splatting to generate scenarios and conduct closed-loop RL training. However, the application of RFT in endto-end VLM/VLA-based autonomous driving remains nascent. While previous methods, such as AlphaDrive [38], utilize GRPO instead of direct preference optimization (DPO) [87] to enhance planning performance and ensure training efficiency and stability, they are still limited to simplified settings involving only high-level meta-actions. In this work, we advance this direction by applying RFT to optimize the end-to-end VLA framework in both scene reasoning and low-level planning, and we adopt GRPO to accelerate convergence and enhance training stability.
# 3 AutoVLA
The proposed AutoVLA framework consists of two main components, as shown in Fig. 1. 1) VLM Backbone: It is capable of processing visual and textual input and generating corresponding tokens (reasoning and action), employing a unified autoregressive Transformer decoder. 2) Physical Action Token Generation: We extend the language model decoder to output physical action tokens that directly correspond to vehicle movements. These tokens are designed to comply with physical constraints and can be reliably translated into physically feasible planning trajectories.
Training of AutoVLA is conducted in two stages, as illustrated in Fig. 3. 1) Supervised Fine-Tuning uses ground-truth trajectory data and distills high-quality reasoning data from a large-scale VLM. 2) Reinforcement Fine-Tuning uses task-specific reward functions to optimize planning performance while improving the running efficiency by minimizing unnecessary reasoning. The details of our model and training process are illustrated below.
# 3.1 Framework
Model Inputs. AutoVLA takes as input multi-view, multi-frame camera data $C$ from onboard cameras, high-level navigation instructions $I$ , and ego vehicle states $S$ , and performs scene reasoning and trajectory planning. Specifically, we utilize three RGB cameras positioned at the front, front-left, and front-right sides of the vehicle. Each camera stream $c ^ { i } = [ c _ { t - 3 } ^ { i } , c _ { t - 2 } ^ { i } , c _ { t - 1 } ^ { i } , c _ { t } ^ { i } ]$ captures four sequential frames at a frequency of $2 \mathrm { H z }$ , including the current and three preceding frames, supplying temporal information for scene dynamics. Additionally, the model employs high-level navigation instructions $I$ (e.g., Turn Left and Go Straight) to specify intended directions explicitly. The ego vehicle’s state $S$ encompasses current velocity, acceleration, and historical actions.
Base VLM Model. We adopt Qwen2.5-VL-3B [21] as the vision-language backbone of AutoVLA. Qwen2.5-VL is a series of powerful multimodal large language models that possess strong visual understanding capabilities, and the open-source nature of the Qwen2.5-VL model facilitates taskspecific fine-tuning. The 3B variant offers a good trade-off between efficiency and performance, making it suitable for deployment in onboard devices.
Action Tokenization. To enable trajectory planning within the language model, we discretize continuous vehicle trajectories $\mathbf { P } \in \mathbb { R } ^ { \tilde { r } \times d }$ into a sequence of physical action tokens $\mathbf { a } = [ a _ { 1 } , \dots , a _ { T } ]$ , where $a _ { t } \in { \mathcal { A } }$ , $T$ is the length of the tokenized predicted trajectory and each token is represented by short-term spatial position and heading movement $( \Delta x , \Delta y , \Delta \theta )$ . This transforms the planning task into a next-token prediction problem, which can be conducted within the language model. We build our action codebook $\mathcal { A } = \{ a _ { 1 } , a _ { 2 } , . . . , a _ { K } \}$ using a K-disk clustering method [88–90], which covers the majority of vehicle movement patterns. Finally, we obtain a vehicle motion codebook that consists of $K = 2 0 4 8$ discrete action tokens. Following [30, 91], these action tokens are incorporated into the VLM as additional tokens (i.e., <action_ $. 0 >$ , <action_1>, . . . ). During inference, the model outputs a sequence of these action tokens, which are subsequently decoded into a planning trajectory using the action codebook. More details about action tokenization and trajectory decoding are provided in the Supplementary Material.
Unified Reasoning and Action. AutoVLA unifies reasoning and action generation within a single autoregressive Transformer framework, enabling adaptive switching between fast and slow thinking depending on the driving scenario. In fast thinking mode, AutoVLA directly predicts physical action tokens without generating long reasoning steps, enabling rapid responses in straightforward scenarios. In contrast, slow thinking mode involves structured CoT reasoning, where the model first analyzes the environment, identifies critical elements, and reasons through potential outcomes before deciding on the final driving action. To enable this dual thinking capability, AutoVLA is trained with a mixture of direct action supervision and reasoning-augmented data. We design system prompts and response formats to support both modes consistently.
# 3.2 Reasoning Data
Reasoning data provides high-quality CoT annotations that are essential for training VLMs with reasoning capabilities [42]. In driving tasks, reasoning involves understanding complex semantics and interactions in dynamic environments [92–95]. Despite its importance, the development of a high-quality, large-scale driving reasoning dataset remains a key challenge due to three major limitations: 1) limited scenario diversity and repetitive examples, 2) inadequate representation of critical perceptual cues, such as traffic signs and vehicle indicator signals, 3) low-quality reasoning process, such as repeatedly stopping at a stop sign without justification.
To address these issues, we propose an automated reasoning annotation pipeline using the advanced Qwen2.5-VL-72B model [21]. This pipeline enables automatic generation of high-accuracy reasoning annotations and supports knowledge distillation from a large capable model to a more compact target model. The pipeline generates structured reasoning annotations across four key components: detailed scene descriptions, identification of crucial objects, prediction of surrounding agents’ intentions, and determination of appropriate driving actions. To regulate the reasoning outcomes, our approach incorporates ground-truth driving actions as hints, guiding the model to produce causal explanations that explicitly link driving decisions to scene context. This structured prompting method significantly reduces nonsensical outputs and minimizes the need for manual correction.
Employing this annotation pipeline, we compile a comprehensive reasoning dataset comprising approximately $4 5 . 6 \mathrm { k }$ CoT reasoning annotations for the nuPlan dataset and $7 . 2 \mathrm { k }$ annotations for the Waymo E2E dataset. Additionally, we reformat and integrate DriveLM [96], a VQA dataset built on nuScenes and CARLA simulation data, to augment the reasoning data. Additional details and illustrative examples are provided in the Supplementary Material.
# 3.3 Supervised Fine-tuning
Supervised fine-tuning (SFT) is employed to train the model to generate both reasoning and action sequences. Given multi-frame camera images $C$ , a high-level navigation instruction $I$ , and the ego vehicle state $S$ , the model is trained to produce a sequence of output tokens. The output sequence consists of language tokens $\mathbf { l } = [ l _ { 1 } , \dots , l _ { L } ]$ for reasoning followed by action tokens $\mathbf { a } \overset { \cdot } { = } \left[ a _ { 1 } , \ldots , a _ { T } \right]$ . To enable both fast and slow thinking during SFT, we curate training data with ground-truth assistant responses that either include only the final action tokens or combine CoT reasoning with the corresponding action tokens. In the fast-thinking mode, l is a fixed, short template indicating that reasoning is not needed. Conversely, in the slow-thinking mode, l begins with a template that introduces the need for CoT reasoning, followed by a structured sequence of reasoning.
The first supervision signal is the standard causal language modeling objective, which minimizes the negative log-likelihood of the target token sequence and facilitates the reasoning capability. The other supervision signal focuses on the planning accuracy, and we introduce an auxiliary loss over action tokens $\mathbf { a } = [ \bar { a _ { 1 } } , \dots , \bar { a _ { T } } ]$ , which appear at positions $x _ { L + 1 }$ to $x _ { L + T }$ in the output sequence. Given an output sequence $\mathbf { x } = [ l _ { 1 } ^ { \top } , \ldots , l _ { L } , a _ { 1 } , \ldots , a _ { T } ]$ , the loss functions are defined as:
$$
\mathcal { L } _ { \mathrm { L M } } = - \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \log p _ { \theta } ( x _ { i } \mid x _ { < i } , C , I , S ) , \quad \mathcal { L } _ { \mathrm { a c t i o n } } = - \frac { 1 } { T } \sum _ { i = L + 1 } ^ { L + T } \log p _ { \theta } ( x _ { i } \mid x _ { < i } , C , I , S ) ,
$$
where $N = L + T$ , and $p _ { \theta }$ denotes the model’s predicted distribution parameterized by $\theta$ .
To jointly optimize reasoning and action generation, we combine the language modeling loss and the action loss into a single SFT loss function. To address the imbalance between reasoning data and action-only data, and to encourage the model to learn from examples that include CoT reasoning, we apply a per-sample weighting factor based on the presence of CoT in the ground truth. The overall loss for each training example is computed as follows:
$$
\begin{array} { r } { \mathcal { L } _ { i } ^ { \mathrm { S F T } } = w _ { i } \cdot \left( \mathcal { L } _ { \mathrm { L M } , i } + \lambda _ { \mathrm { a } } \mathcal { L } _ { \mathrm { a c t i o n } , i } \right) , \quad w _ { i } = \left\{ \begin{array} { l l } { \lambda _ { \mathrm { c o t } } } & { \mathrm { i f ~ C o T ~ i s ~ p r e s e n t ~ i n ~ G T } } \\ { 1 } & { \mathrm { o t h e r w i s e } } \end{array} \right. , } \end{array}
$$
where $\lambda _ { \mathrm { a } }$ and $\lambda _ { \mathrm { c o t } }$ are hyperparameters that control the relative importance.
# 3.4 Reinforcement Fine-tuning
To further improve the performance of AutoVLA and align it with driving requirements and taskspecific rewards, we introduce a reinforcement learning-based post-training method. This RFT stage enables the model to perform adaptive reasoning and optimize planning performance. We employ the GRPO algorithm [49], which stabilizes training and improves convergence efficiency. Moreover, the inherent multi-modality of planning, characterized by multiple feasible trajectories in the same scenario, naturally aligns with the group-based optimization framework of GRPO [38].
Given a scenario input query $q$ , comprising sensor images, the ego vehicle’s state, and driving instruction, we sample a set of $G$ candidate outputs ${ \cal O } = \{ o _ { 1 } , o _ { 2 } , . . . , o _ { G } \}$ from the old policy $\pi _ { \theta _ { \mathrm { o l d } } }$ . The current policy $\pi _ { \boldsymbol { \theta } }$ is then optimized using the normalized group-relative advantage $A _ { i }$ , by maximizing the following objective:
$$
\mathcal { I } _ { \mathrm { G R P O } } ( \boldsymbol { \theta } ) = \mathbb { E } _ { \boldsymbol { q } , \{ \boldsymbol { o } _ { i } \} \sim \pi _ { \boldsymbol { \theta } _ { \mathrm { o l d } } } ( \boldsymbol { O } | \boldsymbol { q } ) } \left[ \frac { 1 } { G } \sum _ { i = 1 } ^ { G } \left( \mathcal { I } _ { i } ^ { R } - \beta \mathbb { D } _ { \mathrm { K L } } ( \pi _ { \boldsymbol { \theta } } \| \pi _ { \mathrm { r e f } } ) \right) \right] ,
$$
$$
\mathcal { I } _ { i } ^ { R } = \operatorname* { m i n } \left( \frac { \pi _ { \theta } ( o _ { i } | q ) } { \pi _ { \theta _ { \mathrm { o i d } } } ( o _ { i } | q ) } A _ { i } , ~ \mathrm { c l i p } \left( \frac { \pi _ { \theta } ( o _ { i } | q ) } { \pi _ { \theta _ { \mathrm { o i d } } } ( o _ { i } | q ) } , 1 - \epsilon , 1 + \epsilon \right) A _ { i } \right) , ~ A _ { i } = \frac { r _ { i } - \operatorname* { m e a n } ( \{ r _ { j } \} _ { j = 1 } ^ { G } ) } { \mathrm { s t d } ( \{ r _ { j } \} _ { j = 1 } ^ { G } ) } ,
$$
where $\theta$ and $\theta _ { o l d }$ denote the current and old policy parameters, $r _ { i }$ is the reward for sample $o _ { i }$ , $\epsilon$ and $\beta$ are hyperparameters controlling the clipping range and the weight of the KL divergence regularization term, and $\pi _ { \mathrm { r e f } }$ is the reference policy from the SFT stage.
The final reward function is defined as $r = r _ { \mathrm { D r i v i n g } } - \lambda _ { r } r _ { \mathrm { C o T } }$ , where the $\lambda _ { r }$ denotes the balance weight. The term $r _ { \mathrm { D r i v i n g } }$ varies across benchmarks. For the nuPlan dataset, we employ the Predictive Driver Model Score (PDMS) [51] as the driving reward, which captures aspects such as safety, comfort, travel efficiency, and other driving quality metrics. For the Waymo E2E dataset, due to the limited availability of Rater Feedback Score (RFS) annotations [52], we use the Average Displacement Error (ADE) as the driving reward. To discourage unnecessarily long reasoning chains, we incorporate a CoT length penalty $r _ { \mathrm { C o T } }$ into the reward function. Additional implementation details are provided in the Supplementary Material.
# 4 Experiments
# 4.1 Experimental Setup
Datasets. We train the AutoVLA model using a diverse set of real-world and simulation datasets. The nuPlan (Open-Scene) dataset [50, 97] contains 120 hours of large-scale driving data with eight streams of camera data and object annotations. The Waymo end-to-end driving dataset [52] comprises 4,021 20-second driving segments with eight streams of camera views and ego vehicle trajectories, especially focusing on challenging and long-tail scenarios, such as driving through construction areas or risky situations. The nuScenes dataset [53] provides 1,000 urban driving scenes with six camera views. The CARLA-Garage dataset [55] provides over 500,000 frames of camera data from the CARLA simulator. In addition to the collected reasoning data, we utilize the DriveLM dataset [96] for nuScenes and CARLA datasets, by reformatting the VQA pairs to facilitate CoT reasoning.
Benchmarks. We evaluate AutoVLA on both open-loop and closed-loop benchmarks across realworld and simulated environments. Open-loop performance is assessed on two public benchmarks: the NAVSIM benchmark [51] from the nuPlan dataset and the nuScenes benchmark [65]. The NAVSIM benchmark employs PDMS to assess key aspects of driving behavior, such as collision and ego progress. The nuScenes benchmark uses L2 distance and collision rate as evaluation metrics. Additionally, we report our model’s performance on the Waymo end-to-end driving benchmark using the RFS metric, which reflects human-judged planning quality. Closed-loop performance is evaluated on the Bench2Drive benchmark [54] in the CARLA simulator. Bench2Drive contains 44 interactive, closed-loop scenarios under varying locations and weather conditions, using metrics such as success rate, driving score, efficiency, and comfort.
Implementation Details. Each action token corresponds to 0.5 seconds of movement, and the planning horizon is set to 5 seconds. Consequently, the model outputs 10 action tokens, from which a 5-second trajectory can be decoded. For SFT, we use a learning rate of $1 \times 1 0 ^ { - 5 }$ and the FSDP training strategy. The model is trained for 5 epochs using 8 NVIDIA L40S GPUs. We use a per-GPU batch size of 1 and accumulate gradients over 4 steps, resulting in an effective batch size of 32. The weighting parameters in the SFT loss function are set to $\lambda _ { a } = 1$ and $\lambda _ { c o t } = 4 0$ . For RFT, we employ the LoRA adapter [98] for parameter-efficient training. The learning rate for RFT is set to $3 \times 1 0 ^ { - 5 }$ , and the KL regularization weight $\beta$ is set to 0.04. We perform a single policy update at each step, allowing the use of a simplified objective without the need for clipping or tracking the old policy. The model is fine-tuned for 6, 000 steps, and the best-performing checkpoint is selected for evaluation. Additional details are provided in the Supplementary Material.
nuPlan PDM Score nuPlan No at-fault Col. 100 80 Action-only Training 80.54 Action-only Training 96.89 Reasoning Training73.62 Reasoning Training94.18 74.97 95 95.19
60 71.79 No at-fault Col. 93.26 65.19 92.33 . 90 61.38 86.23 51.17 85 50 84.44 80.30 80 44.06 40 10k 50k 100k 185k 10k 50k 100k 185k nuScenes L2 (m) nuScenes Col. Rate (%) 1.0 Action-only Training Action-only Training 1.6 1.47 Reasoning Training 0.9 0.84 Reasoning Training 1.4 □ 0.8 1.2 1.16 3 0.7 0.68
1 1.0 1.04 3 0.6 0.45 0.86 0.93 1 0.40 0.8 0.4 0.35 0.82 0.76 0.41 0.39 0.70 0.3 0.6 0.3 0.2 10k 50k 100k 185k 10k 50k 100k 185k
# 4.2 Main Results
This section reports the main results of the AutoVLA model for various datasets and benchmarks, with additional results included in the Supplementary Material.
(a) Before RFT Ground Ground 6 After RFT 95 Truth Truth Planning Planning 5 89.11 90 Front Back 3.95s 85 80.54 80 2 1.31s 75 N (c) 0 70 <think> This is a complex scenario requiring additional reasoning. Slow Thinking Before RFT $0$ Fast Thinking After RFT Runtime PDMS Scene Description: 0.95 The scene appears near a building entrance. The front view shows a clear path ahead with no immediate (b) obstacles... The weather seems clear and sunny, with no visible signs of rain or adverse conditions. 0.90 Critical Object Description: <think> 1. White Car (Front view): Just ahead of the ego vehicle, and appears stationary or slowly moving forward. This is a straightforward 2. A Row of Cars(Left View): Several cars are parked or waiting along the left side, with some space scenario, and a direct decision between them. Paying attention to these cars is crucial to avoid any potential collisions while turning left... can be made.
0.80 Reasoning on Intent: </think>
1.75 -aTnhyecoeplrgleiosiveoenhcsi.eclToehfietshacebusrwrehenintcteleycoafmraoincvtitinvhge aiftnr odainctmaotnodrdelricagathretssopsneuteghgde(ls4et.fs9tt7nh9eactmet/hs)esiatvanetdheiiscslaienscsataruruteicosttueasdt itaopntpaurroyn.alcehf t .o. .avoid <answer> 0.70 GRPO goupste2 -BTehset vDerihviicnleg hAacstiao nsi:gnificant lateral acceleration, suggesting that the vehicle is already turning left. T</haaectnfiisonwna_le5ro2>u0tp>u<taactcitoion_n9is2:0>… GRPO group size 8 Turn left with a constant speed. Given the need to navigate around the cars on the left and … </think> 0.65 1k 2k 3k 4k 5k 6k <answer> The final output action is: <action_520><action_920>…</answer> Training Step
Data Scaling Results. We train AutoVLA on a mixture of the nuPlan and nuScenes datasets with varying training set sizes (10k, 50k, 100k, and the full $1 8 5 \mathrm { k \Omega }$ samples), with action-only supervision or with additional CoT reasoning supervision. The models are evaluated on the respective standard test sets, and the results are shown in Fig. 4. We observe that increasing the amount of training data consistently improves planning performance on both datasets. In the nuPlan dataset, when using fewer than $5 0 \mathrm { k }$ training samples, CoT reasoning does not outperform action-only in terms of PDMS and Collision Score. This is likely due to the increased difficulty of learning structured reasoning from limited data. However, as the training set size increases, models trained with CoT reasoning surpass those with action-only supervision, highlighting the scalability advantages of reasoning-augmented learning. In the nuScenes dataset, action-only supervision yields better performance in terms of L2 distance and collision rate. This is likely because nuScenes contains mostly simple scenarios that do not require complex reasoning, making CoT training less beneficial in this setting.
RFT Performance. We apply RFT to the full-data CoT reasoning model trained via SFT to enhance its planning performance. As shown in Fig. 5(a), RFT yields a $1 0 . 6 \%$ improvement in PDMS (on the NAVSIM testing set) and a $6 6 . 8 \%$ reduction in runtime (average over 500 testing scenarios). The reward curve in Fig. 5(b) illustrates the progressive improvement of the model’s policy during RFT. Experiments with different GRPO group sample sizes indicate that larger groups lead to better performance by promoting broader exploration of training samples. As illustrated in Fig. 5(c), RFT also reduces unnecessary and slow reasoning in simple scenarios, driven by the CoT length penalty that encourages fast thinking for straightforward driving cases. A qualitative comparison shows that the SFT model produces suboptimal plans due to error accumulation in generation, whereas the RFT model (optimized via PDMS-based reward) generates better planning trajectories.
nuPlan Benchmark Results. We evaluate AutoVLA against state-of-the-art end-to-end driving models on the NAVSIM benchmark [51] and present results in Table 1. In best-of-N planning, we use
Table 1: Testing Results on the NAVSIM (nuPlan) End-to-end Driving Benchmark
eton-ony irai PGlraonunindgTruth (b) dd<Stcheinek>DTehsicsrisptaiocno:mplex scenario requiring additional reasoning. CoT-enhace d ih reri ddscChoriontiswctsraulacOtbiuboijlnedcbintagrDriaensrdsc rsaionptdmi oseing:rneseinedriyc atlionggr tohaedrwoaordk. Taheafrdo.nSterivgehrtalcvaemhiecrlaecsaprteuvriesibalecion tshtreudcitsitoan sciet,etrwaitvhelbinargrinerbsoatnhd iar feectwiovnesh.i cTlhes.front left camera 1. Construction Barriers: Located in the center of the road ahead, these barriers indicate ongoing road work. They are important as they may require the vehicle to slow down and navigate around them. (a) 7.557 s2l.oVwehdiocwlensoAr hsteoapd:dPueostiotitohne rdofaudrtwheorkdorwont thheer vroeahidc,ltehseasheevaedh.icles are traveling in the same direction as the ego vehicle. They are important as they may M 7.406 7.447 N vFierhsitlcyl,ensohtoicueldt haep pcronasctrhutchteiosne barriers.wTihtehsceabutairorinerasnidndpioctaetnetitahllayt rtehdeurcoeaditsa hsepaede dm…ay…have reduced speed limits or require careful navigation. The ego - Given the current driving command instruction to "go straight," the ego vehicle should continue moving forward while being mindful of the road 7.239 ddcBoenstd itDiroinvisnaghAecatdi.oTnh:e presence of construction barriers and other vehicles suggests that the vehicle should slow down to navigate safely through the area. Move forward with a deceleration. This to safely approach the construction barriers, maintain a safe distance from other vehicles, and comply with any Front road conditions indicated by the signs. … </think> 7.1 <answer> The final output action is: <action_1457><action_1457>…</answer>
an oracle scorer to select the optimal trajectory from six generated candidates. After RFT, AutoVLA demonstrates significantly improved performance, aligning more closely with the NAVSIM reward signal. The best-of-N strategy further enhances performance, achieving the highest PDMS. Overall, AutoVLA achieves competitive results while demonstrating scalability across diverse datasets.
Waymo E2E Performance. We evaluate AutoVLA on the Waymo end-to-end driving dataset [52], which features long-tail and complex driving scenarios. The model’s performance under various training settings on the test set is presented in Fig. 6. The results reveal that pretraining on a combination of nuPlan and nuScenes datasets significantly enhances performance, suggesting enhanced scene understanding through exposure to more diverse training data. Incorporating CoT reasoning in training further improves planning performance compared to action-only training. Post-training with RFT, using ADE as the reward function, achieves the best overall RFS metric. A qualitative example in a construction zone demonstrates the model’s ability to reason about occlusions and generate effective detour plans. Additional results are provided in the Supplementary Material.
CARLA Closed-loop Performance. We evaluate the closed-loop driving performance of our AutoVLA model on the Bench2Drive benchmark [54] in the CARLA simulator. The model is trained using SFT with both trajectory-only and CoT data. During testing, the planning frequency is set to $2 \ : \mathrm { H z }$ . The results, shown in Table 2, demonstrate that AutoVLA outperforms existing end-to-end driving models in terms of overall driving score and success rate in the closed-loop test.
Table 2: Testing Results on the Bench2Drive (CARLA) Closed-loop Driving Benchmark
# 4.3 Ablation Study
Text Waypoint Output. We use the same mixed training set from the nuPlan and nuScenes datasets to train a model that predicts waypoints in a text format, which are then converted into a trajectory. We evaluate its performance in an open-loop planning setting using the standard test sets. The results, shown in Table 3, indicate that our action tokenization and generation method significantly outperforms the text-based
Table 3: Influence of Physical Action Tokenization
waypoint prediction approach. Additionally, due to the need to decode numerical values, the textbased method incurs a substantially higher computational cost in generating the final trajectory. This shows the limitation of language models in handling precise numerical reasoning. | Recent advancements in Vision-Language-Action (VLA) models have shown promise
for end-to-end autonomous driving by leveraging world knowledge and reasoning
capabilities. However, current VLA models often struggle with physically
infeasible action outputs, complex model structures, or unnecessarily long
reasoning. In this paper, we propose AutoVLA, a novel VLA model that unifies
reasoning and action generation within a single autoregressive generation model
for end-to-end autonomous driving. AutoVLA performs semantic reasoning and
trajectory planning directly from raw visual inputs and language instructions.
We tokenize continuous trajectories into discrete, feasible actions, enabling
direct integration into the language model. For training, we employ supervised
fine-tuning to equip the model with dual thinking modes: fast thinking
(trajectory-only) and slow thinking (enhanced with chain-of-thought reasoning).
To further enhance planning performance and efficiency, we introduce a
reinforcement fine-tuning method based on Group Relative Policy Optimization
(GRPO), reducing unnecessary reasoning in straightforward scenarios. Extensive
experiments across real-world and simulated datasets and benchmarks, including
nuPlan, nuScenes, Waymo, and CARLA, demonstrate the competitive performance of
AutoVLA in both open-loop and closed-loop settings. Qualitative results
showcase the adaptive reasoning and accurate planning capabilities of AutoVLA
in diverse scenarios. | [
"cs.CV"
] |
# 1 Introduction
The development of human-like chatbots has been a long-standing aspiration in the history of AI chatbot research. Over the years, researchers have introduced various aspects that constitute humanlikeness, such as persona (Zhang et al., 2018; Ahn et al., 2023), long-term memory (Xu et al., 2022a,b), commonsense (Zhou et al., 2021; Qin et al., 2021), emotional support (Rashkin et al., 2019; Liu et al., 2021; Zhang et al., 2024), roleplay (Shao et al., 2023; Li et al., 2023), and virtual world (Park et al., 2022, 2023). These efforts have led to the success of commercial chat services like Replika and Character AI, which have met the public’s demand for social companion chatbots (Chaturvedi et al., 2023; Guingrich and Graziano, 2023). The pursuit for human-like chatbots still remains important with the remarkable advancements in large language models (LLMs) as dialogue agents, intersecting with the growing societal and technological demand for AI agents capable of engaging in more natural and human-like interactions.
Research on dialogue response generation has predominantly focused on generating appropriate and consistent next utterances, conditioning on the textual information within dialogue contexts. Meanwhile, although the question of what to respond has received considerable attention, the issue of when to respond remains underexplored, despite its crucial role in enabling real-time dialogue agents to appropriately ground their responses on the temporal contexts regarding the status of ongoing conversational events. For instance, as illustrated in Figure 1, if an agent generates only instant responses without considering response timing, it can cause repetitive interactions without conversational progress or produce responses that do not align with the temporal context of the conversational event. In contrast, by incorporating response timing, an agent can maintain a natural flow while providing timely responses. This requires grounding responses on the temporal context tied to the status of the event, mirroring the way humans naturally adapt their responses in human-to-human conversations. This requires both the ability to introduce delays tied to the event status and the ability to generate responses conditioned on those delays.
However, it is inherently challenging to simulate such scenarios with dialogue models trained on existing datasets. Most dialogue datasets lack explicit temporal context and are created under the tacit assumption that interactions occur instantly. Additionally, collecting real-time conversations where temporal context is naturally embedded (e.g., text messages between individuals) is highly restricted due to privacy concerns and ethical considerations.
In this work, we propose a novel task named Timely Dialogue Response Generation, which aims to generate not only coherent responses but also to consider the temporal context associated with ongoing events. Specifically, it focuses on predicting the necessary time interval for the next utterance and generating a corresponding timeconditioned response. We introduce TIMELYCHAT dataset and propose a benchmark to assess two key aspects: response timing prediction and timeconditioned response generation. To create diverse event-driven dialogues, we combine the humanannotated event-duration pairs from a temporal commonsense knowledge graph with the powerful dialogue generation capability of an LLM.
Furthermore, we introduce a large-scale dataset comprising 55K event-driven dialogues for supervised fine-tuning (SFT). To address the challenges of costly and labor-intensive manual annotation, we utilize unlabeled event sources from a large-scale temporal commonsense knowledge graph and leverage an LLM to pseudo-label event durations and synthesize diverse event-driven dialogues. Using this dataset, we present TIMER, a dialogue model fine-tuned with a multi-task learning objective that jointly predicts the time interval and generates the corresponding response.
Evaluation results on the proposed benchmark demonstrate that TIMER outperforms both instruction-tuned LLMs and dialogue models fine-tuned on other datasets in generating timeconditioned responses and predicting time intervals consistent with temporal commonsense. Furthermore, in dialogue-level evaluations, TIMER distinguishes between situations requiring delayed responses and those requiring instant responses more effectively, and generates more timely responses that align well with the predicted time intervals. Our contributions are three-fold:
• We propose a novel task named timely dialogue response generation, which considers not only what to respond but also when to respond.
• We introduce an SFT dataset enriched with diverse and comprehensive event knowledge, along with a time-augmented training approach.
• We release the TIMELYCHAT benchmark, training data, and our timely dialogue agent named TIMER to facilitate further research in this area.
# 2 Related Work
Long-term dialogue involves conversations that unfold over multiple sessions with time intervals between sessions. Xu et al. (2022a) introduce MultiSession Chat (MSC), which consists of up to five sessions separated by certain time intervals, resembling interactions in messaging platforms. Jang et al. (2023) emphasize the significance of speaker relationships in long-term dialogues and propose Conversation Chronicles (CC), a large-scale LLMgenerated dataset that incorporates a wider range of time intervals and fine-grained speaker information. Maharana et al. (2024) present LoCoMo, a very long-term dialogue dataset covering up to 32 sessions, along with a benchmark designed to assess various long-term memory capabilities. However, prior research primarily focuses on recalling persona sentences or past events from previous sessions, without addressing the temporal context between ongoing events and time intervals in realtime conversations. A notable attempt to incorporate such relations is GapChat (Zhang et al., 2023), which introduces an event timeline to capture event progression over given time intervals. Our work moves beyond the assumption of predetermined time intervals and instead necessitates a proactive dialogue agent capable of dynamically determining realistic time delays based on temporal context.
# 3 Task Definition
We introduce a new task named Timely Dialogue Response Generation, which aims to generate contextually appropriate responses while incorporating temporal considerations from the dialogue history. A key temporal factor that influences a response is how much time has passed since the previous utterance. To capture this, we define time interval as our primary temporal context, which represents the relative time difference (e.g., 10 minutes) between utterances. Formally, we model the conditional probability distribution $P _ { \theta }$ of a response $\boldsymbol { r } _ { t }$ at $t$ -th turn given the textual context $\boldsymbol { U } _ { < t }$ and the temporal context $T _ { < t }$ :
$$
r _ { t } \sim P _ { \theta } ( u _ { t } | U _ { < t } , T _ { \leq t } ) ,
$$
where $\tau _ { t } \in T \ ( t \geq 2 )$ denotes the elapsed time between $u _ { t - 1 }$ and $\boldsymbol { u } _ { t }$ . This probability distribution can be further decomposed into two subtasks, which are the main focus of this study.
Subtask 1. Response Timing Prediction The first task is to predict the optimal timing for delivering messages to users. Mathematically, this involves predicting the $t$ -th time interval given the available contexts:
$$
\begin{array} { r } { \hat { \tau } _ { t } \sim P _ { \theta } ( \tau _ { t } | U _ { < t } , T _ { < t } ) . } \end{array}
$$
Subtask 2. Time-conditioned Response Generation The subsequent task is to generate a contextually appropriate response while incorporating the predicted timing for message delivery:
$$
r _ { t } \sim P _ { \theta } ( u _ { t } | U _ { < t } , T _ { < t } , \hat { \tau } _ { t } ) .
$$
Note that this task formulation challenges the widely held assumption that dialogue agents should always respond to user messages instantly. Instead, it takes temporal context into account, i.e., the amount of elapsed time, to determine when a response should be generated.
# 4 TIMELYCHAT Benchmark
We construct TIMELYCHAT benchmark to assess the timely response generation capabilities of dialogue models. To this end, we first craft high-quality timely conversations through temporal knowledge base and LLMs and then design two evaluation processes. Figure 2 shows the overall construction process of our benchmark.
# 4.1 Data Construction
We incorporate temporal information into dialogues using a temporal commonsense knowledge base. This knowledge base captures various eventrelated temporal dynamics which is well suited for transforming temporal context into event-driven dialogues. By identifying temporal patterns, we seamlessly integrate them into conversations, utilizing the sophisticated dialogue generation capabilities of LLMs. We outline our data construction process below.
Event Knowledge Extraction. We first obtain a rich and reliable source of daily events and their typical durations for crafting event-driven conversations with temporal context. To this end, we utilize the event duration category of MC-TACO dataset (Zhou et al., 2019). The dataset consists of sentences for specific events, queries to ask the typical duration of the event (e.g., "How long does it take to . . . ?"), and human-annotated ground-truth answers. We utilize the sentences with groundtruth answers, i.e., event-duration pair, to synthesize event-driven conversations. During data construction, we excluded the examples whose temporal intervals shorter than one minute or longer than 24 hours to simulate realistic temporal delay in daily dialogue situations. Lastly, we instruct GPT-4 (Achiam et al., 2023) with these sentences and event-duration pairs to generate descriptive sentences. It integrates the event and its duration into coherent sentences, forming seed narratives for dialogue generation.
Timely Dialogue Generation. With the extracted temporal event knowledge, we instruct GPT4 to generate conversations. Our instruction contains the conditions that the generated dialogues must satisfy:
• Spatial Separation: The scenario must involve one speaker experiencing an event while conversing with another speaker about it. This ensures there are no contradictions arising from both speakers being in the same spatial context.
MC-TACO ATOMIC Event Narrative Sentence Relation Delay-interleaved Dialogue Jane is John’s isBefore Jane and John spent six
Question girlfriend. PHerasdonX wants PTearislonX asks teongcehtahnetri,nlgoshionugrtsrack of Jane It’omdamy.eeting John
How long {6 hours, 3 to date PersonY out PersonY shared stories over a Oh, where are you
does…? hours, …} tcharnodulgelhi thdiennbleor,osmtirnolgled guys going? Stella gardens, and watched the Maybe somewhere 1. Make paired stars emerge in the Jane new, I guess? event knowledge evening sky. That sounds fun. C 3. Synthesize Enjoy your date! Stella Event 6D uhroautirosn 2. Generate Instruction edivaelontg-uderisven 6 hours later seed narratives 2. Temporal Implicitness Jane night! 3. Mutual Exclusivity
• Temporal Implicitness: The response must avoid direct references to the elapsed time. This condition reduces the occurrence of dull responses that simply acknowledge the time interval and, more importantly, prevents lexical overlap with the ground-truth time interval, which could create shortcuts in the generation process.
• Mutual Exclusivity: The time-conditioned response must become untimely under contrary temporal conditions. In other words, a delayed response should be incoherent under an instant condition with no time interval, and an instant response should be incoherent when a time interval exists. It prevents generating time-agnostic responses that remain coherent regardless of the temporal context.
Along with these instructions, we provide one randomly selected example from six author-written dialogues, each ranging from 5 to 10 turns, to prevent ill-formed outputs and diversify dialogue lengths. After manual inspection and filtering out low-quality dialogues that did not meet all the conditions, the final synthesized dataset consists of 324 dialogues, with an average length of 6.5 turns. All prompts and examples used in the construction process are provided in Appendix A.
# 4.2 Evaluation Protocols
With the crafted conversations, we propose two evaluation approaches to assess the abilities of dialogue agents to generate timely responses: turnlevel and dialogue-level.
Turn-level Evaluation. In turn-level evaluation, we assess each subtask on the target response. For response timing prediction, a model predicts the time interval required for the next utterance given a dialogue context. We then evaluate (1) whether the model correctly classifies the next turn as either delayed or instant, and (2) how close is the predicted interval to the ground truth. We measure precision, recall, false positive rate (FPR), and F1 for the binary classification, and root mean squared logarithmic error (RMSLE) for regression by converting each time interval into minutes. For response generation, a model generates a time-conditioned response given a dialogue context and ground-truth time interval. We measure BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTScore (Zhang et al., 2020) as reference-based metrics. Additionally, we measure naturalness (Mehri et al., 2022) and timespecificity (Tsunomori et al., 2023) on a 5-point scale, adopting G-Eval (Liu et al., 2023) for automatic evaluation.
Dialogue-level Evaluation. One crucial quality of a timely dialogue agent is its ability to introduce appropriate delays considering the temporal context while maintaining a natural conversational flow. Inspired by dialogue-level evaluation methods with model-to-model interactions (Li et al., 2019; Zhou et al., 2024), we provide an event-driven scenario and let an agent converse with GPT-4 as a user simulator (Yoon et al., 2024; Kazi et al., 2024; Niu et al., 2024) for the fixed number of turns to measure dialogue-level metrics. We measure coherence (Mehri et al., 2022) and dialogue-level time-specificity to assess the quality of the agent’s responses, and measure delay appropriateness that considers both the timing and duration of delays, using G-Eval with a 5-point scale. The evaluation criteria of G-Eval metrics and simulator instructions are detailed in Appendix B.
# 5 TIMER: A Dialogue Agent for Timely Responses
# 5.1 Training Data Augmentation with Unlabeled Knowledge
Utilizing paired event-duration knowledge is essential for creating conversations that simulate timely responses. However, manually constructing such annotations is both costly and labor-intensive, posing a challenge to creating large-scale datasets for training LMs. To overcome this limitation, we leverage unlabeled event knowledge graphs and harness the capabilities of GPT-3.5 to construct large-scale paired knowledge and generate synthetic dialogues. This approach significantly reduces the manual effort required while enabling the creation of extensive training data.
Event Knowledge Extraction. We extract event knowledge from the $\mathrm { A T O M I C _ { 2 0 } ^ { 2 0 } }$ dataset (Hwang et al., 2021), a large-scale commonsense knowledge graph containing the event-centered category represented as event triplets (i.e., head, relation, and tail), which capture diverse temporal dynamics. To make more natural dialogues, we randomly replace the anonymized person names (e.g., PersonX) in the triplets with common names of US SSN applicants, following the method by Kim et al. (2023). Subsequently, we prompt GPT-3.5 to integrate these triplets into single-sentence event descriptions, producing more natural and coherent event representations.
Event Duration Estimation. Since the event triplets in $\mathrm { A T O M I C _ { 2 0 } ^ { 2 0 } }$ do not include annotated durations, we utilize GPT-3.5 to estimate typical durations. Specifically, we provide GPT-3.5 with the event descriptions and prompt it to extract the main event and predict its typical duration, which is then used as a pseudo label. We filter out examples where the predicted duration is less than a minute or exceeds 24 hours.
Dialogue Generation with Bootstrap Examples. We prompt GPT-3.5 using the instructions detailed in $\ S \ 4 . 1$ . During initial iterations, we observed that providing only the instructions often led to illformed dialogues, such as speaker mismatches or non-alternating turns. To address these issues and improve dialogue quality, we include a one-shot demonstration sampled from the TIMELYCHAT set in each prompt. All prompts used in the construction process are presented in Appendix A.1.
The resulting dataset consists of 55K events paired with their corresponding dialogues. Compared to existing long-term dialogue datasets in Table 1, our dataset includes a significantly larger amount of even-grounded dialogues without requiring costly human annotation and handles time intervals with finer granularity.
# 5.2 Time-augmented Training with Multi-task Learning Objectives
The goal of our training approach is to predict an appropriate time interval for delaying the response based on the temporal context of the conversation and then generate a time-conditioned response corresponding to the interval. For this purpose, we introduce a time interval prediction step before generating each turn’s utterance.
We propose a training approach for timely dialogue response generation, as formalized in Eqs. 2 and 3. For each turn consisting of a speaker identifier and a text utterance, we insert a time interval. We prepend prefix tokens to distinguish each component, formatting the input as $\tt { < S P K > }$ $s _ { t }$ <TIME> $\tau _ { t }$ ${ < \mathsf { U T T } > \ u _ { t } }$ , where $s _ { t } , \tau _ { t }$ , and $u _ { t }$ denote the speaker, the time interval, and the utterance at the $t$ -th turn, respectively. For turns within the dialogue context, we set $\tau = 0$ , indicating no delay, maintain coherence and align with typical instant responses.
From these inputs, we define two losses for training: response timing prediction loss and response generation loss. The losses are defined as follows:
$$
\begin{array} { r } { \mathcal { L } _ { \mathrm { t i m e } } = - \displaystyle \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \sum _ { t = 2 } ^ { T } \log p ( \tau _ { t } \mid s _ { \le t } , \tau _ { < t } , u _ { < t } ) , } \\ { \mathcal { L } _ { \mathrm { r e s p o n s e } } = - \displaystyle \frac { 1 } { N } \sum _ { i = 1 } ^ { N } \sum _ { t = 2 } ^ { T } \log p ( u _ { t } \mid s _ { \le t } , \tau _ { \le t } , u _ { < t } ) , } \end{array}
$$
where $N$ is the number of training examples, and $T$ is the number of turns in a dialogue.
The final multi-task learning objective is given as follows:
$$
\begin{array} { r } { \mathscr { L } = \mathscr { L } _ { \mathrm { r e s p o n s e } } + \lambda \mathscr { L } _ { \mathrm { t i m e } } . } \end{array}
$$
Table 1: Comparison of long-term dialogue datasets interleaved with time intervals. The number in parentheses under the # Sessions column represents the count of sessions with time intervals within a day. Event-grounded indicates whether the dialogues reflect the temporal context associated with events or not.
This approach ensures that the model learns both to predict appropriate time intervals and to generate time-conditioned responses effectively.
# 6 Experiments
# 6.1 Baselines
We evaluate two types of dialogue agents for simulating timely dialogue response generation: prompting-based models and fine-tuned models. The prompting-based models include LLMs optimized for dialogue use cases. We select 8B and 70B models of LLaMA 3.1 Instruct (Dubey et al., 2024) as open-source chat models, and GPT-3.5 and GPT-4 as proprietary models. We experiment with zero-shot, few-shot, and chain-of-thought (CoT) (Wei et al., 2022) prompting strategies to investigate the effectiveness of in-context learning without task-specific fine-tuning. The fine-tuned models are trained on dialogue datasets where time intervals are interleaved. We compare the following models:
• MSC 3B (Xu et al., 2022a): Fine-tuned on BlenderBot (Roller et al., 2021) using the MSC dataset, which includes time intervals between sessions.
• ReBot 400M (Jang et al., 2023): Fine-tuned on BART-Large (Lewis et al., 2020) using the CC dataset, which consists of large-scale LLM-generated dialogues.
• GapChat 3B (Zhang et al., 2023): Fine-tuned on MSC using the GapChat dataset, which incorporates event progress based on time intervals.
Implementation details of all models including TIMER 3B are described in Appendix B.1.
# 6.2 Turn-level Evaluation Results
Response Timing Prediction. Table 2 presents the results of response timing prediction on the
Table 2: Results of response timing prediction. For fewshot and CoT strategies, we provide balanced 2-shot demonstrations which consist of one delayed example and one instant example, along with the task description used in zero-shot prompting.
TIMELYCHAT. Overall, prompting-based models exhibit significantly low precision and F1 scores, and high FPR. This suggests that these models tend to over-predict the need for a delay, potentially introducing unnecessary intervals that disrupt the conversational flow. Although few-shot and CoT strategies slightly improve F1 scores across all LLMs, they sometimes negatively impact FPR compared to zero-shot prompting. In contrast, TIMER 3B achieves the highest F1 score and the lowest FPR compared to prompting-based models. Even the best-performing GPT-4 still lags significantly behind the fine-tuned TIMER 3B model.
Likewise, when it comes to predicting the length of time intervals, in-context learning methods fail to enhance performance effectively. While few-shot prompting achieves a lower RMSLE than CoT across all LLMs, it does not consistently outperform zero-shot prompting, as demonstrated by GPT-4’s results. These findings indicate that prompting with task descriptions and demonstrations alone is insufficient to reliably predict whether to pose a delay and how long it should last. In contrast, task-specific fine-tuning is essential for effectively learning these capabilities.
Table 3: Results of time-conditioned response generation on TIMELYCHAT. B-2, R-L, BS, Nat., and Spec. refer to BLEU-2, ROUGE-L, BERTScore, naturalness, and time-specificity, respectively.
Time-conditioned Response Generation. Table 3 shows the time-conditioned response generation performance on the TIMELYCHAT. For prompting-based models, we observe that zeroshot performance tends to improve as model size increases across all metrics. Among all LLMs, few-shot prompting consistently outperforms zeroshot prompting, while CoT prompting performs the worst in terms of naturalness and time-specificity. This aligns with previous findings that LLMs struggle to generate helpful CoT rationales for dialogue response generation (Chae et al., 2023).
Meanwhile, models fine-tuned on existing dialogue datasets that include time intervals exhibit poor overall performance. Notably, these models achieve low time-specificity, indicating that they struggle to generate timely responses conditioned on given time intervals. This stems from the characteristics that time intervals in existing long-term dialogue datasets are assigned arbitrarily rather than based on the temporal context of ongoing events, making it difficult for models to learn the conditional distribution of responses based on the given interval. For example, we find that these models frequently generate generic greeting messages, failing to capture the temporal nuances of timely responses. In contrast, TIMER 3B, despite having a smaller model size, achieves comparable naturalness to prompting-based LLMs and even surpasses LLaMA 3.1 8B. More importantly, it achieves the highest time-specificity, demonstrating that our training approach enables response generation that aligns well with event-specific temporal contexts.
Figure 3: Results of simulated interactions for timely dialogue agents. We perform pairwise t-test and denote statistically significant score differences from the other models with asterisk $( ^ { * } )$ $( p < 0 . 0 5 )$ .
# 6.3 Dialogue-level Evaluation Results
Beyond turn-level evaluation, we also conduct dialogue-level evaluation to assess whether a dialogue agent can introduce temporally contextual delays at appropriate timing without disrupting the conversational flow. We let the four zero-shot LLMs from the previous experiments, along with TIMER 3B, engage in 10 interactions with the simulator described in $\ S 4 . 2$ . To simulate event-driven dialogue, we provide the first turn of conversations from the TIMELYCHAT as the initial interaction.
We randomly sample 100 dialogues that include at least one delayed response and report three dialogue-level metrics in Figure 3. GPT-4 achieves the highest coherence among the models, demonstrating its ability to maintain a natural conversation flow, while TIMER 3B achieves the second-highest coherence score. Notably, TIMER 3B shows the highest delay-appropriateness and time-specificity scores. This suggests that TIMER 3B effectively considers both dialogue context and temporal context to predict delays with appropriate timing and
# Conversation Starter
User: Hey, have you heard from her? She’s been MIA since yesterday.
# TIMER 3B
# LLaMA 3.1 70B
# GPT-4
A: No, I haven't. I hope she's okay.
U: Have you tried calling or texting her?
A: Yeah, I've been trying to reach her, but she's not responding.
U: I'm getting a bit worried. Should we contact someone else to check on her, like a neighbor or a mutual friend?
A: Good idea. I'll give that a try. Hopefully,
she's fine.
U: Okay, let me know if you hear anything. I can help out if needed.
A: (1 day later) I finally got in touch with a
neighbor. Turns out she's fine, just needed
some time to sort things out.
U: That's a relief to hear! Thanks for checking up on her. I'll reach out and see if she needs anything.
A: No problem, I just want to make sure she's okay. Let me know if you need any help.
U: I will. It's great to have each other's back in situations like this. Thanks again! A: (15 minutes later) We usually don't hear from each other for long periods when she's traveling, so I just assumed..
U: I get that, but she usually gives us a headsup. Did she mention any plans to travel recently? A: (120 minutes later) It feels like she always does this. I remember last year she went for a spontaneous road trip without notifying anyone. U: Maybe she's taken a liking to unplanned
adventures lately. Have you tried reaching out through other means, like sending her a text or checking her social media for any posts or
updates?
A: (660 minutes later) Still nothing new, been trying to call her but my battery just died, was in the middle of a regular workday and didn't
notice how fast it drained.
U: I'm going to charge it and try again later. Do you think it's worth asking anyone else if
they've heard from her or noticed anything
unusual? Maybe someone else saw her or
heard about her plans.
A: (77 minutes later) Actually, yeah. I was
talking to Sarah the other day, and she said A: Yeah, I texted her earlier but haven't got a reply yet. I'm starting to get worried too.
U: Have you tried calling her? Maybe we
should check with others to see if anyone else has heard from her.
A: (30 minutes later) I called her just now, but it went straight to voicemail. I reached out to a couple of our friends too; they're also waiting to hear back.
U: I hope she's okay. Should we think about heading over there to check on her if we don't hear back soon?
A: (10 minutes later) That's a good idea. Let's give it another 10 minutes, and if there's still nothing, we can make a plan to head over. U: Alright, I'll get ready just in case we need to go. Let me know if you hear anything from her or our friends.
A: (10 minutes later) Still nothing from her or the others. I think we should head over there now, just to be safe.
U: Okay, I'll meet you at her place in about 15 minutes. Let's hope everything's alright.
A: (15 minutes later) Alright, I'll leave now too. See you soon at her place. Drive safe!
U: Thanks, you too. Let's hope for the best.
duration. Additionally, it generates delayed responses that are coherent only when a delay is given, thereby justifying and necessitating the delay. In contrast, LLaMA 3.1 8B and 70B exhibit relatively lower delay-appropriateness, while GPT-3.5 and GPT-4 achieve lower time-specificity scores. We further analyze these findings in the following case study.
Figure 4 presents illustrative examples of dialogue simulations conducted with TIMER 3B, LLaMA 3.1 70B, and GPT-4 for the same event. In TIMER 3B’s conversation, the agent correctly identifies a situation where a delay is appropriate, specifically, when the user’s utterance (e.g., “. . . let me know. . . ”) suggests a natural pause in the conversation. The agent then introduces a realistic 1- day delay before responding with an update about finding the missing person, successfully justifying the delay. In contrast, LLaMA 3.1 70B generates delayed responses in every turn, but the predicted time intervals appear somewhat arbitrary (e.g., 660 minutes, 77 minutes). Furthermore, its responses lack time specificity, making it difficult to establish a clear temporal correlation between the predicted delays and the generated response. GPT-4 predicts more realistic time intervals that better align with the temporal context compared to LLaMA 3.1 70B. However, it still fails to generate time-specific responses, meaning the predicted delays are not well justified. It also exhibits a tendency to overuse delays, which can disrupt the natural flow of conversation. We observe similar behavior in LLaMA 3.1 8B and GPT-3.5, reinforcing these findings.
Table 4: Pairwise human evaluation results on turn-level metrics. Win/Tie/Loss rates of TIMER 3B against zeroshot GPT-4 are presented.
Table 5: Pairwise human evaluation results on dialoguelevel metrics. Win/Tie/Loss rates of TIMER 3B against zero-shot GPT-4 are presented.
# 6.4 Human Evaluation Results
To investigate the reliability of LLM-based evaluation, we also conduct human evaluations on both turn-level and dialogue-level metrics. We recruit three graduate students as annotators, provide them with the same evaluation criteria used for LLMbased evaluation, and ask them to perform pairwise comparisons between responses or dialogue from two different models.
Table 4 presents the results for turn-level metrics, comparing TIMER 3B with the most competitive baseline, zero-shot GPT-4, on 90 randomly sampled examples. While TIMER 3B falls short of GPT4 in terms of naturalness, it slightly outperforms GPT-4 in time-specificity, which is consistent with the LLM-based evaluation results observed in Table 3.
Table 5 shows the results for dialogue-level metrics. Again, TIMER 3B lags behind GPT-4 in coherence, but it significantly outperforms GPT-4 in both delay-appropriateness and time-specificity. This finding aligns with the results shown in Figure 3, indicating that the proposed evaluation criteria and LLM-based evaluation are reliable measures for assessing desired model behavior. | While research on dialogue response generation has primarily focused on
generating coherent responses conditioning on textual context, the critical
question of when to respond grounded on the temporal context remains
underexplored. To bridge this gap, we propose a novel task called timely
dialogue response generation and introduce the TimelyChat benchmark, which
evaluates the capabilities of language models to predict appropriate time
intervals and generate time-conditioned responses. Additionally, we construct a
large-scale training dataset by leveraging unlabeled event knowledge from a
temporal commonsense knowledge graph and employing a large language model (LLM)
to synthesize 55K event-driven dialogues. We then train Timer, a dialogue agent
designed to proactively predict time intervals and generate timely responses
that align with those intervals. Experimental results show that Timer
outperforms prompting-based LLMs and other fine-tuned baselines in both
turn-level and dialogue-level evaluations. We publicly release our data, model,
and code. | [
"cs.CL"
] |
# 1. Introduction
Over the last decade, AI, supported by deep learning, has become increasingly more prevalent in our lives. However, as we rely more on deep learning technologies to make critical decisions, concerns regarding their safety, reliability and explainability naturally emerge. Indeed, deep learning models, such as those used for image classification, are considered black boxes as their internal workings are not easily interpretable, resulting in a possible lack of trust in their predictions.
Motivated by the need for more interpretable image classifiers, we introduce a novel neuro-argumentative learning (NAL) architecture which generates symbolic representations in the form of assumption-based argumentation (ABA) frameworks (Dung et al., 2009) from images, using objects identified in these images by Object-Centric (OC) methods (De Vita, 2020). The resulting ABA frameworks can be used to make predictions while allowing humans to follow a line of reasoning as to why the model made those predictions.
To generate ABA frameworks, our OC-NAL architecture uses ABA Learning (De Angelis et al., 2023, 2024), a method that uses argumentation in a logic-based learning fashion to generate ABA frameworks which, with their accepted arguments, cover given positive examples and do not cover given negative examples. OC-NAL also uses slot attention (Locatello et al., 2020) as the underpinning OC method, to support a granular understanding of input images in terms of the objects they contain. Overall, our OC-NAL architecture enables the extraction of meaningful properties and relationships between objects within the images, facilitating accurate classification with interpretable argumentation frameworks.
Figure 1: OC-NAL: the input image (top left) is processed by slot attention to obtain objects (coloured squares) mapped by MLPs into facts fed into a Background ABA framework; then ABA learning generates a Learnt ABA framework, which may admit several ‘extensions’ (i.e., sets of accepted arguments) $\Delta _ { 1 } , \ldots , \Delta _ { n }$ ; inference therewith gives a classification (bottom left).
Contributions Overall, we make the following contributions: 1) we tailor slot-attention to generate factual background knowledge suitable to be injected in ABA Learning; 2) we combine slot-attention and ABA Learning into a pipeline architecture (that we term OC-NAL) for neuro-argumentative learning; 3) we assess the performances of our method experimentally on (synthetic) image datasets, showing that it can be competitive against a baseline (Neuro-Symbolic Concept Learner (NS-CL) (Mao et al., 2019)).
# 2. Related Work
Proietti and Toni (2023) overview several approaches integrating logic-based learning with image classification. Specifically, NeSyFold (Padalkar et al., 2024) uses the rule-based learning algorithm FOLD-SE-M to convert binarised kernels from a trained CNN to a set of ASP rules (Gelfond and Lifschitz, 1991) with abstract predicates. It then uses semantic labelling to assign human-like concepts to predicates leading to global explanations of image classifications. Further, Embed2Sym (Aspis et al., 2022) uses clustered embeddings extracted by a neural network in combination with a symbolic reasoner encoded with predefined rules for explainable predictions. Some other approaches integrate learning and argumentation for image classification, notably Thauvin et al. (2024) classify images based on argumentative debates drawn from encoders and Kori et al. (2024a) explain the outputs of image classifiers with argumentative debates drawn from quantized features. No existing approach uses argumentation with object-centric methods, as we do. Indeed, to the best of our knowledge we are the first to propose such combination as a neuro-symbolic learning approach.
The closest approach to OC-NAL is Neuro-Symbolic Concept Learner (NS-CL) (Mao et al., 2019), which we use as a baseline. NS-CL uses object-centric learning via slot attention to identify objects within images, while its reasoning modules incorporate set transformers to extract and generate explanations. These are presented as grid-like representations, enabling users to understand the concepts applied in classification and derive further rules. We use this approach as a baseline for the evaluation of our method, even though its outputs are at a different level of abstraction than our fully argumentative method.
# 3. Background
In this section we briefly recall some notions that are the basis of our OC-NAL architecture.
Slot Attention Slot Attention (Locatello et al., 2020) maps a set of $N$ input feature vectors $z \in \mathbb { R } ^ { N \times d }$ , of dimension $d$ , obtained from an input image $x$ , to a set of $K$ output vectors, of dimension $d _ { S } ( \leq d )$ , $\hat { z } \in \mathbb { R } ^ { K \times d _ { S } }$ that we refer to as slots. The input features are projected with linear layers to create $k e y$ and value vectors, represented by $\mathbf { k }$ and $\mathbf { v }$ , respectively. The slots are also projected with a linear layer, resulting in a query vector $\mathbf { q }$ . To simplify our exposition, later on, let $f _ { s }$ denote the slot update function, defined as:
$$
\hat { z } ^ { t + 1 } : = f _ { s } ( z , \hat { z } ^ { t } ) = \hat { A } \mathbf { v } , \hat { A } _ { i j } : = \frac { A _ { i j } } { \sum _ { l = 1 } ^ { N } A _ { i l } } , A : = \mathrm { s o f t m a x } \left( \frac { \mathbf { q } \mathbf { k } ^ { T } } { \sqrt { d _ { S } } } \right)
$$
where $A \in \mathbb { R } ^ { K \times N }$ is the cross-attention matrix. The queries $\mathbf { q }$ in slot attention are a function of the slots $\hat { z } ^ { t }$ , and are iteratively refined over $T$ iterations. The initial slots $\hat { z } ^ { t = 0 }$ are randomly sampled from a standard Gaussian (Locatello et al., 2020).
Assumption-Based Argumentation (ABA) Assumption-Based Argumentation (Dung et al., 2009) is a well-known symbolic formalism for modelling non-monotonic reasoning. An ABA framework (Dung et al., 2009) is a tuple $\langle \mathcal { L } , \mathcal { R } , \mathcal { A } , \overline { { \mathbf { \Omega } } } \rangle$ where: (i) $\langle \mathcal { L } , \mathcal { R } \rangle$ is a deductive system where $\mathcal { L }$ is the language and $\mathcal { R }$ is the set of (inference) rules; (ii) ${ \mathcal { A } } \subseteq { \mathcal { L } }$ is the set of assumptions; (iii) is a total mapping from $\mathcal { A }$ to $\mathcal { L }$ where $a$ is referred to a contrary of $a$ ( $a \in { \mathcal { A } }$ ). In this paper, we consider flat ABA frameworks, where assumptions are not heads of rules. Also, we assume that the elements of $\mathcal { L }$ are atoms and, for the sake of simplicity, we omit the language as it can be derived from $\langle \mathcal { R } , \mathcal { A } , \quad \rangle$ . We illustrate with a simple example, where, as in (Dung et al., 2009), we use schemata to write rules, assumptions and contraries, using the variable A.
Example 1 A simple ABA framework for image classification is $\langle \mathcal { R } , \mathcal { A } , \quad \rangle$ , where:
$$
\begin{array} { r l r l r l } { \mathcal { R } = \left\{ \begin{array} { l l } { \rho _ { 1 } : c i r c l e ( A ) } & { : - A = i m g _ { - } { \cal L } , } & { \rho _ { 2 } : c i r c l e ( A ) } & { : - \ A = \ i m g _ { - } { \cal 2 } , } \\ & { \rho _ { 3 } : s q u a r e ( A ) } & { : - \ A = \ i m g _ { - } { \cal 2 } , } \end{array} \right. } & & { \rho _ { 4 } : c _ { - } { \cal L } ( A ) } & { : - \ c i r c l e ( A ) \ , \ a l p h a ( A ) = 0 , } \\ & { \rho _ { 5 } : c _ { - } a l p h a ( A ) } & { : - \ s q u a r e ( A ) } & { \} } \\ & { A = \left\{ a l p h a ( A ) \right\} } & { \overline { { a l p h a ( A ) } } = c _ { - } a l p h a ( A ) } \end{array}
$$
Here, $\mathcal { R }$ is a set of rules, each with a name $\rho _ { i }$ , a head following : and a body following :-, $\mathcal { A }$ is a set of assumptions, in this case consisting of a single assumption, with each assumption equipped with a contrary, in this case the contrary of alpha(X) is $\overline { { a l p h a ( X ) } }$ . The intuirive meaning of the ABA framework is as follows: images img 1 and img 2 contain a circle $( \rho _ { 1 } , \rho _ { 2 } )$ , image img 2 contains a square $\left( \rho _ { 3 } \right)$ , and image A belongs to concept c 1, if it contains a circle, unless it also contains a square $( \rho _ { 4 } , \rho _ { 5 } )$ .
We define a fact as a rule with distinct variables in the head and only equalities in the body. To decide which conclusions may be drawn from an ABA framework, arguments and attacks between them are first obtained, then acceptance of arguments is determined using an extension-based semantics, in our case of stable extensions (Dung et al., 2009). An argument for the claim $c \in { \mathcal { L } }$ supported by $A \subseteq A$ and $R \subseteq \mathcal { R }$ (denoted as $\textit { A } \vdash _ { R } \textit { c }$ ) is a finite tree with nodes labelled by sentences in $\mathcal { L }$ or by $\tau$ denoting true, the roots labelled by $c$ , the leaves either true or assumptions in $A$ , and non-leaves $c ^ { \prime }$ with, as children, the elements of the body of some rule in $R$ with the head $c ^ { \prime }$ . An argument $A \vdash _ { R }$ c attacks an argument $A ^ { \prime } \vdash _ { R ^ { \prime } } c ^ { \prime }$ iff there is an assumption $a \in A ^ { \prime }$ such that $\overline { { a } } = c$ . A set of arguments $E$ is stable iff the set is conflict-free (i.e. no argument in $E$ attacks an an argument also in $E$ ) and for every argument not in $E$ there is an argument in $E$ attacking it. We illustrate these notions with the earlier example.
Example 2 The following arguments can be obtained (amongst others) from the ABA framework in example 1:
$$
\begin{array} { r l r } & { \{ a l p h a ( i m g _ { - } { \it 1 } ) \} \vdash _ { \{ \rho _ { 1 } , \rho _ { 4 } \} } c _ { - } I ( i m g _ { - } { \it 1 } ) , } & { \{ a l p h a ( i m g _ { - } { \it 2 } ) \} \vdash _ { \{ \rho _ { 2 } , \rho _ { 4 } \} } c _ { - } I ( i m g _ { - } { \it 2 } ) , } \\ & { \{ \} \vdash _ { \{ \rho _ { 2 } \} } c i r c l e ( i m g _ { - } { \it 2 } ) , } & { \{ \} \vdash _ { \{ \rho _ { 3 } , \rho _ { 5 } \} } c _ { - } a l p h a ( i m g _ { - } { \it 2 } ) . } \end{array}
$$
Intuitively, each argument is a deduction from (possibly empty) sets of assumptions (the premises) to claims (e.g. c 1(img 1) for the first argument), using sets of rules. Attacks between arguments result from undercutting assumptions in the premises of arguments. Here, the fourth argument attacks the second, as the former is a deduction of the contrary of the assumption occurring in the premise of the latter. The third and fourth arguments belong to the single stable extension admitted by this simple ABA framework, as they cannot be attacked by any other arguments.
Learning ABA Frameworks We use the ASP-ABALearn method by De Angelis et al. (2023, 2024). This takes in input a Background ABA framework (admitting at least one stable extension), sets ${ \mathcal { E } } ^ { + }$ and ${ { \mathcal { E } } ^ { - } }$ of positive and negative examples (i.e., atoms obtained from labelled images), respectively, and returns in output a Learnt ABA framework (admitting at least one stable extension) such that all positive examples are accepted in all stable extensions and no negative example is accepted in all the stable extensions. Computationally, ASP-ABALearn leverages the fact that flat ABA frameworks (where assumptions cannot be claims of arguments supported by other assumptions) can be mapped to logic programs. This is done by replacing each assumption $\alpha ( X )$ with not $p ( X )$ where ${ \overline { { \alpha ( X ) } } } = p ( X )$ .
# 4. OC-NAL Architecture
We will now explain the OC-NAL architecture shown in Figure 1, by detailing the neural and symbolic components and their training, as well as inference post-learning.
Inputs The OC-NAL architecture accepts a dataset $D \subseteq X \times Y \times L$ of labelled images, where $X$ is the given set of images, $L = \{ c _ { 1 } , c _ { 2 } \}$ is a set of classes, and $\begin{array} { r } { Y = \{ 0 , 1 \} ^ { K \times ( P + 1 ) } } \end{array}$ , for $K$ the total number of objects/slots that may occur in images, and $P$ the total number of properties that each of these objects may have (we consider an extra property for characterising the absence of objects). As a simple example, for images such as the one in Figure $^ 1$ , $K = 1 0$ (as there are a maximum of 9 objects in each such image plus the background) and $P = 8$ (3 for the shapes, 3 for the colours, 2 for sizes). We assume that $D$ is not noisy. $L$ is a set of two alternative classes. $Y$ is a metadata consisting of one-hot encodings, each representing all objects and their corresponding properties in an image. Note that the neural component of the architecture disregards the labels in $L$ , focusing instead on the images in $X$ in a weakly supervised manner, while the symbolic component disregards the image itself, using instead its abstraction drawn from the neural component.
Neural Component This uses a Convolutional Neural Network (CNN), slot attention and a set of multi-layer perceptrons (MLP) to convert a given input image $x$ into facts for the symbolic component, amounting specifically to the Background Knowledge for ABA Learning or, during inference, facts to be added to the generated ABA framework.
First, the input image is converted into features $z$ using a CNN, which is further used by the slot attention model trained using the process described in Section 3 to produce slots $\hat { z }$ .
Then, each slot $\hat { z }$ is passed through the MLPs to extract the properties of each object. To identify both continuous and categorical properties, we use MLPs of two types: classification MLPs, which use a softmax activation in the final layer to predict the most likely attribute for a given slot and regression MLPs, to predict the location of objects and determine whether each given slot has attended to a real object in the image. The results of these predictions are then concatenated to form the final prediction $\hat { y }$ for the input image, with the corresponding ground truth $y \in Y$ . This component is trained with weak-supervision and is optimised by minimising the loss function:
$$
\mathrm { M S E } ( \boldsymbol { x } , \hat { \boldsymbol { x } } ) + \alpha \operatorname* { m i n } _ { \tau \in S _ { K } } \sum _ { j = 1 } ^ { P } \mathrm { B C E } ( y _ { j } , \tau ( \boldsymbol { \hat { y } } ) _ { j } )
$$
which encapsulates both training objectives. The first is the mean square error (MSE) between the input image $x$ and the (reconstructed) image $\hat { x }$ , ensuring the reconstruction quality (from the slots) of the model. The second is the binary cross entropy (BCE) between the ground truth label $y$ and the predicted label $\hat { y }$ , where $y _ { j }$ corresponds to a particular property in vector $y$ and $\tau ( \hat { y } )$ is the prediction for a permutation of the $K$ objects, drawn from $S _ { K }$ , which denotes the set of all such permutations of $K$ objects. Intuitively, we compare each of the permutations with the ground-truth representation (with a specific order). Given the equivariance property of slot attention, we need to first align the slots before applying the BCE loss. To circumvent this, we used the Hungarian matching algorithm (Kuhn, 2010). Finally we balance both loss terms with the hyperparameter $\alpha$ .
Symbolic Component This receives the output predictions from the neural component and transforms them into facts for the Background ABA Framework taken in input by the ASP-ABALearn algorithm. It also uses the input labels in $L$ to obtain appropriate positive and negative examples in $( \mathcal { E } ^ { + } , \mathcal { E } ^ { - } )$ . This is accomplished by aggregating the slots and performing K-means clustering. The number of clusters corresponds to the desired number of examples, and an image is chosen from each cluster as a representative positive or negative example. We also check the confidence of each prediction and prune off any image below a certain threshold.
The slot predictions are then passed to concept embedding functions which use a dictionary (for the $K$ objects and the $P$ properties) to convert the raw predictions to ABA facts. For each image and object, an identifier is given in the form of an atom image(img i) and a constant object i respectively. Then, for each slot prediction, we take the argmax to identify the properties in the dictionary which are attributed to the object. We encode this as a fact, e.g. blue(object i).
Once all images and objects are encoded into the Background ABA Framework, we generate the ASP-ABALearn command aba asp(‘filename.aba’, e pos,e neg). This specifies which images are positive/negative, using e pos/e neg as the encodings of ${ { \mathcal E } ^ { + } } / { { \mathcal E } ^ { - } }$ , as discussed earlier. The ABA-ASPLearn algorithm is then run to produce a Learnt ABA Framework.
Inference At inference time, we run a slightly different pipeline to obtain a final classification for each unseen input image. Specifically:
1. We pass the image through the neural component to obtain predictions of the objects and their properties therein. These are subsequently converted into facts as during training.
2. We then create an ABA Framework which contains these facts, the rules learnt via ASP-ABALearn and any extra background knowledge.
3. The stable extensions of this ABA framework are then computed (using a straightforward mapping into ASP, and then using an ASP solver. specifically we use Clingo (Gebser et al., 2019)).
4. Depending on the ABA framework, we may obtain more than one stable extension. The prediction boils down to checking whether the atom sanctioning that the input image belongs to concept $c _ { 1 }$ is a member of all the stable extensions (i.e., it is a cautious consequence of the Learnt ABA framework).
# 5. Experimental Evaluation
We conducted experiments on the OC-NAL architecture to answer the following questions:
Q1: How well can the Neural Component identify/predict object properties?
Q2: How well can the OC-NAL architecture learn ABA frameworks which describe the latent rules in images so that they be used for classification?
Q3: How well does our OC-NAL architecture scale w.r.t. the number of examples used in the Symbolic Component and the complexity of the latent rules?
Figure 2: ASP rules used to generate the 6 classes for the SHAPES dataset. We generated 3K images for each rule half of which was the negative instance of the rules. We then took 500 positive and negative splits for each rule as testing data. In total, we had 12K images for training and 6K for testing.
Experiments To address these questions, we defined various binary classification tasks using our adaptation of the SHAPES dataset (Andreas, 2017) . This dataset was generated by a tool that processed ASP rules to create images conforming to them. We defined 6 rules, as detailed in Figure 2, to form the dataset. Each binary classification task aimed to distinguish between positive and negative instances for each class. This dataset served as a baseline to evaluate the viability of using argumentation to reason in an object-centric way. We also defined a multi-class classification task on the CLEVR dataset using CLEVRHans3 (Stammer et al., 2021) which splits CLEVR images into 3 classes based on the following concepts c1: Large (Gray) Cube and Large Cylinder, c2: Small metal Cube and Small (metal) Sphere, c3: Large blue Sphere and Small yellow Sphere. The goal of this task was for OC-NAL to generate Learnt ABA frameworks that could differentiate these classes.
Setup We trained the OC-NAL architecture in two stages as described in Section 4. The Neural Component was trained using the full datasets for 1000 epochs with hyperparameter $\alpha = 0 . 3 5$ . The Symbolic Component used 10 positive and 10 negative examples from the datasets to obtain the Learnt ABA framework. Regarding the CLEVR-Hans3 task (which was multi-classed), we executed the Symbolic Component twice, the first run to distinguish c3 images from c1 and c2 and the second run to distinguish between c1 and c2, thus producing two frameworks. We compare the results with a ResNet (He et al., 2016) and the NS-CL.
[Q1] Object Prediction Figure 3 shows that the Neural Component effectively segmented images into their constituent objects and accurately predicted each object’s property. We evaluated its localisation and segmentation capabilities using the Adjusted Rand Index, which measures similarity between clusters (with clusters representing objects and data points representing pixels), and the Average Precision metric. The scores ranged from 0.80 to 0.95, indicating consistent performance regardless of the number of objects present in each dataset.
We also observed high scores in standard metrics, indicating that the MLPs accurately predicted each object’s properties. The F1 score exceeded 70%, suggesting that the model effectively minimises false positives and negatives, ensuring the symbolic component contains facts that accurately represent the image. Results on the CLEVR dataset were slightly worse than those on the SHAPES dataset, likely due to CLEVR objects having a larger number of properties. This suggests that the number of properties can affect prediction performance.
Figure 3: (Left) Standard machine learning evaluation metrics of attribute classification on datasets SHAPES and CLEVR (Center) Mean Adjusted Rand Index on datasets (Right) Average Precision on datasets.
[Q2] Classification From Table 1 we observe that the Learnt ABA frameworks performed well on the binary classification tasks for SHAPES, achieving near-perfect scores on most tasks. Among the metrics, we saw that recall was consistently the lowest. This is likely due to errors propagated from the Neural Component, which resulted in facts that did not accurately represent the images, causing some instances to be misclassified. However, we saw the opposite when predicting classes s4 and s5, with significantly lower precision. We believe that this is due to the fact that in those tasks ASP-ABALearn learnt rules capturing (some of) the exceptions rather than the full concepts. For example, s1 images were defined in the dataset (see Figure 2) as those with blue squares. However, the learnt framework defined s 1 as images with squares that are not red and not green (see c alpha2 in Figure 4). Despite being semantically equivalent, this reasoning made interpretation difficult and impacted the generation of more complex rules. Consequently, the F1 scores for these tasks dropped significantly due to the inability to capture all rule exceptions, leading to many false positives and thus lower precision. We encountered a similar outcome with the learnt ABA frameworks generated for the CLEVR-Han3 tasks. These only partially captured the rules defining each class. For instance, the framework for c1 stated that these images contain cubes that are not small. However, this rule failed to distinguish some c2 images, which could also contain cubes that are not small. As a result, many c2 images were incorrectly classified as c1, as shown in the confusion matrix (see Figure 7).
Despite this issue, the learnt ABA frameworks were able to capture most of the facts for classifying c3 images, leading to an F1 score of 0.68 (see Table 2). This outperformed the ResNet baseline, though it was worse than NS-CL, which scored above $0 . 8 0 \%$ . The superior performance of NS-CL could be attributed to its use of a set transformer for classification, rather than relying solely on symbolic reasoning.
[Q3] Scalability During our experiment, we found that the symbolic component, specifically ASP-ABALearn, faced some scalability issues as the execution time grew significantly as we increased the number of examples. This could be due to a larger search space gen% Learnt Rules s 1(A) :- in(A,B), square(B), alpha 2(B,A). c alpha 2(A,B) :- image(B), red(A). c alpha 2(A,B) :- image(B), green(A).
Figure 4: Rules in the ABA framework generated by our OC-NAL architecture for SHAPES (class s1). Here, alpha 2 is an assumption, with $\overline { { \tt a l p h a \mathrm { . } 2 ( A , B ) } } ;$ =c alpha 2(A,B).
% Learnt Rules c 1(A) :- in(A,B), cube(B), alpha 2(B,A). c alpha 2(A,B) :- small(A), image(B).
Figure 5: Rules in the ABA framework generated by our OC-NAL architecture for SHAPES (class s5). Here, alpha 2 is an assumption, with alpha 2(A, B)=c alpha 2(A,B).
% Learnt Rules c 3(A) :- in(A,B), sphere(B), alpha 2(B,A).
c alpha 2(A,B) :- brown(A), image(B).
c alpha 2(A,B) :- green(A), image(B).
c alpha 2(A,B) :- cyan(A), image(B).
c alpha 2(A,B) :- red(A), image(B).
c alpha 2(A,B) :- large(A), image(B).
c alpha 2(A,B) :- blue(A), image(B).
c alpha 2(A,B) :- gray(A), image(B).
Figure 6: Rules in the ABA framework generated by our OC-NAL architecture for CLEVR, differentiating class c3 from c1 and c2. Here, alpha 2 is an assumption, with alpha 2(A, B)=c alpha 2(A,B).
Table 1: Standard evaluation metrics denoting how well OC-NAL can distinguish between positive and negative instances of rules present in SHAPES images
Table 2: Standard evaluation metrics for the performance of OC-NAL, ResNet, and NS-CL on the CLEVR-Hans3 task.
Figure 7: Confusion Matrix of OC-NAL on the CLEVR-Hans3 task.
erated when looking for ABA frameworks whose extensions cover all positive examples and none of the negative examples. It may happen that this search is unsuccessful and ASP-ABALearn halts with failure.
Our experiments also showed that the non-determinism present when generalising rules led to variability in both the quality of the results and the system’s execution time. This variability was amplified by the size of the Background ABA framework. Additionally, we observed that longer execution task times negatively impacted the quality of the results, as heavily nested rules caused the frameworks to overfit to the example set. | Over the last decade, as we rely more on deep learning technologies to make
critical decisions, concerns regarding their safety, reliability and
interpretability have emerged. We introduce a novel Neural Argumentative
Learning (NAL) architecture that integrates Assumption-Based Argumentation
(ABA) with deep learning for image analysis. Our architecture consists of
neural and symbolic components. The former segments and encodes images into
facts using object-centric learning, while the latter applies ABA learning to
develop ABA frameworks enabling predictions with images. Experiments on
synthetic data show that the NAL architecture can be competitive with a
state-of-the-art alternative. | [
"cs.LG",
"cs.AI"
] |
# 1. Introduction
Causal inference often involves estimating the average treatment effect (ATE), which represents the causal impact of an exposure on an outcome. Under controlled study setups of randomized controlled trials (RCTs), valid inference methods for ATE estimation are well established (Deaton & Cartwright, 2018). However, RCT data is usually scarce and in some cases even impossible to obtain, either due to ethical or economic reasons. This often implies relying on observational data, typically subject to (unmeasured) confounding—(hidden) factors that affect both the exposure and the outcome. To overcome this issue of confounding and to obtain unbiased estimates, several inferential methods have been developed to properly adjust the ATE estimation for confounders. One approach that has garnered significant attention in recent years is the debiased/double machine learning (DML) framework (Chernozhukov et al., 2017; 2018), which allows the incorporation of machine learning methods to adjust for non-linear or complex confounding effects in the ATE estimation. DML is usually applied in the context of tabular features and was introduced for ML methods tailored to such features. However, confounding information might only be present in non-tabular data, such as images or text.
Non-tabular Data as Sources of Confounding Especially in medical domains, imaging is a key component of the diagnostic process. Frequently, CT scans or X-rays are the basis to infer a diagnosis and a suitable treatment for a patient. However, as the information in such medical images often also affects the outcome of the therapy, the information in the image acts as a confounder. Similarly, treatment and health outcomes are often both related to a patient’s files, which are typically in text form. Consequently, ATE estimation based on such observational data will likely be biased if the confounder is not adequately accounted for. Typical examples would be the severity of a disease or fracture. The extent of a fracture impacts the likelihood of surgical or conservative therapy, and the severity of a disease may impact the decision for palliative or chemotherapy. In both cases, the severity will likely also impact the outcome of interest, e.g., the patient’s recovery rate. Another famous example is the Simpson’s Paradox observed in the kidney stone treatment study of Charig et al. (1986). The size of the stone (information inferred from imaging) impacts both the treatment decision and the outcome, which leads to flawed conclusions about the effectiveness of the treatment if confounding is not accounted for (Julious & Mullee, 1994).
Contemporary Applications While the DML framework provides a solution for non-linear confounding, previous examples demonstrate that modern data applications require extending ATE estimation to incorporate non-tabular data. In contrast to traditional statistical methods and classical machine learning approaches, information in non-tabular data usually requires additional feature extraction mechanisms to condense high-dimensional inputs to the relevant information in the data. This is usually done by employing neural network-based approaches such as foundation models or other pre-trained neural networks. While it may seem straightforward to use such feature extractors to extract latent features from non-tabular data and use the resulting information in classical DML approaches, we show that this necessitates special caution. In particular, incorporating such features into ATE estimation requires overcoming previously unaddressed theoretical and practical challenges, including non-identifiability, high dimensionality, and the resulting limitations of standard assumptions like sparsity.
Figure 1. Schematic (left) and DAG visualization (right) of the effect of a treatment $T$ on outcome $Y$ that is confounded by nontabular data $W$ (e.g. information from medical imaging).
Problem Setup Given $n$ independent and identically distributed (i.i.d.) observations of $( T , W , Y )$ , we are interested in estimating the ATE of a binary variable $T \in \{ 0 , 1 \}$ on some outcome of interest $Y \in \mathbb { R }$ while adjusting for some source of confounding $W \in \mathbb { W }$ (cf. Figure 1). $W$ is pretreatment data from some potentially complex sampling space $\mathbb { W }$ that is assumed to be sufficient for adjustment. The definition of sufficiency will be formalized in Section 3.1. Under positivity and consistency assumption—the standard assumptions in causality—the target parameter of interest can be identified as
$$
\mathrm { A T E } : = \mathbb { E } [ \mathbb { E } [ Y | T = 1 , W ] - \mathbb { E } [ Y | T = 0 , W ] ] .
$$
While there are many well-known ATE estimators, most require to estimate either the outcome regression function
$$
g ( t , w ) : = \mathbb { E } [ Y | T = t , W = w ]
$$
or the propensity score
$$
m ( t | w ) : = \mathbb { P } [ T = t | W = w ]
$$
at parametric rate $\sqrt { n }$ . Doubly robust estimators such as the Augmented Inverse Probability Weighted, the Targeted
Maximum Likelihood Estimation or the DML approach estimate both nuisance functions $g$ and $m$ . These methods thus only require the product of their estimation errors to converge at $\sqrt { n }$ -rate (Robins & Rotnitzky, 1995; van der Laan & Rubin, 2006; van der Laan & Rose, 2011; Chernozhukov et al., 2017; 2018). However, even this can be hard to achieve, given the curse of dimensionality when considering the high-dimensionality of non-tabular data $W$ such as images. Especially given the often limited number of samples available in many medical studies involving images, estimating $m$ and $g$ as a function of $W$ , e.g., via neural networks, might not be feasible or overfit easily. To cope with such issues, a common approach is to adopt ideas from transfer learning and use pre-trained neural networks.
Our Contributions In this paper, we discuss under what conditions pre-trained representations $Z : = \varphi ( W )$ obtained from pre-trained neural networks $\varphi$ can replace $W$ in the estimation of nuisance functions (2) and (3). Although the dimensionality of $Z$ is usually drastically reduced compared to $W$ , one major obstacle from a theoretical point of view is that representations can only be learned up to invertible linear transformations (e.g., rotations). We argue that common assumptions allowing fast convergence rates, e.g., sparsity or additivity of the nuisance function, are no longer reasonable in such settings. In contrast, we build on the idea of low intrinsic dimensionality of the pre-trained representations. Combining invariance of intrinsic dimensions and functional smoothness with structural sparsity, we establish conditions that allow for sufficiently fast convergence rates of nuisance function estimation and, thus, valid ATE estimation and inference. Our work, therefore, not only advances the theoretical understanding of causal inference in this context but also provides practical insights for integrating modern machine learning tools into ATE estimation.
# 2. Related Work
The DML framework was initially proposed for tabular features in combination with classical machine learning methods (Chernozhukov et al., 2017; 2018). Several theoretical and practical extensions to incorporate neural networks have been made with a focus on tabular data (Shi et al., 2019; Farrell et al., 2021; Chernozhukov et al., 2022; Zhang & Bradic, 2024). Additionally, there is a growing body of research that aims to incorporate non-tabular data as adjustment into DML (Veitch et al., 2019; 2020; Klaassen et al., 2024). While the latter directly incorporates the non-tabular data in the estimation, none of them discuss conditions that would theoretically justify fast convergence rates necessary for valid inference. A different strand of research instead uses either derived predictions (Zhang et al., 2023; Battaglia et al., 2024; Jerzak et al., 2022; 2023a;b) or proxy variables (Kuroki & Pearl, 2014; Kallus et al., 2018; Miao et al., 2018;
Mastouri et al., 2021; Dhawan et al., 2024) in downstream estimation. In contrast to these proposals, we consider the particularly broad setup of using pre-trained representations for confounding adjustment. Given the increasing popularity of pre-trained models, Dai et al. (2022) and Christgau & Hansen (2024) establish theoretical conditions justifying the use of derived representations in downstream tasks, which we will review in the next section. The idea of a low intrinsic dimensionality of non-tabular data and its latent representations to explain the superior performance of deep neural networks in non-tabular data domains has been explored and validated both empirically (Gong et al., 2019; Ansuini et al., 2019; Pope et al., 2021; Konz & Mazurowski, 2024) and theoretically (Chen et al., 2019; Schmidt-Hieber, 2019; Nakada & Imaizumi, 2020). By connecting several of those theoretical ideas and empirical findings, our work establishes a set of novel theoretical results and conditions that allow to obtain valid inference when using pre-trained representations in adjustment for confounding.
# 3. Properties of Pre-Trained Representations
Given the high dimensional nature of non-tabular data, together with the often limited number of samples available (especially in medical domains), training feature extractors such as deep neural networks from scratch is often infeasible. This makes the use of latent features from pre-trained neural networks a popular alternative (Erhan et al., 2010). In order to use pre-trained representations for adjustment in the considered ATE setup, certain conditions regarding the representations are required.
# 3.1. Sufficiency of Pre-Trained Representations
Given any pre-trained model $\varphi$ , trained independently of $W$ on another dataset, we denote the learned (last-layer) representations as $Z : = \varphi ( W )$ . Due to the non-identifiability of $Z$ up to certain orthogonal transformations, further discussed in Section 3.2, we define the following conditions for the induced equivalence class of representations $\mathcal { Z }$ following Christgau & Hansen (2024). For this, we abstract the adjustment as conditioning on information in the ATE estimation, namely conditioning on the uniquely identifiable information contained in the sigma-algebra $\sigma ( Z )$ generated by any $Z \in { \mathcal { Z } }$ (see also Appendix A.1 for a special case).
Definition 3.1. [Christgau & Hansen (2024)] Given the joint distribution $P$ of $( T , W , Y )$ , sigma-algebra $\sigma ( Z )$ of $Z$ , and $t \in \{ 0 , 1 \}$ , we say that any $Z \in { \mathcal { Z } }$ is
(i) $P$ -valid if:
$$
\mathbb { E } _ { P } [ \mathbb { E } _ { P } [ Y | T = t , \sigma ( Z ) ] ] = \mathbb { E } _ { P } [ \mathbb { E } _ { P } [ Y | T = t , W ] ]
$$
(ii) $P$ -OMS (Outcome Mean Sufficient) if:
$$
\begin{array} { r } { \mathbb { E } _ { P } [ Y | T = t , \sigma ( Z ) ] = \mathbb { E } _ { P } [ Y | T = t , W ] \quad ( P \mathrm { - a . s . } ) } \end{array}
$$
(iii) $P$ -ODS (Outcome Distribution Sufficient) if:
$$
Y \perp _ { P } W | T , Z .
$$
Remark 3.2. If $Z \in { \mathcal { Z } }$ is $P$ -ODS, it is also called a sufficient embedding in the literature (Dai et al., 2022).
The three conditions in Definition 3.1 place different restrictions on the nuisance functions (2) and (3). While $P$ -ODS is most restrictive (followed by $P$ -OMS) and thus guarantees valid downstream inference more generally, the strictly weaker condition of $P$ -validity is already sufficient (and in fact necessary) to guarantee that $Z \in { \mathcal { Z } }$ is a valid adjustment set in the ATE estimation (Christgau & Hansen, 2024). Thus, any pre-trained representation $Z$ considered in the following is assumed to be at least $P$ -valid.
Figure 2. Schematic visualization of a pre-trained neural network $\varphi ( \cdot )$ and representations $Z = \varphi ( W )$ .
# 3.2. Non-Identifiability under ILTs
In practice, the representation $Z = \varphi ( W )$ is extracted from some layer of a pre-trained neural network $\varphi$ . This information does not change under bijective transformations of $Z$ , so the representation $Z$ itself is not identifiable. We argue that, in this context, non-identifiability with respect to invertible linear transformations (ILTs) is most important. Suppose $Z = \varphi ( W )$ is extracted from a deep network’s ℓth layer. During pre-training the network further processes $Z$ through a model head $\phi ( Z )$ , as schematically depicted in Figure 2. The model head usually has the form $\phi ^ { > \ell } ( A Z + b )$ where $A , b$ are the weights and biases of the ℓth layer, and $\phi ^ { > \ell }$ summarizes all following computations. Due to this structure, any bijective linear transformation $Z \mapsto Q Z$ can be reversed by the weights $A \mapsto \tilde { A } = A Q ^ { - 1 }$ so that the networks $\phi ^ { > \ell } ( A \cdot + b )$ and $\phi ^ { > \ell } ( \tilde { A } Q \cdot + b )$ have the same output.
Definition 3.3 (Invariance to ILTs). Given a latent representation $Z$ , we say that a model (head) $\phi _ { \xi }$ with parameters $\xi \in \Xi$ is non-identifiable up to invertible linear transformations if for any invertible matrix $Q \in \mathbb { R } ^ { d \times d }$ $\exists \tilde { \xi } \in \Xi : \phi _ { \xi } ( Q Z ) = \phi _ { \tilde { \xi } } ( Z )$ .
Important examples of ILTs are rotations, permutations, and scalings of the feature space as well as compositions thereof.
Adjustment for Confounding using Pre-Trained Representations
Table 1. Assumptions and related minimax convergence rates of the estimation error
# 4. Estimation using Pre-Trained Representations
The previous section discussed sufficient and necessary (information theoretic) conditions for pre-trained representations, justifying their usage for adjustment in downstream tasks. The following section will discuss aspects of the functional estimation in such adjustments. Valid statistical inference in downstream tasks usually requires fast convergence of nuisance function estimators. However, obtaining fast convergence rates in high-dimensional estimation problems is particularly difficult. We argue that some commonly made assumptions are unreasonable due to the non-identifiability of representations. We discuss this in the general setting of nonparametric estimation as described in the following.
The Curse of Dimensionality The general problem in nonparametric regression is to estimate some function $f$ in the regression model
$$
\boldsymbol { Y } = f ( \boldsymbol { X } ) + \epsilon
$$
with outcome $Y ~ \in ~ \mathbb { R }$ , features $X ~ \in ~ \mathbb { R } ^ { d }$ , and error $\epsilon \sim \mathcal { N } ( 0 , \sigma ^ { 2 } )$ . The minimax rate for estimating Lipschitz functions is known to be $n ^ { - \frac { 1 } { 2 + d } }$ (Stone, 1982). This rate becomes very slow for increasing $d$ , known as the curse of dimensionality. Several additional structural and distributional assumptions are commonly encountered to obtain faster convergence rates in high dimensions.
# 4.1. Structural Assumption I: Smoothness
A common structural assumption is the smoothness of the function $f$ in (4), i.e., the existence of $s$ bounded and continuous derivatives. Most convergence rate results assume at least some level of smoothness (see Table 1). The following lemma verifies that this condition is also preserved under ILTs.
Lemma 4.1 (Smoothness Invariance under ILTs). Let $D \subseteq$ $\mathbb { R } ^ { d }$ be an open set, $f : D \to \mathbb { R }$ be an s-smooth-function on $D$ , and $Q$ by any ILT. Then $h = f \circ Q ^ { - 1 } \colon Q ( D ) \to \mathbb { R }$ is also $s$ -smooth on the transformed domain $Q ( D )$ .
The proof of Lemma 4.1 and subsequent lemmas of this section are given in Appendix A.
The lemma shows that a certain level of smoothness of a function defined on latent representations may reasonably be assumed due to its invariance to ILTs. If the feature dimension is large, however, an unrealistic amount of smoothness would be required to guarantee fast convergence rates (e.g., of order $n ^ { - \bar { 1 } / 4 }$ ). This necessitates additional structural or distributional assumptions.
# 4.2. Structural Assumptions II: Additivity & Sparsity
The common structural assumption is that $f$ is additive, $\begin{array} { r } { f ( \boldsymbol { x } ) = \sum _ { j = 1 } ^ { d } f _ { j } ( x _ { j } ) } \end{array}$ , i.e., the sum of univariate $s$ -smooth functions. In this case, the minimax convergence rate reduces to $n ^ { - \frac { s } { 2 s + 1 } }$ (Stone, 1985). Another common approach is to rely on the idea of sparsity. Assuming that $f$ is $p$ -sparse implies that it only depends on $p \ < \ \operatorname* { m i n } ( n , d )$ features. In case one further assumes the univariate functions to be linear in each feature, i.e. $\textstyle f ( x ) = \sum _ { j = 1 } ^ { p } \beta _ { j } x _ { j }$ with coefficient $\beta _ { j } \in \mathbb { R }$ , the optimal convergence rate reduces to $\sqrt { p \log ( d / p ) / n }$ (Raskutti et al., 2009).
It can easily be shown that the previously discussed conditions are both preserved under permutation and scaling. But as the following lemma shows, sparsity and additivity of $f$ are (almost surely) not preserved under generic ILTs such as rotations.
Lemma 4.2 (Non-Invariance of Additivity and Sparsity under ILTs). Let $f : \mathbb { R } ^ { d } \mathbb { R }$ be a function of $\boldsymbol { x } \in \mathbb { R } ^ { d }$ . We distinguish between two cases:
(i) Additive: $\textstyle f ( x ) = \sum _ { j = 1 } ^ { d } f _ { j } ( x _ { j } )$ , with univariate functions $f _ { j } : \mathbb { R } \to \mathbb { R } ,$ and at least one $f _ { j }$ being non-linear.
(ii) Sparse Linear: $\begin{array} { r } { f ( \boldsymbol { x } ) = \sum _ { j = 1 } ^ { d } \beta _ { j } x _ { j } } \end{array}$ , where $\beta _ { j } \in \mathbb { R }$ and at least one (but not all) $\beta _ { j } = 0$ .
Then, for almost every $Q$ drawn from the Haar measure on the set of ILTs, it holds:
(i) If $f$ is additive, then $h = f \circ Q ^ { - 1 }$ is not additive.
(ii) If $f$ is sparse linear, then $h = f \circ Q ^ { - 1 }$ is not sparse.
Given the non-identifiability of representations with respect to ILTs and the non-invariance result of Lemma 4.2, any additivity or sparsity assumption about the target function $f$ of the latent features seems unjustified. An example of this rotational non-invariance of sparsity is given in Figure 3. This also implies that learners such as the lasso (with underlying sparsity assumption), tree-based methods that are based on axis-aligned splits (including corresponding boosting methods), and most feature selection algorithms are not ILT-invariant. Further examples can be found in $\mathrm { N g }$ (2004).
Figure 3. Non-zero coefficients of a linear classifier on latent features, showing that sparsity is lost with an increasing number of random feature rotations.
# 4.3. Distributional Assumption: Intrinsic Dimension
While the previous conditions are structural assumptions regarding the function $f$ itself, faster convergence rates can also be achieved by making distribution assumptions about the support of $f$ . A popular belief is that the $d$ -dimensional data $X \in \mathbb { R } ^ { d }$ lie on or close to a low-dimensional manifold $\mathcal { M }$ with intrinsic dimension $d _ { \mathcal { M } }$ . This relates to the famous manifold hypothesis that many high-dimensional data concentrate on low-dimensional manifolds (Fefferman et al., 2016, e.g.,). There is strong empirical support for this assumption, especially for non-tabular modalities such as text and images, see Appendix B.1. Given that $d _ { \mathcal { M } } \ll d$ , and again assuming $f$ to be $s$ -smooth, this can lead to a much faster convergence rate of $n ^ { - \frac { s } { 2 s + d _ { \mathcal { M } } } }$ (Bickel & Li, 2007), as it is independent of the dimension $d$ of the ambient space.
Similarly to Lemma 4.1, the following lemma shows the invariance of the intrinsic dimension of a manifold with respect to any ILT of the coordinates in the $d _ { \ l }$ -dimensional ambient space.
Lemma 4.3 (Intrinsic Dimension Invariance under ILTs). Let $\mathcal { M } \subset \mathbb { R } ^ { d }$ be a smooth manifold of dimension $d _ { \mathcal { M } } \leq d$ . For any $I L T Q$ , the transformed set
$$
Q ( \mathcal { M } ) = \{ Q x \mid x \in \mathcal { M } \} .
$$
is also a smooth manifold of dimension $d _ { \mathcal { M } }$ .
Remark 4.4. Put differently, in case the latent representations $Z \in \mathbb { R } ^ { d }$ lie on a $d _ { \mathcal { M } }$ -dimensional smooth manifold $\mathcal { M }$ , then the IL-transformed representations $Q ( Z )$ also lie on a smooth manifold $Q ( \mathcal { M } )$ of dimension $d _ { \mathcal { M } }$ .
Summarizing previous results, the structural and distribution assumptions of smoothness and low intrinsic dimensionality are invariant with respect to any ILT of the features. Hence, as opposed to additivity or sparsity, the two conditions hold not only for a particular instantiation of a latent representation $Z$ but for the entire equivalence class of latent representations induced by the class of ILTs. This is crucial given the non-identifiability of latent representations, highlighting the importance of low intrinsic dimensions (IDs).
Deep Networks Can Adapt to Intrinsic Dimensions Recently, several theoretical works have shown that DNNs can adapt to the low intrinsic dimension of the data and thereby attain the optimal rate of $n ^ { - \frac { s } { 2 s + d \mathcal { M } } }$ (Chen et al., 2019; Schmidt-Hieber, 2019; Nakada & Imaizumi, 2020; Kohler et al., 2023). In Section 5, we present a new convergence rate result that builds on the ideas of low ID and a hierarchical composition of functions particularly suited for DNNs.
# 5. Downstream Inference
The manifold assumption alone, however, cannot guarantee sufficient approximation rates in our setting. Even if the manifold dimension $d _ { \mathcal { M } }$ is much smaller than the ambient dimension $d$ (for example, $d _ { \mathcal { M } } \approx 3 0 _ { \cdot }$ ), an unreasonably high degree of smoothness would need to be assumed to allow for convergence rates below $n ^ { - 1 / 4 }$ . In what follows, we give a more realistic assumption to achieve such rates. In particular, we combine the low-dimensional manifold structure in the feature space with a structural smoothness and sparsity assumption on the target function.
# 5.1. Structural Sparsity on the Manifold
Kohler & Langer (2021) recently derived convergence rates based on the following assumption.
Definition 5.1 (Hierarchical composition model, HCM).
(a) We say that $f \colon { \mathbb { R } ^ { d } } \to { \mathbb { R } }$ satisfies a HCM of level $O$ , if $f ( x ) = x _ { j }$ for some $j \in \{ 1 , \ldots , d \}$ .
(b) We say that $f$ satisfies a HCM of level $k \geq 1$ , if there is a $s$ -smooth function $h \colon { \mathbb { R } } ^ { p } \to { \mathbb { R } }$ such that
$$
f ( x ) = h \bigl ( h _ { 1 } ( x ) , \ldots , h _ { p } ( x ) \bigr ) ,
$$
where $h _ { 1 } , \hdots , h _ { p } : \mathbb { R } ^ { d } \to \mathbb { R }$ are HCMs of level $k - 1$ .
The collection $\mathcal { P }$ of all pairs $( s , p ) \in \mathbb { R } \times \mathbb { N }$ appearing in the specification is called the constraint set of the HCM.
An illustration of Definition 5.1 is given in Appendix B.2. The assumption includes the case of sparse linear and (generalized) additive models as a special case but is much more general. Kohler & Langer (2021) and Schmidt-Hieber (2020) exploit such a structure to show that neural networks can approximate the target function at a rate that is only determined by the worst-case pair $( s , p )$ appearing in the constraint set. It already follows from Lemma 4.2 that the constraint set of such a model is not invariant to ILTs of the input space. Furthermore, the assumption does not exploit the potentially low intrinsic dimensionality of the input space. To overcome these limitations, we propose a new assumption combining the input space’s manifold structure with the hierarchical composition model.
Assumption 5.2. The target function $f _ { 0 }$ can be decomposed as $f _ { 0 } = f \circ \psi$ , where $\mathcal { M }$ is a smooth, compact, $d _ { \mathcal { M } }$ -dimensional manifold, $\psi \colon \mathcal { M } \to \mathbb { R } ^ { p }$ is $s _ { \psi }$ -smooth, and $f$ is a HCM of level $k \in \mathbb { N }$ with constraint set $\mathcal { P }$ .
Whitney’s embedding theorem (e.g., Lee & Lee, 2012, Chapter 6) allows any smooth manifold to be smoothly embedded into $\mathbb { R } ^ { 2 d \mathcal { M } }$ . This corresponds to a mapping $\psi$ with $s _ { \psi } = \infty$ and $p = 2 d _ { \mathcal { M } }$ in the assumption above. If not all information in the pre-trained representation $Z$ is relevant, however, $p$ can be much smaller. Importantly, Assumption 5.2 is not affected by ILTs.
Lemma 5.3 (Invariance of Assumption 5.2 under ILTs). Let $Q$ be any ILT. If $f _ { 0 }$ satisfies Assumption 5.2 for a given $\mathcal { P }$ and $( s _ { \psi } , d _ { \mathcal { M } } )$ , then $\tilde { f } _ { 0 } = f _ { 0 } \circ Q ^ { - 1 }$ satisfies Assumption 5.2 with the same $\mathcal { P }$ and $( s _ { \psi } , d _ { \mathcal { M } } )$ ,
# 5.2. Convergence Rate of DNNs
We now show that DNNs can efficiently exploit this structure. Let $( Y _ { i } , Z _ { i } ) _ { i = 1 } ^ { n }$ be i.i.d. observations and $\ell$ be a loss function. Define
$$
\begin{array} { l } { { \displaystyle f _ { 0 } = \arg \operatorname* { m i n } _ { f \colon \mathbb { R } ^ { d } \mathbb { R } } \mathbb { E } [ \ell ( f ( Z ) , Y ) ] } , \ ~ } \\ { { \displaystyle \hat { f } = \operatorname* { a r g m i n } _ { f \in \mathcal { F } ( L _ { n } , \nu _ { n } ) } \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \ell ( f ( Z _ { i } ) , Y _ { i } ) } , \ ~ } \end{array}
$$
where $\mathcal { F } ( L , \nu )$ is the set of feed-forward neural networks with $L$ layers and $\nu$ neurons per layer. Let $Z \sim P _ { Z }$ and define the L2(PZ)-norm of a function f as ∥f∥2L (P ) = $\textstyle \int f ( z ) ^ { 2 } d P ( z )$ . We make the following assumption on the loss function $\ell$ .
Assumption 5.4. There is $a , b \in ( 0 , \infty )$ such that
$$
\frac { \mathbb { E } [ \ell ( f ( Z ) , Y ) ] - \mathbb { E } [ \ell ( f _ { 0 } ( Z ) , Y ) ] } { \| f - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } ^ { 2 } } \in [ a , b ] .
$$
Assumption 5.4 is satisfied for the squared and logistic loss, among others (e.g., Farrell et al., 2021, Lemma 8).
Theorem 5.5. Suppose Assumption 5.2 and Assumption 5.4 hold. There are sequences $L _ { n } , \nu _ { n }$ and a corresponding sequence of neural network architectures $\mathcal { F } ( L _ { n } , \nu _ { n } )$ such that (up to $\log n$ factors)
$$
\| \hat { f } - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } = O _ { p } \left( \operatorname* { m a x } _ { ( s , p ) \in \mathcal { P } \cup ( s _ { \psi } , d _ { \mathcal { M } } ) } n ^ { - \frac { s } { 2 s + p } } \right) .
$$
The result shows that the convergence rate of the neural networks is only determined by the worst-case pair $( s , p )$ appearing in the constraint set of the HCM and the embedding map $\psi$ . The theorem extends the results of Kohler & Langer (2021) in two ways. First, it allows for more general loss functions than the square loss. This is important since classification methods are often used to adjust for confounding effects. Second, it explicitly exploits the manifold structure of the input space, which may lead to much sparser HCM specifications and dramatically improved rates.
# 5.3. Validity of DML Inference
In the previous sections, we explored plausible conditions under which the ATE is identifiable, and DNNs can estimate the nuisance functions with fast rates. We now combine our findings to give a general result for the validity of DML from pre-trained representations.
For binary treatment $T \in \{ 0 , 1 \}$ and pre-trained representations $Z$ , we define the outcome regression function
$$
g ( t , z ) : = \mathbb { E } [ Y | T = t , Z = z ] ,
$$
and the propensity score
$$
m ( z ) : = \mathbb { P } [ T = 1 | Z = z ] .
$$
Suppose we are given an i.i.d. sample $( Y _ { i } , Z _ { i } , T _ { i } ) _ { i = 1 } ^ { n }$ . DML estimators of the ATE are typically based on a cross-fitting procedure. Specifically, let $\mathsf { U } _ { k = 1 } ^ { K ^ { - } } I _ { k } = \{ 1 , \ldots , n \}$ be a partition of the sample indices such that $| I _ { k } | / n \to 1 / K$ . Let $\hat { g } ^ { ( k ) }$ and $\hat { m } ^ { ( k ) }$ denote estimators of $g$ and $m$ computed only from the samples $( Y _ { i } , Z _ { i } , T _ { i } ) _ { i \notin I _ { k } }$ . Defining
$$
\widehat { \mathrm { A T E } } ^ { ( k ) } = \frac { 1 } { | I _ { k } | } \sum _ { i \in I _ { k } } \rho ( T _ { i } , Y _ { i } , Z _ { i } ; \hat { g } ^ { ( k ) } , \hat { m } ^ { ( k ) } ) ,
$$
with orthogonalized score
$$
\begin{array} { r l } & { \rho ( T _ { i } , Y _ { i } , Z _ { i } ; g , m ) = g ( 1 , Z _ { i } ) - g ( 0 , Z _ { i } ) } \\ & { \quad + \frac { T _ { i } \left( Y _ { i } - g ( 1 , Z _ { i } ) \right) } { m ( Z _ { i } ) } + \frac { \left( 1 - T _ { i } \right) \left( Y _ { i } - g ( 0 , Z _ { i } ) \right) } { 1 - m ( Z _ { i } ) } , } \end{array}
$$
the final DML estimate of ATE is given by
$$
\widehat { \mathrm { A T E } } = \frac { 1 } { K } \sum _ { k = 1 } ^ { K } \widehat { \mathrm { A T E } } ^ { ( k ) } .
$$
We need the following additional conditions.
# Assumption 5.6. It holds
$$
\begin{array} { r l } & { \displaystyle \operatorname* { m a x } _ { t \in \{ 0 , 1 \} } \mathbb { E } [ | g ( t , Z ) | ^ { 5 } ] < \infty , \quad \mathbb { E } [ | Y | ^ { 5 } ] < \infty , } \\ & { \displaystyle \mathbb { E } [ | Y - g ( T , Z ) | ^ { 2 } ] > 0 , \quad \mathrm { P r } ( m ( Z ) \in ( \varepsilon , 1 - \varepsilon ) ) = 1 , } \end{array}
$$
for some $\varepsilon > 0$ .
The first two conditions ensure that the tails of $Y$ and $g ( t , Z )$ are not too heavy. The second two conditions are required for the ATE to be identifiable.
Theorem 5.7. Suppose the pre-trained representation is $P$ - valid, Assumption 5.6 holds, and the outcome regression and propensity score functions $g$ and m satisfy Assumption 5.2 with constraints $\mathcal { P } _ { g } \cup ( s _ { \psi } , d _ { \mathcal { M } } )$ and $\mathcal { P } _ { m } \cup ( s _ { \psi } ^ { \prime } , d _ { \mathcal { M } } )$ , respectively. Suppose further
$$
\operatorname* { m i n } _ { ( s , p ) \in \mathcal { P } _ { g } \cup ( s _ { \psi } , d _ { \mathcal { M } } ) } \frac { s } { p } \times \operatorname* { m i n } _ { ( s ^ { \prime } , p ^ { \prime } ) \in \mathcal { P } _ { m } \cup ( s _ { \psi } ^ { \prime } , d _ { \mathcal { M } } ) } \frac { s ^ { \prime } } { p ^ { \prime } } > \frac { 1 } { 4 } ,
$$
and the estimators $\hat { g } ^ { ( k ) }$ and $\hat { m } ^ { ( k ) }$ are DNNs as specified in Theorem 5.5 with the restriction that $\hat { m } ^ { ( k ) }$ is clipped away from $o$ and $\jmath$ . Then
$$
{ \sqrt { n } } ( { \widehat { \mathrm { A T E } } } - { \mathrm { A T E } } ) \to { \mathcal { N } } ( 0 , \sigma ^ { 2 } ) ,
$$
where $\sigma ^ { 2 } = \mathbb { E } [ \rho ( T _ { i } , Y _ { i } , Z _ { i } ; g , m ) ^ { 2 } ]$ .
Condition (5) is our primary regularity condition, ensuring sufficiently fast convergence for valid DML inference. It characterizes the necessary trade-off between smoothness and dimensionality of the components in the HCM. In particular, it is satisfied when each component function in the model has input dimension less than twice its smoothness.
# 6. Experiments
In the following, we will complement our theoretical results from the previous section with empirical evidence from several experiments. The experiments include both images and text as non-tabular data, which act as the source of confounding in the ATE setting. Further experiments can be found in Appendix D.
# 6.1. Validity of ATE Inference from Pre-Trained Representations
Text Data We utilize the IMDb Movie Reviews dataset from Lhoest et al. (2021) consisting of 50,000 movie reviews labeled for sentiment analysis. The latent features $Z$ as representations of the movie reviews are computed using the last hidden layer of the pre-trained Transformerbased model BERT (Devlin et al., 2019). More specifically, each review results in a 768-dimensional latent variable $Z$ by extracting the [CLS] token that summarizes the entire sequence. For this, each review is tokenized using BERT’s subword tokenizer (bert-base-uncased), truncated to a maximum length of 128 tokens, and padded where necessary.
Image Data We further use the dataset from Kermany et al. (2018) that contains 5,863 chest X-ray images of children. Each image is labeled according to whether the lung disease pneumonia is present or not. The latent features are obtained by passing the images through a pre-trained convolutional neural network and extracting the 1024-dimensional last hidden layer features of the model. We use the pretrained Densenet-121 model from the TorchXRayVision library (Cohen et al., 2022), which was trained on several publicly available chest X-ray datasets (Cohen et al., 2020). Further details on the datasets and pre-trained models used in our experiments are provided in Appendix C.1.
Figure 4. Label Confounding: Comparison of ATE estimators on the IMDb dataset. DML and S-Learner use pre-trained representations. Point estimates and $9 5 \%$ CIs are depicted.
Confounding Setup For both data applications, we simulate treatment and outcome variables while inducing confounding based on the labels. As an example, for the modified image dataset, children with pneumonia have a higher chance of receiving treatment compared to healthy children. In contrast, pneumonia negatively impacts the outcome variable. The same confounding is present in our modified text dataset. Hence, the label creates a negative bias in both ATE settings if not properly accounted for. Further details about the confounding setups are provided in Appendix C.2.
ATE Estimators We compare the performance of DML using three types of nuisance estimators: linear models with and without $L _ { 1 }$ -penalization (Lasso/ Linear), as well as random forest (RF). For comparison, we also include another common causal estimator, called S-Leaner, which only estimates the outcome function (2) (details in Appendix C.3). In each of the simulations, estimators facilitate the information contained in the non-tabular data to adjust for confounding by using the latent features from the pre-trained models in the estimation. As a benchmark, we compare the estimate to the ones of a Naive estimator (unadjusted estimation) and the Oracle estimator (adjusts for the true label).
Label Confounding Results The results for the IMDb experiment over 5 simulations are depicted in Figure 4. As expected, the naive estimator shows a strong negative bias. The same can be observed for the S-Learner (for all nuisance estimators) and for DML using lasso or random forest. In contrast, DML using linear nuisance es
1025257.050505 lEPSCSA timators (without sparsity-inducing penalty) yields unbiased estimates with good coverage, as can be seen by the confidence intervals (CIs). First, these results indicate that DML seems to benefit from the doubly robust estimation. Second, DML fails when using ILT non-invariant nuisance estimators such as lasso or random forest. This is because neither of the two can achieve sufficiently fast convergence rates without structural assumptions, such as sparsity or additivity. The latter being unlikely to hold given that representations were shown to be identifiable only up to ILTs. The results for image-based experiment are given in Appendix D.1, where the same phenomenon can be observed.
Figure 5. Complex Confounding: Comparison of ATE estimators on the X-ray dataset. DML and S-Learner use pre-trained representations. Point estimates and $9 5 \%$ CIs are depicted.
Figure 6. Different Intrinsic Dimension $( I D )$ estimates of pretrained representations obtained from different pre-trained models. Representations are based on the X-Ray dataset.
# 6.2. Neural Networks Adapt to Functions on Low Dimensional Manifolds
In a second line of experiments, we investigate the ability of neural network-based nuisance estimation to adapt to low intrinsic dimensions. The features in our data sets already concentrate on a low-dimensional manifold. For example, Figure 6 shows that the intrinsic dimension of the X-ray images is around $d _ { \mathcal { M } } = 1 2$ , whereas the ambient dimension is $d = 1 0 2 4$ . To simulate complex confounding with structural smoothness and sparsity, we first train an autoencoder (AE) with 5-dimensional latent space on the pre-trained representations. These low-dimensional encodings from the AE are then used to simulate confounding. Due to this construction of confounding, the true nuisance functions correspond to encoder-then-linear functions, which are multi-layered hierarchical compositions and therefore align with Assumption 5.2. We refer to this as complex confounding.
Complex Confounding Results We again compare DML and the S-Learner with different nuisance estimators. In contrast to the previous section, we now use a neural network (with ReLU activation, 100 hidden layers with 50 neurons each) instead of a linear model in the outcome regression nuisance estimation. The results are depicted in Figure 5. Similar to the previous experiments, we find that the naive estimate is strongly biased similar to the random forestbased estimators. In contrast, the neural network-based estimators exhibit much less bias. While the S-Learner’s confidence intervals are too optimistic, the DML estimator shows high coverage and is therefore the only estimator that enables valid inference. The results for the IMDb dataset with complex confounding are given in Appendix D.1.
Low Intrinsic Dimension We also investigate the low intrinsic dimension hypothesis about pre-trained representations. Using different intrinsic dimension (ID) estimators such as the Maximum Likelihood (MLE) (Levina & Bickel, 2004), the Expected Simplex Skewness (ESS), and the local Principal Component Analysis (lPCA) we estimate the ID of different pre-trained representations of the X-ray dataset obtained from different pre-trained models from the TorchXRayVision library (Cohen et al., 2022). The results in Figure 6 indicate that the intrinsic dimension of the pre-trained representations is much smaller than the dimension of the ambient space (1024). A finding that is in line with previous research, which is further discussed in Appendix B.1. Additional information on the experiment and the estimators used can be found in Appendix C.4.
# 6.3. The Power of Pre-Training for Estimation
In another line of experiments, we explore the benefits of pre-training in our setup. In particular, we are investigating whether pre-trained neural feature extractors actually outperform non-pre-trained feature extractors in the nuisance estimation of DML-based ATE estimation. We conduct the experiments in the context of the previously introduced image-based Label Confounding setup. To adjust for confounding in this setup, nuisance estimators must extract the relevant information from the X-rays. For this purpose, we compare DML using pre-trained feature extractors against DML using neural feature extractors that are trained on downstream data from scratch. While the former uses the same pre-trained Densenet-121 model that was used in previous image-based experiments, the latter incorporates Convolutional Neural Networks (CNNs) as nuisance estimators into the DML ATE estimation routine. The following experiment is based on 500 sampled images from the X-Ray dataset, where five-layer CNNs are used in the non-pre-trained DML version. Further details about the training and architecture of the utilized CNNs can be found in Appendix C.5.
Figure 7. Comparison of DML using pre-trained representations “DML (Pre-trained)” and DML without pre-training “DML (CNN)” for ATE estimation. Experiment is based on the $X$ -Ray dataset. Point estimates and $9 5 \%$ CIs are depicted.
The results are depicted in Figure 7. For illustrative purposes, we also show the estimates of the Naive and Oracle estimators, which match those of previous experiments. The key finding of Figure 7 is that DML using pre-trained feature extractors (DML (Pre-trained)) yields unbiased ATE estimates and well-calibrated confidence intervals, while DML without pre-training (DML (CNN)) does not. The same phenomenon can be observed in experiments with varying sample sizes and CNN architecture. These experiments are discussed in Appendix D.2. Overall, the results emphasize the benefits of using DML in combination with pre-trained models when utilizing non-tabular data such as images, for confounding adjustment in ATE estimation.
Further Experiments Further experiments on the asymptotic normality of DML-based ATE estimation as well as the role of the HCM structure of the nuisance functions are given and discussed in Appendix D.3 and D.4.
# 7. Discussion
In this work, we explore ATE estimation under confounding induced by non-tabular data. We investigate conditions under which pre-trained neural representations can effectively be used to adjust for such kind of confounding. While the representations typically have lower dimensionality, their invariance under orthogonal transformations challenges common assumptions to obtain fast nuisance function convergence rates, like sparsity and additivity. Instead, the study leverages the concept of low intrinsic dimensionality, combining it with invariance properties and structural sparsity to establish conditions for fast convergence rates in nuisance estimation. This ensures valid ATE estimation and inference, contributing both theoretical insights and practical guidance for integrating machine learning into causal inference.
Limitations and Future Research In this work, we focus on a single source of confounding from a non-tabular data modality. A potential future research direction is to study the influence of multiple modalities on ATE estimation. In particular, having multiple modalities requires further causal and structural assumptions on the interplay of the modalities. For example, this could mean that each modality is best processed by a separate network or that the confounding information can only be extracted through a joint network that correctly fuses modalities at some point. We note, however, that this is more of a technical aspect and a matter of domain knowledge, and thus being of minor relevance for the discussion and theoretical contributions of our study.
Moreover, we focused on the estimation of the ATE in this paper, given its popularity in both theory and practice. However, our approach could also be extended to cover other target parameters such as the average treatment effect on the treated (ATT) or the conditional ATE (CATE). While each of these would require a dedicated discussion of the necessary assumptions, we believe that many of the core ideas and results presented here—such as the convergence rates for neural network-based estimation—could also be transferred and used in a theoretical investigation in those settings.
# Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
# References
Aghajanyan, A., Zettlemoyer, L., and Gupta, S. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255, 2020.
Ansuini, A., Laio, A., Macke, J. H., and Zoccolan, D. Intrinsic dimension of data representations in deep neural networks. Advances in Neural Information Processing Systems, 32, 2019.
Bach, P., Chernozhukov, V., Kurz, M. S., and Spindler, M. DoubleML-an object-oriented implementation of double machine learning in python. Journal of Machine Learning Research, 23(53):1–6, 2022.
Bartlett, P. L., Harvey, N., Liaw, C., and Mehrabian, A. Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20(63):1–17, 2019.
Battaglia, L., Christensen, T., Hansen, S., and Sacher, S. Inference for regression with variables generated from unstructured data. 2024.
Bickel, P. J. and Li, B. Local polynomial regression on unknown manifolds. Lecture Notes-Monograph Series, pp. 177–186, 2007.
Charig, C. R., Webb, D. R., Payne, S. R., and Wickham, J. E. Comparison of treatment of renal calculi by open surgery, percutaneous nephrolithotomy, and extracorporeal shockwave lithotripsy. British Medical Journal (Clinical research ed.), 292(6524):879–882, 1986.
Chen, H., Harinen, T., Lee, J.-Y., Yung, M., and Zhao, Z. CausalML: Python Package for Causal Machine Learning, 2020.
Chen, M., Jiang, H., Liao, W., and Zhao, T. Efficient approximation of deep relu networks for functions on low dimensional manifolds. Advances in Neural Information Processing Systems, 32, 2019.
Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., and Newey, W. Double/debiased/neyman machine learning of treatment effects. American Economic Review, 107(5):261–265, 2017.
Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1), 2018.
Chernozhukov, V., Newey, W., Quintas-Martınez, V. M., and Syrgkanis, V. Riesznet and forestriesz: Automatic debiased machine learning with neural nets and random forests. In International Conference on Machine Learning, pp. 3901–3914. PMLR, 2022.
Christgau, A. M. and Hansen, N. R. Efficient adjustment for complex covariates: Gaining efficiency with DOPE. arXiv preprint arXiv:2402.12980, 2024.
Cohen, J. P., Hashir, M., Brooks, R., and Bertrand, H. On the limits of cross-domain generalization in automated Xray prediction. In Medical Imaging with Deep Learning, pp. 136–155. PMLR, 2020.
Cohen, J. P., Viviano, J. D., Bertin, P., Morrison, P., Torabian, P., Guarrera, M., Lungren, M. P., Chaudhari, A., Brooks, R., Hashir, M., et al. TorchXRayVision: A library of chest X-ray datasets and models. In International Conference on Medical Imaging with Deep Learning, pp. 231–249. PMLR, 2022.
Dai, B., Shen, X., and Wang, J. Embedding learning. Journal of the American Statistical Association, 117(537): 307–319, 2022.
Deaton, A. and Cartwright, N. Understanding and misunderstanding randomized controlled trials. Social science & medicine, 210:2–21, 2018.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Burstein, J., Doran, C., and Solorio, T. (eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
Dhawan, N., Cotta, L., Ullrich, K., Krishnan, R. G., and Maddison, C. J. End-To-End Causal Effect Estimation from Unstructured Natural Language Data. In Advances in Neural Information Processing Systems, volume 37, pp. 77165–77199, 2024.
Erhan, D., Courville, A., Bengio, Y., and Vincent, P. Why does unsupervised pre-training help deep learning? In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 201–208. JMLR Workshop and Conference Proceedings, 2010.
Farrell, M. H., Liang, T., and Misra, S. Deep neural networks for estimation and inference. Econometrica, 89(1): 181–213, 2021.
Fefferman, C., Mitter, S., and Narayanan, H. Testing the manifold hypothesis. Journal of the American Mathematical Society, 29(4):983–1049, 2016.
Fukunaga, K. and Olsen, D. R. An Algorithm for Finding Intrinsic Dimensionality of Data. IEEE Transactions on computers, 100(2):176–183, 1971.
Gong, S., Boddeti, V. N., and Jain, A. K. On the intrinsic dimensionality of image representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3987–3996, 2019.
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708, 2017.
Jerzak, C. T., Johansson, F., and Daoud, A. Estimating causal effects under image confounding bias with an application to poverty in Africa. arXiv preprint arXiv:2206.06410, 2022.
Jerzak, C. T., Johansson, F., and Daoud, A. Integrating earth observation data into causal inference: challenges and opportunities. arXiv preprint arXiv:2301.12985, 2023a.
Jerzak, C. T., Johansson, F. D., and Daoud, A. Image-based Treatment Effect Heterogeneity. In Proceedings of the Second Conference on Causal Learning and Reasoning, pp. 531–552. PMLR, 2023b.
Johnsson, K., Soneson, C., and Fontes, M. Low Bias Local Intrinsic Dimension Estimation from Expected Simplex Skewness. IEEE transactions on pattern analysis and machine intelligence, 37(1):196–202, 2015.
Julious, S. A. and Mullee, M. A. Confounding and Simpson’s paradox. BMJ, 309(6967):1480–1481, 1994.
Kallus, N., Mao, X., and Udell, M. Causal inference with noisy and missing covariates via matrix factorization. Advances in Neural Information Processing Systems, 31, 2018.
Kermany, D., Zhang, K., and Goldbaum, M. Labeled Optical Coherence Tomography (OCT) and Chest X-Ray Images for Classification. Mendeley Data, 2018.
Klaassen, S., Teichert-Kluge, J., Bach, P., Chernozhukov, V., Spindler, M., and Vijaykumar, S. DoubleMLDeep: Estimation of Causal Effects with Multimodal Data. arXiv preprint arXiv:2402.01785, 2024.
Kohler, M. and Langer, S. On the rate of convergence of fully connected deep neural network regression estimates. The Annals of Statistics, 49(4):2231–2249, 2021.
Kohler, M., Langer, S., and Reif, U. Estimation of a regression function on a manifold by fully connected deep neural networks. Journal of Statistical Planning and Inference, 222:160–181, 2023. ISSN 0378-3758. doi: https://doi.org/10.1016/j.jspi.2022.05.008.
Konz, N. and Mazurowski, M. A. The effect of intrinsic dataset properties on generalization: Unraveling learning differences between natural and medical images. In International Conference on Learning Representations, 2024.
Kuroki, M. and Pearl, J. Measurement bias and effect restoration in causal inference. Biometrika, 101(2):423– 437, 2014.
Lee, J. M. and Lee, J. M. Smooth manifolds. Springer, 2012.
Levina, E. and Bickel, P. Maximum likelihood estimation of intrinsic dimension. Advances in Neural Information Processing Systems, 17, 2004.
Lhoest, Q., Villanova del Moral, A., Jernite, Y., Thakur, A., von Platen, P., Patil, S., Chaumond, J., Drame, M., Plu, J., Tunstall, L., Davison, J., ˇSaˇsko, M., Chhablani, G., Malik, B., Brandeis, S., Le Scao, T., Sanh, V., Xu, C., Patry, N., McMillan-Major, A., Schmid, P., Gugger, S., Delangue, C., Matussi\`ere, T., Debut, L., Bekman, S., Cistac, P., Goehringer, T., Mustar, V., Lagunas, F., Rush, A., and Wolf, T. Datasets: A Community Library for Natural Language Processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 175–184, 2021.
Mastouri, A., Zhu, Y., Gultchin, L., Korba, A., Silva, R., Kusner, M., Gretton, A., and Muandet, K. Proximal Causal Learning with Kernels: Two-Stage Estimation and Moment Restriction. In Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 7512–7523. PMLR, 2021.
Miao, W., Geng, Z., and Tchetgen Tchetgen, E. J. Identifying causal effects with proxy variables of an unmeasured confounder. Biometrika, 105(4):987–993, 2018.
Nakada, R. and Imaizumi, M. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality. Journal of Machine Learning Research, 21(174):1–38, 2020.
Ng, A. Y. Feature selection, $L _ { 1 }$ vs. $L _ { 2 }$ regularization, and rotational invariance. In Proceedings of the twenty-first International Conference on Machine Learning, pp. 78, 2004.
Pope, P., Zhu, C., Abdelkader, A., Goldblum, M., and Goldstein, T. The intrinsic dimension of images and its impact on learning. In International Conference on Learning Representations, 2021.
Raskutti, G., Yu, B., and Wainwright, M. J. Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness. Advances in Neural Information Processing Systems, 22, 2009.
Robins, J. M. and Rotnitzky, A. Semiparametric efficiency in multivariate regression models with missing data. Journal of the American Statistical Association, 90(429):122– 129, 1995.
Robinson, P. M. Root-N-consistent semiparametric regression. Econometrica: Journal of the Econometric Society, pp. 931–954, 1988.
Schmidt-Hieber, J. Deep ReLu network approximation of functions on a manifold. arXiv preprint arXiv:1908.00695, 2019.
Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 2020.
Shi, C., Blei, D., and Veitch, V. Adapting Neural Networks for the Estimation of Treatment Effects. Advances in Neural Information Processing Systems, 32, 2019.
Stone, C. J. Optimal global rates of convergence for nonparametric regression. The Annals of Statistics, pp. 1040– 1053, 1982.
Stone, C. J. Additive regression and other nonparametric models. The Annals of Statistics, 13(2):689–705, 1985.
van der Laan, M. J. and Rose, S. Targeted Learning: Causal Inference for Observational and Experimental Data, volume 4. Springer, 2011.
van der Laan, M. J. and Rubin, D. Targeted Maximum Likelihood Learning. The International Journal of Biostatistics, 2(1), 2006.
van der Vaart, A. W. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998.
Van der Vaart, A. W. and Wellner, J. A. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Nature, 2023.
Veitch, V., Wang, Y., and Blei, D. Using embeddings to correct for unobserved confounding in networks. Advances in Neural Information Processing Systems, 32, 2019.
Veitch, V., Sridhar, D., and Blei, D. Adapting text embeddings for causal inference. In Conference on Uncertainty in Artificial Intelligence, pp. 919–928. PMLR, 2020.
Young, E. H. and Shah, R. D. ROSE Random Forests for Robust Semiparametric Efficient Estimation. arXiv preprint arXiv:2410.03471, 2024.
Zhang, J., Xue, W., Yu, Y., and Tan, Y. Debiasing MachineLearning-or AI-Generated Regressors in Partial Linear Models. Available at SSRN, 2023.
Zhang, Y. and Bradic, J. Causal inference through multistage learning and doubly robust deep neural networks. arXiv preprint arXiv:2407.08560, 2024.
# A. Proofs and Additional Results
A.1. Equivalence Class of Representations
Lemma A.1 (Equivalence Class of Representations). Let $( \Omega , { \mathcal { F } } , P )$ be a probability space, and let $Z : \Omega \to \mathbb { R } ^ { d }$ be $a$ measurable map (a random representation). Then for each ILT $Q$ the random variable $Q ( Z )$ satisfies
$$
\sigma \bigl ( Q ( Z ) \bigr ) \ = \ \sigma ( Z ) ,
$$
where $\sigma ( Z )$ denotes the $\sigma$ -algebra generated by the random variable $Z$ . Consequently,
$$
{ \mathcal { Z } } \ = \ \{ Q ( Z ) \ | \ Q \in { \mathcal { Q } } \}
$$
forms an equivalence class of representations that are indistinguishable from the viewpoint of measurable information.
Proof. Each $Q \in \mathcal { Q }$ is an invertible linear transformation. Consequently, $Q$ is a Borel measurable bijection with a Borel measurable inverse. To show $\sigma ( Q ( Z ) ) = \sigma ( Z )$ , consider any Borel set $B \subseteq \mathbb { R } ^ { d }$ . We have
$$
\{ \omega \in \Omega : Q ( Z ( \omega ) ) \in B \} = \{ \omega \in \Omega : Z ( \omega ) \in Q ^ { - 1 } ( B ) \} .
$$
Since $Q ^ { - 1 } ( B )$ is Borel (as $Q$ is a Borel isomorphism), the pre-image $\{ \omega : Z ( \omega ) \in Q ^ { - 1 } ( B ) \}$ belongs to $\sigma ( Z )$ . Similarly for any Borel set $A \subseteq \mathbb { R } ^ { d }$ ,
$$
\{ \omega \in \Omega : Z ( \omega ) \in A \} = \{ \omega \in \Omega : Q ( Z ( \omega ) ) \in Q ( A ) \} ,
$$
which belongs to $\sigma ( Q ( Z ) )$ . Therefore, $\sigma ( Q ( Z ) ) = \sigma ( Z )$ .
# A.2. Proof of Lemma 4.1
Proof. We consider $f$ being $C ^ { s }$ on the open domain $D \subseteq \mathbb { R } ^ { d }$ , so by definition, all partial derivatives of $f$ up to order $s$ exist and are continuous on $D$ . Further, we consider any invetible matrix $Q$ . Such linear transformations are known to be infinitely smooth (as all their partial derivatives of any order exist and are constant, hence continuous). Hence, the function $h = f \circ Q ^ { - 1 }$ is the composition of a $C ^ { s }$ function $f$ with a linear (and thus $C ^ { \infty }$ ) map $Q ^ { - 1 }$ .
Applying the multivariate chain rule, we can easily verify that the differentiability properties of $h$ are inherited from those of $f$ and the linear transformation $Q ^ { - 1 }$ . Specifically, since $Q ^ { - 1 }$ is $C ^ { \infty }$ , and $f$ is $C ^ { s }$ , their composition $h$ retains the $C ^ { s }$ smoothness. Lastly, the (transformed) domain $Q ( D )$ is also open as linear (and thus continuous) transformations preserve the openness of sets in $\mathbb { R } ^ { d }$ . Therefore, $h$ is well-defined and $C ^ { s }$ on $Q ( D )$ . □
# A.3. Proof of Lemma 4.2
Proof. Suppose that $Q$ is an invertible matrix representing the linear map $z \mapsto Q ( z )$ . Denote by $\tilde { Q } = Q ^ { - 1 }$ its inverse and its rows by $\tilde { q } _ { 1 } , \dots , \tilde { q } _ { d }$ .
# (i) Additivity
Assume that $f : \mathcal { X } \mathbb { R }$ is additive, where $\mathcal { X } \subseteq \mathbb { R } ^ { d }$ , such that
$$
f ( \boldsymbol x ) = \sum _ { j = 1 } ^ { d } f _ { j } ( x _ { j } ) ,
$$
and suppose that at least one $f _ { j }$ is nonlinear. Now consider the transformed input space ${ \tilde { \mathcal { X } } } : = Q ( \mathcal { X } ) = \{ Q x \mid x \in \mathcal { X } \}$ , induced by the invertible linear transformation $Q$ . Let $h : \tilde { \mathcal { X } } \to \mathbb { R }$ be given by $h ( \tilde { x } ) : = f ( Q ^ { - 1 } \tilde { x } )$ . Then $h$ represents the same mapping as $f$ but expressed in the transformed coordinate system $\tilde { \mathcal { X } }$ . In particular, $h ( \tilde { x } ) = f ( x )$ , $\forall \tilde { x } \in \tilde { \mathcal { X } }$ . Further, we have
$$
h ( \tilde { x } ) = \sum _ { j = 1 } ^ { d } f _ { j } ( \tilde { q } _ { j } ^ { \top } \tilde { x } ) .
$$
Assume without loss of generality that $f _ { 1 }$ is nonlinear. The set of invertible matrices where $\tilde { q } _ { 1 }$ equals a multiple of a standard basis vector has Haar measure 0. Hence, $f _ { 1 } \big ( \tilde { q } _ { 1 } ^ { \top } \tilde { x } \big )$ is almost everywhere a nonlinear function of all coordinates of $\tilde { x }$ , implying that $h$ is not additive.
# (ii) Sparsity
Assume $f : \mathcal { X } \mathbb { R }$ , where $\boldsymbol { \mathcal { X } } \subseteq \mathbb { R } ^ { d }$ , is sparse linear of the form $f ( x ) = \beta ^ { \top } x$ with $1 \leq \| \beta \| _ { 0 } < d$ . We again consider the transformed input space ${ \tilde { \mathcal { X } } } : = Q ( \mathcal { X } ) = \{ Q x \mid x \in \mathcal { X } \}$ , induced by the invertible linear transformation $Q$ , and define $h : \tilde { \mathcal { X } } \to \mathbb { R }$ given by $h ( \tilde { x } ) : = f ( Q ^ { - 1 } \tilde { x } ) .$ . Then we have $\dot { h ( x ) } = f ( Q ^ { - 1 } \tilde { x } ) = \beta ^ { \top } Q ^ { - 1 } \tilde { x } = : \tilde { \beta } ^ { \top } \tilde { x }$ . While the map $h$ is still linear, the set of matrices $Q$ such that $\| \tilde { \beta } \| _ { 0 } = \| \beta ^ { \top } Q ^ { - 1 } \| _ { 0 } \neq d$ has Haar measure zero. Hence, $h$ is almost everywhere not sparse. □
# A.4. Proof of Lemma 4.3
Proof. As in the previous proof in Appendix A.2, it is essential to note that ILTs $Q$ are linear, invertible maps that are $C ^ { \infty }$ (infinitely differentiable) with inverses that are likewise $C ^ { \infty }$ . Specifically, $Q$ serves as a global diffeomorphism on $\mathbb { R } ^ { d }$ , ensuring that both $Q$ and $Q ^ { - 1 }$ are smooth $( C ^ { \infty } )$ functions.
Given that $M$ is a $d _ { \mathcal { M } }$ -dimensional smooth manifold, for each point $x$ on the manifold $( x \in M )$ , there exists a neighborhood $U \subseteq M$ and a smooth chart $\varphi : U \to \mathbb { R } ^ { d _ { \mathcal { M } } }$ that is a diffeomorphism onto its image. Applying the orthogonal transformation $Q$ to $M$ results in the set $Q ( M )$ , and correspondingly, the image $Q ( U ) \subseteq Q ( M )$ . To construct a smooth chart for $Q ( M )$ , we can consider the map
$$
\tilde { \varphi } : Q ( U ) \to \mathbb { R } ^ { d _ { \mathcal { M } } } , \quad \tilde { \varphi } ( Q ( x ) ) = \varphi ( x ) ,
$$
where $x \in U$ . Since $Q$ is a diffeomorphism, the composition $\tilde { \varphi } = \varphi \circ Q ^ { - 1 }$ restricted to $Q ( U )$ remains a smooth diffeomorphism onto its image. Hence, this defines a valid smooth chart for $Q ( M )$ . Covering $Q ( M )$ with such transformed charts derived from those of $M$ ensures that $Q ( M )$ inherits a smooth manifold structure. Each chart $\tilde { \varphi }$ smoothly maps an open subset of $Q ( M )$ to an open subset of $\mathbb { R } ^ { d _ { \mathcal { M } } }$ , preserving the intrinsic dimension. Therefore, the intrinsic dimension $d _ { \mathcal { M } }$ of the manifold $M$ is preserved under any orthogonal transformation $Q$ , and $Q ( M )$ remains a $d _ { \mathcal { M } }$ -dimensional smooth manifold in $\mathbb { R } ^ { d }$ . □
# A.5. Proof of Lemma 5.3
Proof. Recall that $Q$ is an invertible linear map, $f _ { 0 } = f \circ \psi \colon { \mathcal { M } } \to \mathbb { R }$ , and $\tilde { f } _ { 0 } = f _ { 0 } \circ \psi \circ Q ^ { - 1 } \colon Q ( \mathcal { M } ) \to \mathbb { R }$ . Write $\tilde { f } = \stackrel { \cdot } { f } \circ \tilde { \psi }$ with $\tilde { \psi } = \psi \circ Q ^ { - 1 } \colon Q ( \mathcal { M } ) \to \mathbb { R }$ . Since $\mathcal { M }$ is a smooth manifold, $Q ( \mathcal { M } )$ is a smooth manifold with the same intrinsic dimension $d _ { \mathcal { M } }$ by Lemma 4.3. Since $z \mapsto Q ^ { - 1 }$ is continuous and $\mathcal { M }$ is compact, $Q ( \mathcal { M } )$ is also compact. Next, since $\psi$ is $s _ { \psi }$ -smooth by assumption, $\tilde { \psi }$ is also $s _ { \psi }$ -smooth by Lemma 4.1. Finally, the HCM part $f$ in the two models $f _ { 0 }$ and $\tilde { f } _ { 0 }$ is the same, so they share the same constraint set $\mathcal { P }$ . This concludes the proof. □
# A.6. Proof of Theorem 5.5
We will use Theorem 3.4.1 of Van der Vaart & Wellner (2023) to show that the neural network $\hat { f }$ converges at the rate stated in the theorem. For ease of reference, we restate a slightly simplified version of the theorem adapted to the notation used in our paper. Here and in the following, we write $a \lesssim b$ to indicate $a \leq C b$ for a constant $C \in ( 0 , \infty )$ not depending on $n$ .
Proposition A.2. Let ${ \mathcal { F } } _ { n }$ be a sequence of function classes, $\ell$ be some loss function, $f _ { 0 }$ the estimation target, and
$$
{ \hat { f } } = \underset { f \in \mathcal { F } _ { n } } { \arg \operatorname* { m i n } } { \frac { 1 } { n } } \sum _ { i = 1 } ^ { n } \ell ( f ( Z _ { i } ) , Y _ { i } ) .
$$
Define ${ \mathcal { F } } _ { n , \delta } = \{ f \in { \mathcal { F } } _ { n } \colon \| f - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } \leq \delta \}$ and suppose that for every $\delta > 0$ , it holds
$$
\operatorname* { i n f } _ { f \in { \mathcal { F } _ { n , \delta } \backslash \mathcal { F } _ { n , \delta / 2 } } } \mathbb { E } [ \ell ( f ( Z ) , Y ) ] - \mathbb { E } [ \ell ( f _ { 0 } ( Z ) , Y ) ] \gtrsim \delta ^ { 2 } ,
$$
and, writing $\begin{array} { r } { \bar { \ell } _ { f } ( z , y ) = \ell ( f ( z ) , y ) - \ell ( f _ { 0 } ( z ) , y ) , } \end{array}$ , that
$$
\mathbb { E } \left[ \operatorname* { s u p } _ { f \in \mathcal { F } _ { n , \delta } } \left| \frac { 1 } { n } \sum _ { i = 1 } ^ { n } { \bar { \ell } } _ { f } ( Z _ { i } , Y _ { i } ) - \mathbb { E } [ { \bar { \ell } } _ { f } ( Z , Y ) ] \right| \right] \lesssim \frac { \phi _ { n } ( \delta ) } { \sqrt { n } } ,
$$
for functions $\phi _ { n } ( \delta )$ such that $\delta \mapsto \phi _ { n } ( \delta ) / \delta ^ { 2 - \varepsilon }$ is decreasing for some $\varepsilon > 0$ . If there are ${ \widetilde { f } } _ { 0 } \in { \mathcal { F } } _ { n }$ and $\varepsilon _ { n } \geq 0$ such that
$$
\begin{array} { r l } & { \varepsilon _ { n } ^ { 2 } \gtrsim \mathbb { E } [ \ell ( \tilde { f } _ { 0 } ( Z ) , Y ) ] - \mathbb { E } [ \ell ( f _ { 0 } ( Z ) , Y ) ] , } \\ & { \phi _ { n } ( \varepsilon _ { n } ) \lesssim \sqrt { n } \varepsilon _ { n } ^ { 2 } , } \end{array}
$$
it holds $\| \hat { f } - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } = O _ { p } ( \varepsilon _ { n } )$ .
Proof of Theorem 5.5. Define $\begin{array} { r } { ( s ^ { * } , d ^ { * } ) = \arg \operatorname* { m i n } _ { ( s , p ) \in { \mathcal { P } } \cup ( s _ { \psi } , d _ { { \mathcal { M } } } ) } s / p } \end{array}$ and denote the targeted rate of convergence by
$$
\varepsilon _ { n } = \operatorname* { m a x } _ { ( s , p ) \in \mathcal { P } \cup ( s _ { \psi } , d _ { \mathsf { M } } ) } n ^ { - \frac { s } { 2 s + p } } ( \log n ) ^ { 4 } = n ^ { - \frac { s ^ { * } } { 2 s ^ { * } + d ^ { * } } } ( \log n ) ^ { 4 } .
$$
We now check the conditions of Proposition A.2.
Condition (A.2.1): Follows from Assumption 5.4, since
$$
\operatorname* { i n f } _ { f \in { \mathscr F } _ { n , \delta } \backslash { \mathscr F } _ { n , \delta / 2 } } \mathbb E [ \ell ( f ( Z ) , Y ) ] - \mathbb E [ \ell ( f _ { 0 } ( Z ) , Y ) ] \geq \operatorname* { i n f } _ { f \in { \mathscr F } _ { n , \delta } \backslash { \mathscr F } _ { n , \delta / 2 } } a \| f - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } ^ { 2 } \geq \frac { a } { 4 } \delta ^ { 2 } .
$$
Condition (A.2.2): Let $N ( \varepsilon , \mathcal { F } , L _ { 2 } ( Q ) )$ be the minimal number of $\varepsilon$ -balls required to cover $\mathcal { F }$ in the $L _ { 2 } ( Q )$ -norm. Theorem 2.14.2 of Van der Vaart & Wellner (2023) states that eq. (A.2.2) holds with
$$
\phi _ { n } ( \delta ) = J _ { n } ( \delta ) \left( 1 + \frac { J _ { n } ( \delta ) } { \delta ^ { 2 } \sqrt { n } } \right) ,
$$
where
$$
J _ { n } ( \delta ) = \operatorname* { s u p } _ { Q } \int _ { 0 } ^ { \delta } { \sqrt { 1 + \log N ( \epsilon , { \mathcal { F } } ( L , \nu ) , L _ { 2 } ( Q ) ) } } d \epsilon ,
$$
with the supremum taken over all probability measures $Q$ . Lemma A.3 in Appendix A.7 gives
$$
J _ { n } ( \delta ) \lesssim \delta \sqrt { \log ( 1 / \delta ) } L \nu \sqrt { \log ( L \nu ) } ,
$$
which implies that $\delta \mapsto \phi _ { n } ( \delta ) / \delta ^ { 2 - 1 / 2 }$ is decreasing, so the condition is satisfied.
Condition (A.2.3): According to Lemma A.4 in Appendix A.7 there are sequences $L _ { n } = O ( \log \varepsilon _ { n } ^ { - 1 } )$ , $\nu _ { n } = O ( \varepsilon _ { n } ^ { - d ^ { * } / 2 s ^ { * } } )$ such that there is a neural network $\widetilde { f } _ { 0 } \in \mathcal { F } ( L _ { n } , \nu _ { n } )$ with
$$
\operatorname* { s u p } _ { z \in \mathcal { M } } | \widetilde { f } _ { 0 } ( z ) - f _ { 0 } ( z ) | = O ( \varepsilon _ { n } ) .
$$
Together with Assumption 5.4, this implies
$$
\begin{array} { r } { \mathbb { E } [ \ell ( \tilde { f } _ { 0 } ( Z ) , Y ) ] - \mathbb { E } [ \ell ( f _ { 0 } ( Z ) , Y ) ] \le b \| \tilde { f } _ { 0 } - f _ { 0 } \| _ { L _ { 2 } ( P _ { Z } ) } ^ { 2 } \le b \underset { z \in { \cal M } } { \operatorname* { s u p } } | \widetilde { f } _ { 0 } ( z ) - f _ { 0 } ( z ) | ^ { 2 } \lesssim \varepsilon _ { n } ^ { 2 } , } \end{array}
$$
as required.
Condition (A.2.4): Using $L _ { n } = O ( \log \varepsilon _ { n } ^ { - 1 } )$ , $\nu _ { n } = O ( \varepsilon _ { n } ^ { - d ^ { * } / 2 s ^ { * } } )$ and our bound on $J _ { n } ( \delta )$ from Lemma A.3, we get
$$
J _ { n } ( \delta ) \lesssim \delta \log ^ { 1 / 2 } ( \delta ^ { - 1 } ) \varepsilon _ { n } ^ { - \frac { d ^ { * } } { 2 s ^ { * } } } \log ^ { 3 / 2 } ( \varepsilon _ { n } ^ { - 1 } ) .
$$
Now observe that
$$
\begin{array} { r l } & { \frac { \phi _ { n } ( \varepsilon _ { n } ) } { \varepsilon _ { n } ^ { 2 } } \lesssim \varepsilon _ { n } ^ { - \frac { d ^ { * } } { s ^ { * } } - 1 } \log ^ { 2 } ( \varepsilon _ { n } ^ { - 1 } ) + \frac { \varepsilon _ { n } ^ { - \frac { d ^ { * } } { s ^ { * } } - 2 } \log ^ { 4 } ( \varepsilon _ { n } ^ { - 1 } ) } { \sqrt { n } } } \\ & { \phantom { m m m m m } = \varepsilon _ { n } ^ { - \frac { 2 s ^ { * } + d ^ { * } } { 2 s ^ { * } } } \log ^ { 2 } ( \varepsilon _ { n } ^ { - 1 } ) + \varepsilon _ { n } ^ { - \frac { 2 s ^ { * } + d ^ { * } } { s ^ { * } } } \log ^ { 4 } ( \varepsilon _ { n } ^ { - 1 } ) n ^ { - 1 / 2 } } \\ & { \phantom { m m m m } \lesssim n ^ { 1 / 2 } ( \log n ) ^ { - 2 } + n ^ { 1 / 2 } , } \end{array}
$$
where the last step follows from our definition of $\varepsilon _ { n }$ and the fact that $\log ( \varepsilon _ { n } ^ { - 1 } ) \lesssim \log n$ . In particular, $\varepsilon _ { n }$ satisfies $\phi _ { n } ( \varepsilon _ { n } ) \lesssim \sqrt { n } \varepsilon _ { n } ^ { 2 }$ , which concludes the proof of the theorem. □
# A.7. Auxiliary results
Lemma A.3. Let $\mathcal { F } ( L , \nu )$ be a set of neural networks with $\begin{array} { r } { \operatorname* { s u p } _ { f \in \mathcal { F } ( \mathcal { L } , \nu ) } \| f \| _ { \infty } < \infty } \end{array}$ . For all $\delta > 0$ sufficiently small, it holds
$$
\operatorname* { s u p } _ { Q } \int _ { 0 } ^ { \delta } \sqrt { 1 + \log N ( \epsilon , \mathcal { F } ( L , \nu ) , L _ { 2 } ( Q ) ) } d \epsilon \lesssim \delta \sqrt { \log ( 1 / \delta ) } L \nu \sqrt { \log ( L \nu ) } .
$$
Proof. Denote by $\operatorname { V C } ( { \mathcal { F } } )$ the Vapnik-Chervonenkis dimension of the set $\mathcal { F }$ . By Theorem 2.6.7 in Van der Vaart & Wellner (2023), it holds
$$
\operatorname* { s u p } _ { Q } \log N ( \varepsilon , \mathcal { F } , L _ { 2 } ( Q ) ) \lesssim \log ( 1 / \varepsilon ) \mathrm { V C } ( \mathcal { F } ) ,
$$
for $\varepsilon > 0$ sufficiently small. By Theorem 7 of Bartlett et al. (2019), we have
$$
\operatorname { V C } ( { \mathcal { F } } ( L , \nu ) ) \lesssim L ^ { 2 } \nu ^ { 2 } \log ( L \nu ) .
$$
For small $\varepsilon$ , this gives
$$
\operatorname* { s u p } _ { Q } \sqrt { 1 + \log N ( \varepsilon , \mathcal { F } ( L , \nu ) , L _ { 2 } ( Q ) ) } \lesssim \sqrt { \log ( 1 / \varepsilon ) } L \nu \sqrt { \log ( L \nu ) } ,
$$
Integrating the right-hand side gives the desired result.
Lemma A.4. Suppose $f _ { 0 }$ satisfies Assumption 5.2 for a given constraint set $\mathcal { P }$ and $( s _ { \psi } , d _ { \mathcal { M } } )$ . Define $( s ^ { * } , d ^ { * } ) \ =$ $\mathrm { \ r g \ m i n } _ { ( s , p ) \in \mathcal { P } \cup ( s _ { \psi } , d _ { \mathcal { M } } ) } s / p$ . Then for any $\varepsilon > 0$ sufficiently small, there is a neural network architecture $\mathcal { F } ( L , \nu )$ with $L = O ( \log \varepsilon ^ { - 1 } )$ , $\nu = O ( \varepsilon ^ { - d ^ { * } / 2 s ^ { * } } )$ such that there is $\widetilde { f } _ { 0 } \in \mathcal { F } ( L , \nu )$ with
$$
\operatorname* { s u p } _ { z \in \mathcal { M } } | \widetilde { f } _ { 0 } ( z ) - f _ { 0 } ( z ) | = O ( \varepsilon ) .
$$
Proof. The proof proceeds in three steps. We first approximate the embedding component $\psi$ by a neural network $\widetilde { \psi }$ , then the HCM component $f$ by a neural network $\widetilde { f }$ . Finally, we concatenate the networks to approximate the composition $f _ { 0 } = f \circ \psi$ by $\widetilde { f } _ { 0 } = \widetilde { f } \circ \widetilde { \psi }$ .
Approximation of the embedding component. Recall that $\psi \colon \mathcal { M } \mathbb { R } ^ { d }$ is a $s _ { \psi }$ -smooth mapping. Write $\psi ( z ) = $ $( \psi _ { 1 } ( z ) , \dots , \psi _ { d } ( z ) )$ and note that each $\psi _ { j } \colon \mathcal { M } \to \mathbb { R }$ is also $s _ { \psi }$ -smooth. Since $\mathcal { M }$ is a smooth $d _ { \mathcal { M } }$ -dimensional manifold, it has Minkowski dimension $d _ { \mathcal { M } }$ . Then Theorem 2 of Kohler et al. (2023) (setting $M = \varepsilon ^ { - 1 / 2 s _ { \psi } }$ in their notation) implies that there is a neural network $\widetilde { \psi } _ { j } \in \mathcal { F } ( L _ { \psi } , \nu _ { \psi } )$ with $L _ { \psi } = O ( \log \varepsilon ^ { - 1 } )$ and $\nu _ { \psi } = O ( \varepsilon ^ { - d _ { \mathcal { M } } / 2 s _ { \psi } } )$ such that
$$
\operatorname* { s u p } _ { z \in \mathcal { M } } | \widetilde { \psi } _ { j } ( z ) - \psi _ { j } ( z ) | = O ( \varepsilon ) .
$$
Parallelize the networks $\widetilde { \psi } _ { j }$ into a single network $\widetilde { \psi } : = ( \widetilde { \psi } _ { 1 } , \dots , \widetilde { \psi } _ { d } ) \colon \mathcal { M } \mathbb { R } ^ { d } .$ . By construction, the parallelized network $\widetilde { \psi }$ has $L _ { \psi }$ layers, width $d \times \nu _ { \psi } = { \cal O } ( \nu _ { \psi } )$ , and sa sefies
$$
\operatorname* { s u p } _ { z \in \mathcal { M } } \| \widetilde { \psi } ( z ) - \psi ( z ) \| = O ( \varepsilon ) .
$$
Approximation of the HCM component. Let $a \in ( 0 , \infty )$ be arbitrary. By Theorem 3(a) of Kohler & Langer (2021) (setting $M _ { i , j } ~ = ~ \varepsilon ^ { - 1 / 2 p _ { j } ^ { ( i ) } }$ in their notation), there is a neural network $\widetilde { f } \in \mathcal { F } ( L _ { f } , \nu _ { f } )$ with $L _ { f } \ = \ O ( \log \varepsilon ^ { - 1 } )$ and $\nu _ { f } = O ( \varepsilon ^ { - \tilde { d } ^ { * } / 2 s ^ { * } } )$ such that
$$
\operatorname* { s u p } _ { x \in [ - a , a ] ^ { d } } | { \widetilde { f } } ( x ) - f ( x ) | = O ( \varepsilon ) ,
$$
Combined approximation. Now concatenate the networks $\widetilde { \psi }$ and $\widetilde { f }$ to obtain the network $\widetilde { f } _ { 0 } = \widetilde { f } \circ \widetilde { \psi } \in \mathcal { F } ( L _ { \psi } +$ $L _ { f } , \operatorname* { m a x } \{ \nu _ { \psi } , \nu _ { f } \} )$ . Observe that $L _ { \psi } + L _ { f } = O ( \log \varepsilon ^ { - 1 } )$ and $\nu _ { \psi } + \nu _ { f } = O ( \varepsilon ^ { - d ^ { * } / 2 s ^ { * } } )$ , so the ne weork es the right size. It remains to show that its approximation error is sufficiently small. Define
$$
\gamma : = \operatorname* { s u p } _ { z \in \mathcal { M } } \| \widetilde { \psi } ( z ) - \psi ( z ) \| ,
$$
which is $O ( \varepsilon )$ by the construction of $\widetilde { \psi }$ ,
$$
a : = \operatorname* { s u p } _ { z \in \mathcal { M } } \| \psi ( z ) \| + \gamma ,
$$
which is $O ( 1 )$ by assumption, and
$$
K : = \operatorname* { s u p } _ { x , x ^ { \prime } } { \frac { | f ( x ) - f ( x ^ { \prime } ) | } { \| x - x ^ { \prime } \| } } ,
$$
which is finite since $f$ is Lipschitz due to $\mathrm { m i n } _ { ( s , d ) \in \mathcal { P } } s \geq 1$ and the fact that finite compositions of Lipschitz functions are Lipschitz. By the triangle inequality, we have
$$
\begin{array} { r l } & { \underset { z \in \mathcal { M } } { \operatorname* { s u p } } \left| \widetilde { f } _ { 0 } ( z ) - f _ { 0 } ( z ) \right| \leq \underset { z \in \mathcal { M } } { \operatorname* { s u p } } \left| \widetilde { f } ( \widetilde { \psi } ( z ) ) - f ( \widetilde { \psi } ( z ) ) \right| + \underset { z \in \mathcal { M } } { \operatorname* { s u p } } \left\| f ( \widetilde { \psi } ( z ) ) - f ( \psi ( z ) ) \right\| } \\ & { \qquad \leq \underset { x \in [ - a , a ] ^ { d } } { \operatorname* { s u p } } | \widetilde { f } ( x ) - f ( x ) | + K } \\ & { \qquad = O ( \varepsilon ) , } \end{array}
$$
as claimed.
# A.8. Proof of Theorem 5.7
Proof. We validate the conditions of Theorem II.1 of Chernozhukov et al. (2017). Our Assumption 5.6 covers all their moment and boundedness conditions on $g$ and $m$ . By Theorem 5.5, we further know that
$$
\| \hat { m } ^ { ( k ) } - m \| _ { L _ { 2 } ( P _ { Z } ) } + \| \hat { g } ^ { ( k ) } - g \| _ { L _ { 2 } ( P _ { Z } ) } = o _ { p } ( 1 ) .
$$
Further, Theorem 5.5 yields
$$
\begin{array} { r } { ^ { ( k ) } - m \big \| _ { L _ { 2 } ( P _ { \mathcal Z } ) } \times \big \| \hat { g } ^ { ( k ) } - g \big \| _ { L _ { 2 } ( P _ { \mathcal Z } ) } = O _ { p } \left( \operatorname* { m a x } _ { ( s , p ) \in \mathcal { P } _ { g } \cup ( s _ { \psi } , d _ { \mathcal M } ) } n ^ { - \frac { s } { 2 s + p } } \times \operatorname* { m a x } _ { ( s ^ { \prime } , p ^ { \prime } ) \in \mathcal { P } _ { m } \cup ( s _ { \psi } ^ { \prime } , d _ { \mathcal M } ) } n ^ { - \frac { s ^ { \prime } } { 2 s ^ { \prime } + p } } \right) } \\ { = O _ { p } \left( \operatorname* { m a x } _ { ( s , p ) \in \mathcal { P } _ { g } \cup ( s _ { \psi } , d _ { \mathcal M } ) } \cdot \operatorname* { m a x } _ { ( s ^ { \prime } , p ^ { \prime } ) \in \mathcal { P } _ { m } \cup ( s _ { \psi } ^ { \prime } , d _ { \mathcal M } ) } n ^ { - \left( \frac { s } { 2 s + p } + \frac { s ^ { \prime } } { 2 s ^ { \prime } + p ^ { \prime } } \right) } \right) } \end{array}
$$
We have to show that the term on the right is of order $o _ { p } ( n ^ { - 1 / 2 } )$ . Observe that
$$
\begin{array} { r l r l } { \displaystyle \frac { s } { 2 s + p } + \frac { s ^ { \prime } } { 2 s ^ { \prime } + p ^ { \prime } } > \frac { 1 } { 2 } } & { \Leftrightarrow } & { \displaystyle \frac { 1 } { 2 + p / s } + \frac { 1 } { 2 + p ^ { \prime } / s ^ { \prime } } > \frac { 1 } { 2 } } \\ & { \Leftrightarrow } & { \displaystyle \frac { 4 + p / s + p ^ { \prime } / s ^ { \prime } } { ( 2 + p / s ) ( 2 + p ^ { \prime } / s ^ { \prime } ) } > \frac { 1 } { 2 } } \\ & { \Leftrightarrow } & { 4 + p / s + p ^ { \prime } / s ^ { \prime } > 2 + p / s + p ^ { \prime } / s ^ { \prime } + \displaystyle \frac { p p ^ { \prime } } { 2 s s ^ { \prime } } } \\ & { \Leftrightarrow } & { 4 > \displaystyle \frac { p p ^ { \prime } } { s s ^ { \prime } } . } \end{array}
$$
Thus, our condition
$$
\operatorname* { m i n } _ { \substack { ( s , p ) \in \mathcal { P } _ { g } \cup ( s _ { \psi } , d \mathcal { M } ) } } \frac { s } { p } \times \operatorname* { m i n } _ { \substack { ( s ^ { \prime } , p ^ { \prime } ) \in \mathcal { P } _ { m } \cup ( s _ { \psi } ^ { \prime } , d \mathcal { M } ) } } \frac { s ^ { \prime } } { p ^ { \prime } } > \frac { 1 } { 4 } ,
$$
implies
$$
\| \hat { m } ^ { ( k ) } - m \| _ { L _ { 2 } ( P _ { Z } ) } \times \| \hat { g } ^ { ( k ) } - g \| _ { L _ { 2 } ( P _ { Z } ) } = o _ { p } ( n ^ { - 1 / 2 } ) ,
$$
as required.
# B. Additional Related Literature & Visualizations
# B.1. Empirical Evidence of Low Intrinsic Dimensions
Using different intrinsic dimension (ID) estimators such as the maximum likelihood estimator (MLE; Levina & Bickel, 2004) on popular image datasets such as ImageNet (Deng et al., 2009), several works find clear empirical evidence for low ID of both the image data and related latent features obtained from pre-trained NNs (Gong et al., 2019; Ansuini et al., 2019; Pope et al., 2021). The existence of the phenomenon of low intrinsic dimensions was also verified in the medical imaging (Konz & Mazurowski, 2024) and text-domain (Aghajanyan et al., 2020). All of the mentioned research finds a striking inverse relation between intrinsic dimensions and (state-of-the-art) model performance, which nicely matches the previously introduced theory about ID-related convergence rates.
# B.2. Hierarchical Composition Model (HCM) Visualization
This section provides an illustration of the Hierarchical Composition Model (HCM) that was formally introduced in Definition 5.1. As the name suggests, every HCM is a composition of HCMs of lower level. In Figure 8 we give an illustration of a particular HCM of level 2 and constraint set $\mathcal { P } _ { 2 1 }$ , which we abbreviated by $H C M ( 2 , \mathcal { P } _ { 2 1 } )$ . Following the notation of Definition 5.1, the $H C M ( 2 , \mathcal { P } _ { 2 1 } )$ corresponds to the function $f : \mathbb { R } ^ { d } \mathbb { R }$ defined by $f ( \boldsymbol { x } ) = h _ { 1 } ^ { [ 2 ] } ( h _ { 1 } ^ { [ 1 ] } ( \boldsymbol { x } ) , \dots , h _ { p } ^ { [ 1 ] } ( \boldsymbol { x } ) )$ Each $h _ { j } ^ { [ 1 ] } ( x )$ for $j \in \{ 1 , \dotsc , p \}$ corresponds to a HCM of level 1, which itself are compositions of HCMs of level 0. Each of the latter corresponds to a feature in the data. The constraint set of each HCM corresponds to the collection of pairs of the degree of smoothness and number of inputs of each HCM it is composed of. For example, assuming that $h _ { 1 } ^ { [ 2 ] }$ is a $s$ -smooth function, then the constraint set of the $H C M ( 2 , \mathcal { P } _ { 2 1 } )$ function $f$ is $\mathcal { P } _ { 2 1 } = \bigcup _ { j = 1 } ^ { p } \mathcal { P } _ { 1 j } \cup ( s , p )$ . The HCM framework fits both regression and classification. In the latter, the conditional probability would need to satisfy the HCM condition, and any non-linear link function for classification would just correspond to a simple function in the final layer of the HCM.
Figure 8. Visualization of a HCM: The illustration depicts a HCM of level 2 and constraint set $\mathcal { P } _ { 2 1 }$ . The HCM of level 2 is a composition of HCMs of level $\boldsymbol { { I } }$ , i.e., $h _ { 1 } , h _ { 2 } , \ldots , h _ { p }$ . The latter are itself compositions of HCMs of level 0, each corresponding to a feature of the data.
# C. Experimental Details and Computing Environment
We conduct several simulation studies to investigate the performance of different Average Treatment Effect (ATE) estimators of a binary treatment on some outcome in the presence of a confounding induced by non-tabular data. In the experiments, the confounding is induced by the labels, i.e., the pneumonia status or the review as well as more complex functions of the pre-trained features. Nuisance function estimation is based on the pre-trained representations that are obtained from passing the non-tabular data through the pre-trained neural models and extracting the last hidden layer features.
# C.1. Data and Pre-trained Models
IMDb For the text data, we utilize the IMDb Movie Reviews dataset from Lhoest et al. (2021) consisting of 50,000 movie reviews labeled for sentiment analysis. For each review, we extract the [CLS] token, a 768-dimensional vector per review entry, of the pre-trained Transformer-based model BERT (Devlin et al., 2019). To process the text, we use BERT’s subword tokenizer (bert-base-uncased) and truncate sequences to a maximum length of 128 tokens. We use padding if necessary. After preprocessing and extraction of pre-trained representations, we sub-sampled 1,000 and 4,000 pre-trained representations for the two confounding setups to make the simulation study tractable.
X-Ray For the image data simulation, we use the dataset from Kermany et al. (2018) that originally contains 5,863 chest X-ray images of children that were obtained from routine clinical care in the Guangzhou Women and Children’s Medical Center, Guangzhou. We preprocess the data such that each patient appears only once in the dataset. This reduces the effective sample size to 3,769 chest X-rays. Each image is labeled according to whether the lung disease pneumonia is present or not. The latent features are obtained by passing the images through a pre-trained convolutional neural network and extracting the 1024-dimensional last hidden layer features of the model. For this purpose, we use a pre-trained Densenet-121 model from the TorchXRayVision library (Cohen et al., 2022). Specifically, we use the model called densenet121-res224-all, which is a Densenet-121 model with resolution $2 2 4 \times 2 2 4$ that was pre-trained on all chest X-ray datasets considered in Cohen et al. (2020). We chose this model for the extraction of pre-trained representation in our experiments, based on its superior performance in benchmark studies conducted in prior work (Cohen et al., 2020). Note that the dataset from the Guangzhou Women and Children’s Medical Center that we use, was not used during the training of the model. This is important from a theoretical and practical viewpoint, as the confounding simulation via labels might otherwise be too easy to adjust for given that the model could have memorized the input data. However, using this kind of data we rule out this possibility.
# C.2. Confounding
As introduced in the main text, we simulate confounding both on the true labels of the non-tabular data as well as encodings from a trained autoencoder. While this induces a different degree of complexity for the confounding, the simulated confounding is somewhat similar in both settings. We first discuss the simpler setting of Label Confounding. In all of the experiments, the true average treatment effect was chosen to be two.
Label Confounding Label Confounding was induced by simulating treatment and outcome both dependent on the binary label. In the case of the label being one (so in case of pneumonia or in case of a positive review), the probability of treatment is 0.7 compared to 0.3 when the label is zero. The chosen probabilities guaranteed a sufficient amount of overlap between the two groups. The outcome $Y$ is simulated based on a linear model including a binary treatment indicator multiplied by the true treatment effect (chosen to be 2), as well as a linear term for the label. Gaussian noise is added to obtain the final simulated outcome. The linear term for the label has a negative coefficient in order to induce a negative bias to the average treatment setup compared to a randomized setting. Given that the confounding simulation is only based on the labels, the study was in fact randomized with respect to any other source of confounding.
Complex Confounding To simulate Complex Confounding with structural smoothness and sparsity, we first train an autoencoder (AE) with 5-dimensional latent space on the pre-trained representations, both in the case of the text and image representations. These AE-encodings are then used to simulate confounding similarly as in the previous experiment. The only difference is that we now sample the coefficients for the 5-dimensional AE-encodings. For the propensity score, these are sampled from a normal distribution, while the sampled coefficients for outcome regression are restricted to be negative, to ensure a sufficiently larger confounding effect, that biases naive estimation. We choose a 5-dimensional latent space to allow for sufficiently good recovery of the original pre-trained representations.
# C.3. ATE Estimators
We estimate the ATE using multiple methods across 5 simulation iterations. In each of these, we estimate a Naive estimator that simply regresses the outcome on treatment while not adjusting for confounding. The Oracle estimator uses a linear regression of outcome on both treatment and the true label that was used to induce confounding. The S-Learner estimates the outcome regression function $g ( t , z ) = \mathbb { E } [ Y \mid T = t , Z = z ]$ by fitting a single model $\hat { g } ( t , z )$ to all data, treating the treatment indicator as a feature. The average treatment effect estimate of the S-Learner is then given by
$$
\widehat { A T E _ { S } } = \frac { 1 } { n } \sum _ { i = 1 } ^ { n } \hat { g } ( 1 , z _ { i } ) - \hat { g } ( 0 , z _ { i } ) .
$$
In contrast, the Double Machine Learning (DML) estimators estimates both the outcome regression function and the propensity score to obtain its double robustness property. In our experiments, DML estimators use the partialling-out approach for ATE estimation, which is further discussed in the next paragraph.
In the Label Confounding experiments, both the S-Learner and DML estimators are used in combination with linear and random forest-based nuisance estimators. DML (Linear) uses standard linear regression for the estimation of the outcome regression function, and logistic regression with $L _ { 2 }$ -penalty for the estimation of the propensity score. Both nuisance function estimators are ILT-invariant. DML (Lasso) uses $L _ { 1 }$ -penalized linear and logistic regression with cross-validated penalty parameter selection for the outcome regression and propensity score estimation, respectively. The S-Learner (Linear) and S-Learner (Lasso) use unpenalized and $L _ { 1 }$ -penalized linear regression for the outcome regression, respectively. The random forest-based nuisance estimation (both for DML and S-Learner) is based on the standard random forest implementation from scikit-learn. The number of estimated trees is varied in certain experiments to improve numerical stability.
In the Complex Confounding experiments, we also use neural network-based nuisance estimators for DML and the S-Learner. For this purpose, we employed neural networks with a depth of 100 and a width of 50 while using ReLU activation and Adam for optimization. While DML (NN) and S-Learner (NN) use neural networks for the outcome regression, logistic regression is employed in DML (NN) for propensity score estimation to enhance numerical stability. Generally, DML is used with sample splitting and with two folds for cross-validation. For the S-Learner and DML the Python packages CausalML (Chen et al., 2020) and DoubleML (Bach et al., 2022) are used, respectively.
Partially Linear Model and Orthogonal Scores In our experiments, we simulated two different types of confounding. In both cases we use non-tabular data to adjust for this confounding, given that the confounding inducing information is contained in this data source, but not available otherwise. However, as this information is non-linearly embedded in the non-tabular data, the model that we aim to estimate follows the structure of a so-called partially linear model (PLM). Given a binary treatment variable $T$ , the PLM is a special case of the more general confounding setup that we consider in the theoretical discussion of this paper. Specifically, the PLM considers the case where the outcome regression function in (2) decomposes as
$$
g ( T , W ) = \mathbb { E } [ Y | T , W ] = \theta _ { 0 } T + \tilde { g } ( W ) .
$$
The structure of the propensity score in (3) remains the same. The parameter $\theta _ { 0 }$ in the PLM corresponds to the target parameter considered in (1), namely the ATE. In their theoretical investigation, Chernozhukov et al. (2018) discuss ATE estimation both in the partially linear model and in the more general setup, which they refer to as interactive model. Given that we consider the more general case in Section 5, the orthogonalized score stated in this section matches that of Chernozhukov et al. (2018) for the ATE in the interactive model. In case of the PLM, Chernozhukov et al. (2018) consider two other orthogonalized scores, one of which is the so-called partialling-out score function, which dates back to Robinson (1988). The partialling-out score corresponds to an unscaled version of the ATE score in the (binary) interactive model in case the outcome regression decomposes as in (6). The scaling is based on certain estimated weights. Therefore, score functions as the partialling-out score are sometimes referred to as unweighted scores (Young & Shah, 2024). While the theoretical result in Theorem 5.7 could also be obtained for DML with partialling-out score under similar assumptions, the key requirement being again (5), the approach may not be asymptotically efficient given that it does not use the efficient influence function. However, the potential loss of asymptotic efficiency is often outweighed by increased robustness in finite-sample estimation when using unweighted scores, which has contributed to the popularity of approaches such as the partialling-out method in practice (van der Vaart (1998, $\ S 2 5 . 9 _ { , }$ ), Chernozhukov et al. (2018, $\ S 2 . 2 . 4 \rangle$ , Young & Shah (2024)). Accordingly, we also adapted the partialling-out approach in the DML-based ATE estimation in our experiments.
# C.4. Intrinsic Dimensions of Pre-trained Representations
In Section 6.2 we also provide empirical evidence that validates the hypothesis of low intrinsic dimensions of pre-trained representations. For this, we use different pre-trained models from the from the TorchXRayVision library (Cohen et al., 2022). All of these are trained on chest $\mathrm { \Delta } \mathrm { X }$ -rays and use a Densenet-121 (Huang et al., 2017) architecture. Given the same architecture of the models, the dimension of the last layer hidden features is 1024 for all models. The different names of the pre-trained models on the $\mathbf { X }$ -axis in Figure 6 indicate the dataset they were trained on. We use the 3,769 chest X-rays from the X-rays dataset described above and pass these through each pre-trained model to extract the last layer features of each model, which we call the pre-trained representations of the data. Subsequently, we use standard intrinsic dimension estimators such as the Maximum Likelihood Estimator (MLE) (Levina & Bickel, 2004), the Expected Simplex Skewness (ESS) estimator (Johnsson et al., 2015), and the local Principal Component Analysis (lPCA) estimator (Fukunaga & Olsen,
1971), with a choice of number of neighbors set to 5, 25 and 50, respectively. While the intrinsic dimension estimates vary by the pre-trained model and the intrinsic dimension estimator used, the results indicated that the intrinsic dimension of the pre-trained representations is much smaller than the dimension of the ambient space (1024).
# C.5. Double Machine Learning with Convolutional Neural Networks as Nuisance Estimators
In Section 6.3, we compare DML with pre-trained neural networks against DML without pre-trained neural networks. This experiment investigates the benefits of pre-training for nuisance estimation in the context of DML-based ATE estimation. Experiments are conducted on the X-Ray dataset, and confounding is simulated based on Label Confounding. DML (Pretrained) uses the same pre-trained Densenet-121 from the TorchXRayVision library (Cohen et al., 2022) that was previously used as pre-trained neural feature extractors in the other image-based experiments. Building on this pre-trained feature extractor, DML (Pre-trained) then uses linear models on the pre-trained features for the nuisance function estimation. In contrast, DML without a pre-trained feature extractor uses standard Convolutional Neural Networks (CNNs) to estimate the nuisance functions directly on top of the images. The experiment of Figure 7 uses a five-layer CNN with $3 { \times } 3$ convolutions, batch normalization, ReLU activation, and max pooling, followed by a model head consisting of fully connected layers with dropout. Training uses Adam optimization with early stopping. When the network is utilized for propensity score estimation, the outputs are converted to probabilities via a sigmoid activation. Both DML with and without pre-trained feature extractors use the “partialling-out” approach in combination with sample-splitting for doubly-robust ATE estimation.
# C.6. Computational Environment
All computations were performed on a user PC with Intel(R) Core(TM) i7-8665U CPU $@$ 1.90GHz, 8 cores, and 16 GB RAM. Run times of each experiment do not exceed one hour. The code to reproduce the results of the experiments can be found at https://github.com/rickmer-schulte/Pretrained-Causal-Adjust.
# D. Further Experiments
This section provides additional results from experiments that extend those discussed in the main body of the paper.
# D.1. Comparison of ATE Estimators
The results depicted in Figure 9 and Figure 10 complement Figure 4 and Figure 5 that are discussed in Section 6.
Label Confounding (X-Ray) The results for the Label Confounding simulation based on the X-Ray dataset over 5 simulations are depicted in Figure 9. As before, the naive estimator shows a strong negative bias. Similarly, the S-Learner (for all three types of nuisance estimators) and for DML using random forest or lasso exhibit a negative bias and too narrow confidence intervals. In contrast, DML using linear nuisance estimator (without sparsity-inducing penalty) yields less biased estimates with good coverage due to its properly adapted confidence intervals.
Figure 9. Label Confounding (X-Ray): Comparison of ATE estimators on the $X$ -Ray dataset. DML & S-Learner use pre-trained representations and three types of nuisance estimators: linear models without $L _ { 1 }$ -penalization (Linear), linear models with $L _ { 1 }$ - penalization (Lasso), as well as random forest $( R F )$ . Point estimates and $9 5 \%$ CIs are depicted.
Complex Confounding (IMDb) A similar pattern can be observed for the Complex Confounding setting on the IMDb data depicted in Figure 10. The naive estimator and both of the random forest-based ATE estimators exhibit strong bias. In contrast, both neural network-based estimators show very little bias. This provides further evidence that neural networks can adapt to the low intrinsic dimension of the data. However, unlike the DML estimator, the S-Learner still produces overly narrow confidence intervals and thus has poor coverage. As in the example discussed in the main body of the text, the DML (NN) estimator is the only one that yields unbiased estimates and valid inference.
Figure 10. Complex Confounding (IMDb): Comparison of ATE estimators on the IMDb dataset. DML & S-Learner use pre-trained representations and either neural network (NN) or random forest $( R F )$ based nuisance estimators. Point estimates and $9 5 \%$ CIs are depicted.
# D.2. DML with and without Pre-Training
The experiment on DML with and without pre-trained representations explored the benefits of pre-training for DML and was discussed in Section 6.3. We extend this line of experiment by considering different sample sizes, as well as neural network architectures for the non-pre-trained model. While the DML (CNN) estimator in Figure 7 uses five-layer CNNs for the nuisance function estimation, the DML (CNN) estimator in Figure 11 uses a slightly simpler model architecture of two-layer CNNs with ReLU activation and max-pooling, followed by fully connected layers as model head. The slightly less complex model architecture requires fewer neural network parameters to be trained, which might be beneficial in the context of the X-Ray dataset, considering the comparably small sample size available for model training.
Overall, Figure 11 confirms the previous finding that DML with pre-trained representations performs much better than DML without pre-training. While the former yields unbiased ATE estimates, the ATE estimates of the latter show a strong negative bias. As the two plots show, this result is independent of the model architecture and sample size used.
Figure 11. DML with/out Pre-Training: Comparison of DML using pre-trained representations “DML (Pre-trained)” and DML without pre-training “DML (CNN)” for ATE estimation using 500 (Left) and all 3769 (Right) images from the X-Ray dataset. Point estimates and $9 5 \%$ CIs are depicted.
# D.3. Asymptotic Normality of DML ATE Estimation
This line of experiments explores the asymptotic normality of the DML estimator. For this purpose, we extend the ATE estimation experiments of Figure 9 with Label Confounding and Figure 5 with Complex Confounding that are based on the X-Ray dataset. While the two figures depict the estimates of different ATEs over 5 simulation iterations, we repeat both experiments with 200 iterations and collect the ATE estimates. We standardize each estimate and plot the corresponding empirical distribution. The results are shown in Figure 12. The left plot depicts the empirical distribution of the 200 standardized point estimates of the Oracle and Naive estimator, as well as DML with linear nuisance estimator in the Label Confounding experiment. The right plot displays the empirical distribution of the 200 standardized point estimates of the Naive, S-Learner, and DML estimator in the Label Confounding experiment. The latter two use neural network-based nuisance estimation. While the distributions of the Naive and S-Learner estimators show a strong bias, the distribution of the DML approach matches the theoretical standard normal distribution in both experiments.
Figure 12. Asymptotic Normality of DML: Comparison of the empirical distributions of standardized point estimates of different ATE estimators on the $X$ -ray dataset. Left: Distribution of 200 standardized point estimates of the Naive, Oracle, and DML with linear nuisance estimation from the Label Confounding experiment. Right: Distribution of 200 standardized point estimates of the Naive, S-Learner with NN-based nuisance estimation (S-Learner) and DML with NN-based nuisance estimation from the Complex Confounding experiment.
# D.4. Effects of the Hierarchical Composition Model (HCM) Structure on Estimation
In this experiment, we investigate the effect of the Hierarchical Composition Model (HCM) structure in the context of ATE estimation. The HCM was formally introduced in Definition 5.1 and later used in Theorem 5.5 to derive convergence rates of neural network-based estimation. This result showed that the convergence rate of neural networks is determined by the worst-case pair that appeared in the constraint set of the HCM. The core benefit of the HCM in our context is, that it constitutes a very flexible class of functions, while at the same time, it enables to obtain fast convergence rates in case the target function factors favorably according to the hierarchical structure of the HCM. The latter could be fulfilled in case each composition in the hierarchical structure only depends on a few prior compositions. This structural sparsity would improve the worst-case pair in the constraint set and thereby could allow for obtaining fast convergence rates.
Now, in the DML ATE estimation, one might be interested in what happens when the nuisance functions do not factor according to the HCM structure, such that sufficiently fast convergence rates for the nuisance estimation, required in the ATE estimation, can be achieved. We simulate such a scenario by extending the previously introduced setup of Complex Confounding. Instead of simulating confounding based on low-dimensional encodings from a AE trained on the pre-trained representations from the X-Ray experiments (as done in some of the previous experiments), we do this directly based on the pre-trained representations. For this purpose, we define the nuisance functions (outcome regression and propensity score) to depend on the product of all features in each pre-trained representation. Hence, the nuisance functions depend on all 1024 features, thereby mimicking the curse of dimensionality scenario discussed in Section 4. Further, the nuisance functions are constructed such that bias is introduced in the ATE estimation. In the following, we estimate the same set of ATE estimators that were previously used in the Complex Confounding experiments.
The results are depicted in Figure 13. All estimators show substantial bias (even the DML approach), given that none of the estimators is able to properly adapt to this complex type of confounding structure. The results are a validation of the fact that no (nuisance) estimator can escape the curse of dimensionality without utilizing certain beneficial structural assumptions.
Figure 13. Comparison of ATE estimators: DML & S-Learner use pre-trained representations and either neural network (NN) or random forest (RF) based nuisance estimators. Point estimates and $9 5 \%$ CIs are depicted. | There is growing interest in extending average treatment effect (ATE)
estimation to incorporate non-tabular data, such as images and text, which may
act as sources of confounding. Neglecting these effects risks biased results
and flawed scientific conclusions. However, incorporating non-tabular data
necessitates sophisticated feature extractors, often in combination with ideas
of transfer learning. In this work, we investigate how latent features from
pre-trained neural networks can be leveraged to adjust for sources of
confounding. We formalize conditions under which these latent features enable
valid adjustment and statistical inference in ATE estimation, demonstrating
results along the example of double machine learning. We discuss critical
challenges inherent to latent feature learning and downstream parameter
estimation arising from the high dimensionality and non-identifiability of
representations. Common structural assumptions for obtaining fast convergence
rates with additive or sparse linear models are shown to be unrealistic for
latent features. We argue, however, that neural networks are largely
insensitive to these issues. In particular, we show that neural networks can
achieve fast convergence rates by adapting to intrinsic notions of sparsity and
dimension of the learning problem. | [
"stat.ML",
"cs.AI",
"cs.LG",
"stat.CO",
"stat.ME"
] |
# 1 Introduction
Large multimodal models (LMMs) such as GPT-4o [1] exhibit omni-capabilities across text, vision, and speech modalities, unlocking broad potential across applications. Compared to vision-oriented LMMs [2, 3], omni-modal LMMs can support speech interaction based on visual information. Furthermore, advanced online services like GPT-4o can offer a seamless “see-while-hear” interaction for users by simultaneously providing intermediate text (i.e., transcription of user inputs and model responses) during speech interaction, which highlights the importance of building LMMs that can simultaneously support interactions through various modality combinations.
However, building LMMs that support text, vision, and speech remains a substantial challenge due to the intrinsic representational discrepancies across modalities. Most existing LMMs specialize in
Speech Out Speech Out Top Speech Layers -- Transformer Layer 40 iSnitmerulmtaednieaoteustleyxtproesdulctse Speech Decoder Transformer Layer 36 Text Out Response Text Out Large Language Model + It’s sunny today… Transformer Layer 35 Large Language Model Optional layer-dimension mapping Transformer Layer 4 for speech-text ? CTC Text Out
Visio+n PErnocjoder Speec+hPErnocjoder Embedding Vision In Transformer Layer 3 SHtorweisatmheiwnhgetAheSr?R whether? Transformer Layer 1 服 -- Hwohwetihsetrh?e Text In Bottom Speech Layers
Vision In Speech In Text In Speech In (autoregressive speech units) sequence-dimension concatenation sequence-dfiomrevnision-tceoxntcatenation
(a) Sequence-dimension concatenation for modality alignments in previous works
either vision [4, 3, 5–7] or speech [8–11], feeding the extracted modality representations into the context of large language model (LLM) backbone. Recently, some omni-modal LMMs [12–14] aim to integrate text, vision, and speech within a unified framework. Such models typically concatenate representations from individual modality encoders along the sequence dimension before feeding them into the LLM backbone, as shown in Figure 1(a). These concatenation-based approaches simplify modality integration, but they heavily rely on large-scale data to learn modality alignments in a datadriven manner [10, 11, 13, 14], which is not friendly to limited public tri-modal data. Moreover, such concatenation-dimension alignments are not flexible enough to simultaneously produce intermediate text results during speech interactions, as GPT-4o does.
To this end, we aim to model the relationships between modalities more purposefully, thereby achieving more efficient and flexible modality alignments. In multimodal interaction, text, vision, and speech modalities serve different roles, where vision primarily conveys visual information [3], while text and speech focus on language information [9]. As such, directly concatenating all three modalities in sequence-dimension is suboptimal for modality alignments. Ideally, the speech and text should exhibit high semantic consistency, while the vision is semantically complementary to the text. Therefore, vision and speech should be separately aligned to text in different ways.
Along with this idea, we introduce Stream-Omni, a language-vision-speech LMM based on efficient text-centric modality alignments, which can flexibly support interactions under various modality combinations. As shown in Figure 1(b), Stream-Omni is built upon the LLM backbone and aligns the vision and speech modalities to text using different mechanisms. For vision, which is semantically complementary to text, Stream-Omni employs sequence-dimension concatenation for vision-text alignment. For speech, which shares higher semantic consistency with text, Stream-Omni introduces a layer-dimension speech-text mapping for speech-text alignment. Specifically, Stream-Omni takes LLM as the core and introduces bottom and top speech layers to model speech-to-text mapping via Connectionist Temporal Classification (CTC) [15], thereby enabling external interaction through the speech modality and simultaneous internal generation via the text modality. With speech–text mapping, Stream-Omni can transfer the text capability of LLM backbone to the speech modality with less speech data. As a byproduct, Stream-Omni can simultaneously produce intermediate text results (i.e., transcription of instruction and response) during speech interaction, offering a more comprehensive multimodal experience. We evaluate Stream-Omni on various benchmarks covering visual understanding, speech interaction, and vision-grounded speech interaction, and the results demonstrate that Stream-Omni achieves strong performance using only 23,000 hours of speech data.
# 2 Related Work
Existing large multimodal models can be categorized into three types: vision-oriented, speechoriented, and omni-modal. For vision-oriented LMMs, LLaVA [3] is the most widely adopted architecture. In LLaVA, a vision encoder (CLIP [16]) is used to extract visual features from visual inputs, which are then concatenated with the text inputs and fed into LLM to generate text responses.
Speech Out Interactions under various modality combinations Stream-Omni Speech Decoder (a) Text + Vision (Optional) → Text □□□□□ · Vision In Text Representation Top Speech Layers Text In Large Language Model Text Out Vision Representation Alignment-based Fusion (b) Speech $^ +$ Vision (Optional) → Speech (Text) Speech Representation □□□□□□□□□□□□□□ (simultaneously produce intermediate ASR/output text) · It’s sunny today… Vision In Large Language Model Text Out Speech In Bottom Speech Language Large → Speech Top Speech Out Layers Model Layers ! 00000000 ASR Results Text Out Projection <blank> <blank> How autoregressive Vision Encoder weather <blank>? CTC Decoder ASR Results (c) Text + Vision (Optional) → Speech (Text) (simultaneously produce intermediate output text) Vision In Bottom Speech Layers … ViTseixotnIInn Text Embddin…g <2> <331> <42> <4023> <532> <72> <72> <965> <323> <3424> <39> <985> <2542> <865> (Sopnelye couhtpOut) BSLpoateyteorcsmh e LaMLnaogrudgaeg SLpaTeyoeprcsh Speech Out Text In Hwoewatihsetrh?e Speech In autoregressive Text Out
Based on LLaVA, the following works improve the vision-oriented LMMs through improved training data [5, 17, 18], enhanced image encoding [19, 4, 20], and extended video understanding [21–25].
For speech-oriented LMMs, existing methods rely on either continuous or discrete speech units. Methods based on continuous representations, such as Mini-Omni [8], LLaMA-Omni [9], and FreezeOmni [26], use a speech encoder (e.g., Whisper [27]) to extract speech features, which are then projected into the LLM’s embedding space to facilitate speech understanding. These approaches often incorporate a speech decoder to generate speech responses based on LLM’s text outputs. Methods based on discrete units, such as SpeechGPT [28], Moshi [10] and GLM-4-Voice [11], employ a speech tokenizer [29–31] to convert speech into discrete units, allowing the LLM to directly understand and generate speech units, which are finally synthesized into speech using a unit-based speech decoder [32, 31]. Compared to continuous representations, discrete units can be jointly modeled with text in LLM’s context, but they often rely on more speech data for speech pre-training [33, 11].
Existing omni-modal LMMs, such as VITA-1.5 [12], MiniCPM2.6-o [7], Baichuan-Omni [13], Qwen2.5-Omni [14], use various encoders to extract the modality representations, which are then concatenated and fed into the LLM to facilitate multimodal understanding, and finally a speech decoder is employed to synthesize speech from the generated text. Such methods typically model modality alignments in a data-driven manner. In contrast, Stream-Omni models the relationships between modalities more purposefully, thereby achieving efficient and flexible modality alignments.
# 3 Stream-Omni
We introduce Stream-Omni, a language–vision–speech LMM based on text-centric modality alignments. Stream-Omni aligns vision and speech to the text modality via sequence-dimension concatenation and layer-dimension mapping, respectively, thereby achieving efficient and flexible modality alignments. The architecture, training, and inference of Stream-Omni are introduced as follows.
# 3.1 Architecture
The architecture of Stream-Omni is illustrated in Figure 2. Stream-Omni adopts the LLM as its backbone and progressively aligns the vision and speech to the text, efficiently developing a LMM that supports text, vision, and speech. For vision-text alignment, Stream-Omni applies a vision encoder and projection to extract visual representations, which are then concatenated with the text tokens. For speech-text alignment, Stream-Omni introduces several speech layers at the bottom and top of LLM backbone to respectively map speech to the text and generate speech based on the text.
# 3.1.1 Vision Modality
Given the semantic complementarity between the vision and text modalities, Stream-Omni adopts a sequence-dimension concatenation for vision-text alignment, which is commonly employed in
vision-oriented LMMs [3, 5, 34]. Specifically, Stream-Omni introduces the vision encoder and projection to convert visual inputs into visual representations, which are then concatenated with text representations and jointly fed into the LLM to facilitate visual understanding.
# 3.1.2 Speech Modality
Compared to vision, aligning speech and text is more challenging due to the greater variability of speech representations and the relative scarcity of speech data. To address this, Stream-Omni leverages the higher semantic consistency between speech and text, employing a speech-text mapping to facilitate alignment through more direct supervision.
To achieve this, Stream-Omni incorporates an $N$ -layer LLM backbone as the inner core, with $N _ { s p e e c h } ^ { b o t t o m }$ tshpeetcohp lfaoyretresxta-dtod-esdpteoecthembaopttpoinmg.foOrvsepreaellc,hS-tore-taemxt-Omapnpiienxgteands yspere eLcLh lMa yinetros $N$ a $( N _ { \mathrm { s p e e c h } } ^ { \mathrm { b o t t o m } } + N + N _ { \mathrm { s p e e c h } } ^ { \mathrm { t o p } } )$ stpoepech)-layer decoder-only architecture, and leverages multi-task learning to separate different layers into different functions of speech-to-text mapping, text-to-text generation, and text-to-speech mapping. During inference, Stream-Omni autoregressively generates speech at the outermost layer, while relying on the LLM backbone at the inner layers for response generation. In this way, Stream-Omni preserves the generative capabilities and knowledge within the LLM core, while effectively broadening its interaction modalities, avoiding the high cost of using large-scale speech data to relearn textual knowledge. The speech interaction process in Stream-Omni includes speech tokenizer, speech-text mapping, text generation, and streaming speech generation.
Speech Tokenizer To enable the mapping with text token, Stream-Omni employs the pre-trained CosyVoice speech tokenizer [31] to discretize the raw speech $S$ into a sequence of discrete speech units $U = ( u _ { 1 } , \cdot \cdot \cdot , u _ { | U | } )$ :
$$
U = { \mathrm { S p e e c h T o k e n i z e r } } ( S ) ,
$$
where SpeechTokenizer $\cdot ( \cdot )$ denotes speech tokenizer, with the speech units vocabulary $\mathcal { V } ^ { \mathrm { U } }$ . To joint modeling speech and text, we extend the vocabulary by merging the speech unit vocabulary $\mathcal { V } ^ { \mathrm { U } }$ with the LLM’s text vocabulary $\mathcal { V } ^ { \mathrm { T } }$ , and introduce a special blank token $\left. { \mathrm { b l a n k } } \right.$ , yielding the multimodal vocabulary of Stream-Omni $\mathcal { V } ^ { \mathrm { o m n i } } = \mathcal { V } ^ { \mathrm { T } } \cup \mathcal { V } ^ { \mathrm { U } } \cup \left\{ \langle \sf b l a n k \rangle \right\}$ .
Speech-Text Mapping To take advantage of LLM’s capabilities, Stream-Omni introduces the bottom and top speech layers to learn the speech-text mapping, thereby transferring the text capabilities within LLM to the speech modality. Specifically, the bottom and top speech layers consist of $\dot { N } _ { s p e e c h } ^ { b o t t o m }$ and N top Transformer layers, which share the same configuration as the LLM backbone. The bottom speech layers $\mathcal { F } _ { s p e c h } ^ { b o t t o m } ( \cdot )$ maps the speech units $U$ to the text:
$$
H ^ { \mathrm { U } } = { \mathcal { F } } _ { s p e e c h } ^ { b o t t o m } ( U ) ,
$$
where $H ^ { \mathrm { U } }$ denotes the representation of the speech units. Then, to achieve speech-to-text mapping, Stream-Omni introduces a Connectionist Temporal Classification (CTC) [15] decoder $\mathrm { { C T C D e c } ( \cdot ) }$ to decode the text sequence from $H ^ { \mathrm { U } }$ :
$$
D ^ { \mathrm { U } } = \mathrm { C T C D e c } ( H ^ { \mathrm { U } } ) ,
$$
where $D ^ { \mathrm { U } } \in \mathbb { R } ^ { | U | \times | \mathcal { V } ^ { \mathrm { o m n i } } | }$ represents the probability distribution over the multimodal vocabulary for each speech unit, which can be decoded into a CTC sequence that includes repeated and blank tokens. During training, this module is optimized using the CTC loss:
$$
\mathcal { L } _ { C T C } = - \log \sum _ { Z \in \Pi ^ { - 1 } ( X ) } p ( Z \mid D ^ { \mathrm { U } } ) ,
$$
where $\Pi ^ { - 1 } ( X )$ denotes the set of all possible CTC sequences that map to the text sequence $X$ by removing repeated and blank tokens, and $p ( Z \mid D ^ { \mathrm { U } } )$ is the decoding probability of sequence $Z$ from $D ^ { \mathrm { U } }$ . At inference time, Stream-Omni can decode the CTC sequence from $D ^ { \mathrm { U } }$ to produce streaming speech recognition results as an intermediate output for user. More potentially, the CTC decoder holds promise for real-time speech interaction by detecting when the user has stopped speaking based on the consecutive blank tokens in the CTC sequence [35].
Text Generation Through CTC modeling, the bottom speech layers map the speech units into the text representation, achieving speech-text alignment at the representational level. To further bridge the structural gap between speech and text, Stream-Omni removes blank tokens $\left. \mathrm { b l a n k } \right.$ from $H ^ { \mathrm { U } }$ to produce the refined sequence $\hat { H } ^ { \mathrm { U } }$ . To preserve the model’s understanding of the speech inputs, this blank token removal is only performed during the generation phase (i.e., generated speech).
The processed speech representation $\hat { H } ^ { \mathrm { U } }$ is then concatenated with the visual representation $H ^ { \mathrm { V } }$ (if has visual inputs) and fed into the LLM backbone $\mathcal { F } _ { l l m } ( \cdot )$ to generate the text representation $H ^ { \mathrm { T } }$ :
$$
H ^ { \mathrm { T } } = \mathcal { F } _ { l l m } ( [ H ^ { \mathrm { V } } : \hat { H } ^ { \mathrm { U } } ] ) ,
$$
where $[ \cdot : \cdot ]$ is sequence concatenation. Owing to the semantic alignment via CTC modeling, StreamOmni can transfer text intelligence to the speech modality while preserving the text capabilities.
Streaming Speech Generation While autoregressively generating the text outputs, Stream-Omni uses top speech layers to generate the corresponding speech units in a streaming manner. To ensure consistency between the generated speech and text, we introduce an alignment-based fusion to use text information to guide speech unit generation.
As illustrated in Figure 3, the top speech layers take the speech representations $H ^ { \mathrm { U } }$ from bottom speech layers and text representations $H ^ { \mathrm { T } }$ from the LLM backbone as the inputs, where each layer comprises self-attention, alignmentbased fusion, and FFN. The alignment-based fusion module fuses the text representations $\dot { H } ^ { \mathrm { T } }$ into the speech representations $H ^ { \mathrm { U } }$ , thereby achieving text-to-speech mapping. However, to enable streaming generation, the key challenge lies in accurately identifying which text corresponds to each speech unit, thereby generating the speech units once the related text token is generated.
Figure 3: Diagram of top speech layers.
Fortunately, the CTC decoder introduced in Stream-Omni can naturally capture the positional alignment between speech and text [35], which can be used to guide the alignment-based fusion. Formally, based on the CTC sequence $D ^ { \mathrm { U } }$ , Stream-Omni computes the number of aligned text tokens (excluding duplicate and blank tokens) corresponding to the speech sequence up to unit $u _ { i }$ , denoted as ${ \mathcal { N } } _ { i }$ . That is, within the first $i$ speech units $U _ { \leq i }$ , Stream-Omni identifies the first ${ \mathcal { N } } _ { i }$ text tokens $X _ { \leq \mathcal { N } _ { i } }$ . Accordingly, when autoregressively generating the next speech unit $u _ { i + 1 }$ , Stream-Omni should use the next text token $x _ { \mathcal { N } _ { i } + 1 }$ to guide the generation of speech unit $u _ { i + 1 }$ .
In practice, to involve richer text context, Stream-Omni extends the fusion window from the aligned text token $x _ { \mathcal { N } _ { i } + 1 }$ to its preceding $W - 1$ tokens, where $W$ is the hyperparameter of window size. The alignment-based fusion is implemented via cross-attention, with the speech representations attending to the text representations, so the fused representation $h _ { i } ^ { f u s i o n }$ of speech unit $u _ { i }$ is calculated as:
$$
h _ { i } ^ { f u s i o n } = \mathrm { C r o s s A t t n } \left( u _ { i } , H _ { \mathcal { N } _ { i } + 2 - W : \mathcal { N } _ { i } + 1 } ^ { \mathrm { T } } \right) ,
$$
where HTi+2 W : i+1 are W text representations within the local window (W = 5 in Stream-Omni). To reducNe gen−eratiNon latency, similar to the widely used wait- $\mathbf { \nabla } \cdot \mathbf { k }$ policy in simultaneous translation [36–40], Stream-Omni begins streaming speech generation after lagging $K$ text tokens ( $K = 3$ in Stream-Omni). Therefore, the first speech unit will be generated immediately after $K$ text tokens have been produced. Using the top speech layers Fstpoepech(·), Stream-Omni can simultaneously generate both text and the corresponding speech units:
$$
\hat { U } = \mathcal { F } _ { s p e e c h } ^ { t o p } \left( H ^ { \mathrm { U } } , H ^ { \mathrm { T } } \right) ,
$$
where $\hat { U }$ denotes the generated speech unit sequence. Finally, a CosyVoice speech decoder [31] is used to synthesize the speech waveform from the generated speech units.
# 3.2 Training
Stream-Omni achieves efficient alignment across text, visual, and speech modalities, thus requiring only a small amount of tri-modal training data. Given the scarcity of existing datasets that jointly incorporate all three modalities, we first construct a tri-modal corpus consisting of text, images, and speech through an automated pipeline. Then, Stream-Omni adopts a three-stage training strategy to progressively align the text, visual, and speech modalities.
Table 1: Training stages and data of Stream-Omni.
# 3.2.1 Data Construction
The training of Stream-Omni involves text-vision, text-speech, and text-vision-speech multimodal datasets to support interactions across various modality combinations. For text-vision data, we adopt the LLaVA [3] and the LLaVA-OV dataset [6], while filtering out samples involving maths, code, and other content unsuitable for speech interaction. For text-speech data, we use automatic speech recognition (ASR) corpora from LibriSpeech [41] and WenetSpeech [42] to train bottom speech layers. Given the scarcity of public speech interaction data, we construct speech interaction dataset by converting existing text-only and vision-language instruction datasets into speech interactions datasets using open-source text-to-speech synthesis (TTS) [31], named InstructOmni2. The construction details are introduced in Appendix A. Table 1 summarizes the used training data (only 23K hours of speech), where those marked with superscript ‘tts’ indicate synthesized speech interaction dataset.
# 3.2.2 3-Stage Training
Stream-Omni is initialized using a LLM and adopts a three-stage training strategy, which aligns vision and speech with the text in succession, and then models alignments across three modalities.
Stage 1: Vision-Text Alignment In this stage, Stream-Omni uses the standard training method used in vision-oriented LMMs such as LLaVA [3].
Stage 2: Speech-Text Alignment In this stage, the speech-text alignment is achieved by training the bottom and top speech layers using a combination of CTC loss after the bottom speech layers (refer to Eq.( 4)) and cross-entropy loss after the top speech layers. Note that, the text representations fed into the top speech layers during training (i.e., $H ^ { \mathrm { T } }$ in Eq.(6)) are drawn from ground-truth transcriptions rather than LLM generated text, which aim to avoid text-speech dismatching caused by generating incorrect text, thereby enhancing the consistency of text-to-speech generation.
Stage 3: Text-Vision-Speech Alignment Finally, we train the LLM backbone of Stream-Omni using constructed tri-modal data through multi-task learning. Specifically, we formulate multiple tasks by combining different modalities, including Vision $^ +$ Text $$ Text, Vision $^ +$ Speech $$ Text, and Vision $^ { + }$ Speech $$ Speech, which are all optimized using the cross-entropy loss. In this way, StreamOmni is able to flexibly support interactions under various modality combinations.
# 3.3 Inference
Algorithm 1 gives the inference process of Stream-Omni when performing vision-grounded speech interaction. Given vision input $V$ and speech input $S$ , Stream-Omni generates the text token $y$ in an autoregressive manner, and simultaneously synthesizes the corresponding speech of $y$ . During speech synthesis, Stream-Omni autoregressively generates speech units $u$ based on $y$ , until the entire speech corresponding to $y$ is generated. To determine whether the generated speech units for $y$ are complete, Stream-Omni leverages alignment in the CTC decoder (in Eq.(3)). If the CTC decoder identifies a new text token from the generated $u$ (i.e., the semantics of the generated speech are complete), the model proceeds to generate the next text token. Otherwise, the model continues to generate speech units for the current $y$ . Stream-Omni repeats the above process until $\left. \cos \right.$ is generated.
# Algorithm 1 Inference of Stream-Omni
Input: Speech input $S$ , Vision input $V$ , Fusion window size $W$ , Lagging text tokens $K$
Output: Generated speech output $\widehat { S }$
Init: ASR results (CTC sequence) $\widehat { A } = \left[ \begin{array} { l l l } \end{array} \right]$ ; Generated text tokens ${ \widehat { Y } } = [ ]$ ; Generated speech units $\widehat { U } = \left[ \begin{array} { l l l l l } \end{array} \right]$
1: Extract visual representation $H ^ { \mathrm { V } }$ from $V$ using the vision encobder and projection;
2: Extract speech units $U$ from $S$ using the speech tokenizer;
3: $H ^ { \mathrm { U } } \dot { \mathcal { F } } _ { s p e e c h } ^ { b o t t o m } ( U )$ ; $\cdot$ simultaneously produce ASR results of speech inputs
4: while $\widehat { Y } [ - 1 ] \neq \langle e o s \rangle$ do
5: $y \gets \mathcal { F } _ { l l m } ( [ H ^ { \vee } : \hat { H } ^ { \cup } : \widehat { Y } ] )$
6: $\widehat { Y } . \mathrm { a p p e n d } ( y )$ ; ▷ simultaneously produce text outputs
7: ibf $| \widehat { Y } | < K$ then continue; ▷ lagging $K$ text tokens
8: // Gbenerate speech units corresponding to $y$ until the text token is recognized in the generated speech
9: while $\widehat { A } [ - 1 ] = = \langle b l a n k \rangle$ or ${ \widehat { A } } [ - 1 ] = = { \widehat { A } } [ - 2 ]$ do ▷ generate speech for text $y$
10: G nberate speech unit $u$ b ebd on $H ^ { \mathrm { U } }$ nbd $\widehat { Y } [ - W : ]$ based on Eq.(6);
11: $\widehat { U }$ .append(u);
12: $a \gets$ argmax CTCDec(F sbpoetetcohm(U)); $D$ recognize text from generated speech
13: A.append(a);
14: Syntbhesize speech $s$ from $\widehat { U }$ using the speech decoder;
15: $\ddot { S }$ .append(s);
16: retubrn $\widehat { S }$
Besides vision-grounded speech interaction, Stream-Omni also supports interaction of various modality combinations. As shown in Figure 2(right), by flexibly integrating the vision encoder, bottom speech layers, LLM, and top speech layers, Stream-Omni can support various multimodal scenarios.
# 4 Experiments
# 4.1 Benchmarks
We evaluate the multimodal capabilities of Stream-Omni across vision and speech benchmarks. For vision evaluation, we conduct experiments on 11 benchmarks used by LLaVA, including VQA-v2 $\mathrm { ( V Q A } ^ { \mathrm { v 2 } } )$ [43], GQA [44], VizWiz [45], ScienceQA-IMG (SciQA) [46], TextVQA (VQAT) [47], POPE [48], MME [49], MMBench (MMB) [50], SEED-Bench (SEED) [51], LLaVA-Bench-in-theWild (LLaVAW) [52], and MM-Vet [53]. All evaluations follow LLaVA [3] to ensure comparability. For speech evaluation, we assess the model’s knowledge-grounded speech interaction on spoken question answering benchmarks, Llama Questions (Llama Q.) [54] and Web Questions (Web Q.) [55], where the metric is the accuracy that whether the model’s response matches the ground-truth answer.
To further assess Stream-Omni’s vision-grounded speech interaction capabilities, we construct a real-world visual-speech interaction benchmark based on the real-world VQA benchmark VisIT [56], named SpokenVisIT3. Following Fang et al. [9], the evaluation for SpokenVisIT employs the GPT model (gpt-4o version) to assign a score ranging from 1 to 5 for response. Appendix B gives the details of SpokenVisIT benchmark. Following previous works [9, 11], all speech evaluations are further divided into speech-to-text $\mathrm { \nabla { S \to T } } )$ and speech-to-speech $( S { } S )$ settings. For generated speech responses, we use Whisper-large-v3 [27] to transcribe the speech into text for evaluation.
# 4.2 Baselines
We compare Stream-Omni with vision-oriented, speech-oriented, and omni-modal LMMs of similar model scale and training data size. Vision-oriented LMM baselines include models comparable in scale to LLaVA-v1.5 [3], such as BLIP-2 [57], InstructBLIP [58], IDEFICS [59], Qwen-VL [17], Qwen-VL-Chat [17], SPHINX [19], and mPLUG-Owl2 [20]. Speech-oriented LMM baselines include TWIST [60], SpeechGPT [28], Spectron [54], Moshi [10], Freeze-Omni [26], LLaMA-Omni [9], and GLM-4-Voice [11]. Most existing omni-modal LMMs are trained on large-scale proprietary datasets. For a fair comparison, we compare Stream-Omni with VITA-1.5 [12], a text-vision-speech LMM trained on a comparable amount of data, primarily based on LLaVA [3] and LLaVA-OV [6].
Table 2: Results on visual understanding benchmarks.
# 4.3 Configuration
Stream-Omni is built upon the LLaMA-3.1-8B-Instruct4 [61], which consists of 32 Transformer layers. For vision, Stream-Omni employs the SigLIP-so400m-patch $\mathsf { I } 4 \mathrm { - } 3 8 4 ^ { 5 }$ [62] as the vision encoder. For speech, Stream-Omni incorporates the bottom speech layers with 3 Transformer layers and top speech layers with 5 Transformer layers, where all Transformer layers share the same architecture and parameter configuration as those in LLM. The speech tokenizer and flow-matching-based speech decoder are adopted from CosyVoice-300M-25Hz [31]. The vocabulary of Stream-Omni comprises 128K text tokens from LLaMA-3.1-8B-Instruct, 4096 speech units from the CosyVoice tokenizer, and a blank token ⟨blank⟩. Stream-Omni is trained using 8 H800 GPUs and tested on 1 A100 GPU.
# 5 Results and Analyses
# 5.1 Visual Understanding
We evaluate the visual understanding capabilities of Stream-Omni in Table 2. Compared to advanced vision-oriented LMMs and VITA-1.5 [12], Stream-Omni demonstrates strong visual capabilities on various visual tasks. More importantly, despite being a unified model that simultaneously supports vision, speech, and text, Stream-Omni achieves performance comparable to vision-oriented LMMs, indicating its effectiveness in mitigating modality interference.
# 5.2 Speech Interaction
To verify whether Stream-Omni can acquire speech capabilities and knowledge with a small amount of speech data, we conduct experiments on knowledge-based LLaMA Question and Web Question, covering both speech-to-text $( \mathsf { S } \to \mathsf { T } )$ 0 and speech-to-speech $( \boldsymbol { \mathsf { S } } \mathrm { } \boldsymbol { \mathsf { S } } )$ tasks. As shown in Table 3, Stream-Omni demonstrates strong knowledge-based speech interaction performance. Speech-oriented LMMs based on discrete speech units, such as SpeechGPT, Moshi, and GLM-4- Voice, typically rely on speech pretraining to acquire knowledge from large-scale speech data [28, 10, 11], Stream-Omni achieves superior
Table 3: Results on spokenQA benchmarks.
knowledge-based speech interaction with significantly less speech data of 23K hours, particularly in the speech-to-text setting. This advantage primarily stems from the CTC-based speech-to-text mapping in Stream-Omni, which effectively transfers the text knowledge within LLM to the speech modality and thereby supports knowledge-based speech interaction in more efficient manner.
# 5.3 Vision-grounded Speech Interaction
We evaluate the vision-grounded speech interaction of Stream-Omni on the SpokenVisIT benchmark in Table 4. As the omni-modal LMMs with similar training data, StreamOmni demonstrates superior real-world visual understanding capabilities compared to VITA-1.5. In addition, StreamOmni supports speech generation, extending its potential for multimodal interaction. Appendix C gives specific case studies, demonstrating the advantages of Stream-Omni’s speech-text mapping in cross-modal consistency.
Table 4: Results on SpokenVisIT (‘V’: vision, ‘T’: text, ‘S’: speech).
# 5.4 Quality of Speech-Text Mapping
Stream-Omni introduces the auxiliary ASR task to train the bottom speech layers and CTC decoder, thereby learning effective speech-to-text mapping. To evaluate the quality of mapping, we evaluate the ASR performance of Stream-Omni on the LibriSpeech benchmark [41]. As shown in Table 5, Stream-Omni achieves advantages in both accuracy and inference time. SpeechGPT [28], Freeze-Omni [26], and GLM-4-Voice [11] need to forward full LMM to autoregressively generating the ASR results. In contrast, StreamOmni generates the ASR results using its bottom
Table 5: Results on LibriSpeech benchmarks.
speech layers in a non-autoregressive manner, resulting in lower inference time for ASR task. More importantly, this layer-dimension allows Stream-Omni to simultaneously present intermediate ASR results during speech interaction, providing users with a more comprehensive interaction experience.
# 5.5 Effect of Alignment-based Fusion
Stream-Omni generates speech from text in a streaming manner using alignment-based fusion. To evaluate its effectiveness, we conduct the ablation study of alignment-based fusion on Llama Questions and Web Questions benchmarks $( { \mathsf { S } } \mathrm { } { \mathsf { S } } )$ in Table 6, focusing on the fusion type and the fusion window.
Fusion Type For the fusion type, we compare the current cross-attention (named “Attention”) with adding aligned text representations to the input (named “Add (input)”) or each layer (named “Add (per layer)”)
Table 6: Analysis on alignment-based fusion.
of the top speech layers. Results show that the attention-based approach outperforms the others, mainly due to its ability to attend to a broader context rather than merely adding a single text token.
Fusion Window For the fusion window, we find that attending to either very few or all text tokens during speech generation is less effective than focusing on a moderate window of tokens, which is attributed to the inherent monotonicity and locality in text-to-speech generation. This is also in line with the widely used speech-text interleaved generation methods [33, 11, 63]. The difference lies in that previous methods achieve consistency between generated speech and the current text through interleaving along the sequence dimension, while alignment-based fusion ensures consistency by guiding the speech to attend to the current text along the layer dimension. | The emergence of GPT-4o-like large multimodal models (LMMs) has raised the
exploration of integrating text, vision, and speech modalities to support more
flexible multimodal interaction. Existing LMMs typically concatenate
representation of modalities along the sequence dimension and feed them into a
large language model (LLM) backbone. While sequence-dimension concatenation is
straightforward for modality integration, it often relies heavily on
large-scale data to learn modality alignments. In this paper, we aim to model
the relationships between modalities more purposefully, thereby achieving more
efficient and flexible modality alignments. To this end, we propose
Stream-Omni, a large language-vision-speech model with efficient modality
alignments, which can simultaneously support interactions under various
modality combinations. Stream-Omni employs LLM as the backbone and aligns the
vision and speech to the text based on their relationships. For vision that is
semantically complementary to text, Stream-Omni uses sequence-dimension
concatenation to achieve vision-text alignment. For speech that is semantically
consistent with text, Stream-Omni introduces a CTC-based layer-dimension
mapping to achieve speech-text alignment. In this way, Stream-Omni can achieve
modality alignments with less data (especially speech), enabling the transfer
of text capabilities to other modalities. Experiments on various benchmarks
demonstrate that Stream-Omni achieves strong performance on visual
understanding, speech interaction, and vision-grounded speech interaction
tasks. Owing to the layer-dimensional mapping, Stream-Omni can simultaneously
provide intermediate text outputs (such as ASR transcriptions and model
responses) during speech interaction, offering users a comprehensive multimodal
experience. | [
"cs.AI",
"cs.CL",
"cs.CV",
"cs.SD",
"eess.AS"
] |
# I. INTRODUCTION
cination, focuses on reconstructing high-resolution (HR) facial images from low-resolution (LR) inputs. FSR plays a critical role in real-world scenarios where LR faces may appear in surveillance videos, historical archives, or mobile imaging. Enhancing facial image resolution is essential for improving downstream tasks such as face recognition [1], face alignment [2], and facial attribute analysis [3].
Siyu Xu and Guangwei Gao are with the Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing 210046, China, Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai 200240, and also with the Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou 215006, China (e-mail: csggao $@$ gmail.com, xusiyu200107@163.com).
Wenjie Li is with the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100080, China (e-mail: lewj $2 4 0 8 @$ gmail.com).
Jian Yang is with the School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China (email: csjyang $@$ njust.edu.cn).
Guo-Jun Qi is with the Research Center for Industries of the Future and the School of Engineering, Westlake University, Hangzhou 310024, China, and also with OPPO Research, Seattle, WA 98101 USA (e-mail: guojunq $@$ gmail.com).
Chia-Wen Lin is with the Department of Electrical Engineering and the Institute of Communications Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan 30013, R.O.C. (e-mail: cwlin@ee.nthu.edu.tw).
Fig. 1: (a) Spectral energy distribution of features extracted by Mamba and CNN architectures, where Mamba exhibits concentrated low-frequency responses compared to CNN’s high-frequency sensitivity. Color intensity corresponds to normalized energy values. (b) Efficiency trade-offs on CelebA test set [4], demonstrating our method’s optimal balance between PSNR, Params, and inference speed.
One fundamental challenge in FSR lies in the longtailed distribution of image frequency components [5]. Highfrequency parts, such as the eyes, nose, mouth, and facial contours, account for a small portion of the facial area but require disproportionate modeling capacity due to their high variability. These regions are particularly sensitive to identity, illumination, and expression changes, making them significantly harder [6] to reconstruct than low-frequency areas such as skin, which contribute little to identity-specific features.. This imbalance results in a performance bottleneck, where failure to recover high-frequency content leads to perceptual degradation. However, most existing methods treat all pixels equally, ignoring the skewed distribution of visual complexity and importance. This uniform processing strategy leads to inefficient resource usage and suboptimal reconstruction quality. While some methods attempt to separate frequency components [7], [8], they often struggle to maintain a balanced representation across global and local facial structures.
To address these limitations, we aim to adaptively distinguish and process high- and low-frequency information. Meanwhile, we wonder about a dual-path feature fusion to help resolve the imbalance between local and global modeling that arises in previous frequency-aware methods, ultimately enabling more efficient and accurate FSR. As shown in Fig. 1a, we present the spectral energy maps generated by CNN [9], and Mamba [10]. Interestingly, Mamba exhibits broader responses in the low-frequency regions of facial images, whereas CNN tends to be more sensitive to high-frequency components. Moreover, compared to the Transformer [11] framework, Mamba achieves global modeling through scanning mechanisms, thus keeping the computational complexity within a linear range and avoiding the expensive quadratic cost. These observations lead us to consider an alternative design: could we develop a two-branch architecture that allows Mamba and CNN to each specialize in the frequency domains they are most suited for, while integrating their outputs to better balance global and local facial structure representation?
Based on the above observations and analysis, we propose a Frequency-Aware Dual-Path Network (FADPNet), which allocates distinct submodules to specialize in different frequency domains. In particular, we introduce the Low-Frequency Enhancement Block (LFEB) and High-Frequency Enhancement Block (HFEB) to extract and refine, respectively, stable identity-preserving low-frequency features and fine-grained, structure-critical high-frequency details. The LFEB integrates an Attentive State Space Block (ASSB) and a Squeeze-andExcitation Block (SEB) to enhance low-frequency features. At the same time, the HFEB combines a High-Frequency Refinement (HFR) module and a Depthwise Position-aware Attention (DPA) module to refine high-frequency details. These two modules integrate complementary modeling strategies—global and local—to better capture the intrinsic characteristics of facial components across different frequency levels. By decomposing and processing features based on their frequency significance, our approach not only enhances facial fidelity but also improves computational efficiency by focusing resources on perceptually important regions. As shown in Fig. 1b, our proposed FADPNet achieves superior efficiency and reconstruction quality compared to existing FSR methods. In summary, the main contributions are as follows:
We propose the LFEB, which combines ASSB and SEB to enhance facial low-frequency representations by capturing global structures and emphasizing informative channel-wise responses. We propose the HFEB, which combines HFR and DPA to strengthen local high-frequency facial structures and further capture long-range spatial dependencies to adaptively refine high-frequency contexts. • Based on the architecture designed from our frequency analysis, our FADPNet demonstrates competitive FSR ability relative to existing FSR methods across performance, model size, and inference speed.
# II. RELATED WORK
# A. Face Super-Resolution
Face super-resolution [12] has made notable strides with the advancement of deep learning. Early approaches [8], [13]– [15] leverage explicit facial priors to guide reconstruction. For instance, FSRNet [13] incorporated facial landmarks and parsing maps to enforce structural consistency, while FAN [8] applied a face alignment network with progressive training to enhance realism. DIC [14] introduced an iterative refinement loop combining facial components and landmark predictions. To improve the representation of facial spatial features, Hu et al. [15] first utilized 3D shape priors in the network to better preserve sharp facial spatial structures.
However, prior-based methods depend heavily on accurate prior estimation, which is unreliable at extremely LR scenes and adds computational overhead. To mitigate this, recent works have adopted attention-based [16] or data-driven strategies [17]. SPARNet [16] used spatial attention to focus on key facial regions. SISN [18] decoupled attention into structural and textural streams. AD-GNN [19] combined spatial attention with graph-based message passing to model feature dependencies. LAAT [20] introduced local-aware refinement within a Transformer framework to enhance detail, while SCTANet [21] jointly modeled spatial and channel attention. SFMNet [22] employed parallel frequency-spatial branches for multi-scale feature learning. WFEN [23] used waveletbased encoding to mitigate downsampling artifacts. However, existing FSR methods apply uniform processing across spatial regions, overlooking the varying complexity of frequency components. In contrast, our method introduces a frequencyaware dual-path design that adaptively handles high- and lowfrequency features at multiple scales, leading to more robust and detail-preserving reconstruction under LR conditions.
# B. Long-Range Modeling
Modeling long-range dependencies is vital for FSR, as facial structures span distant spatial regions. Transformerbased methods [20], [22], [24] have demonstrated strong global modeling via self-attention. For example, LAAT [20] enhanced fine-grained facial modeling via self-refinement in a Transformer framework, improving detail restoration. SFMNet [22] adopted spatial and frequency branches to jointly capture global and local features across diverse receptive fields. CTCNet [25] further enhanced FSR by fusing multiscale global representations extracted from Transformers.
However, vision Transformers [25]–[27] suffer from quadratic complexity, limiting their scalability. This challenge has prompted exploration of alternative long-range modeling with lower computational cost. Structured State-Space Models (SSMs) [28] have gained attention for their linear-time global modeling capability. Notably, Mamba [10] introduced a selective scan mechanism for efficient long-sequence processing while preserving strong global context, and has been successfully adapted to visual tasks [29], [30]. Building on this, MambaIR [31] extended Mamba to image restoration, alleviating issues such as local pixel forgetting and redundant channel representations. Variants like MambaLLIE [32] further demonstrate its adaptability to various low-level vision tasks.
Despite its efficiency, Mamba inherits a causal bias from its sequence-oriented design, which conflicts with the non-causal nature of super-resolution [31] and limits spatial modeling. To address this, MambaIRv2 [33] introduced direction-agnostic scanning for richer context modeling with linear scalability. Unlike existing Mamba-based methods [31]–[33], our approach incorporates a customized non-causal state-space backbone into the low-frequency branch, explicitly designed for face super-resolution to better preserve structural consistency and global facial context.
Fig. 2: Overview of our FADPNet, which adopts a U-Shape structure composed of HFEBs for high-frequency facial feature modeling and LFEBs for low-frequency facial feature enhancement.
# C. Frequency-Aware Super-Resolution
Frequency-aware SR methods have emerged as an effective paradigm for enhancing structure, particularly in edges and textures [34]–[36]. By decomposing image content into distinct frequency bands, these methods enable neural networks to more precisely model the unique characteristics of each frequency component [37]. This design aligns with the wellestablished observation that low-frequency components mainly represent smooth and homogeneous regions, whereas highfrequency components capture fine-grained details and edge features essential for perceptual quality [38].
Explicit transformation methods decompose the image reconstruction process into frequency-specific operations. For instance, wavelet-based methods like SRCliqueNet [38] applied a discrete wavelet transform to separate LR images into subbands, which are then processed by specialized subnetworks to sharpen high-frequency components while preserving lowfrequency structures. Similarly, FreqNet [35] operated in the discrete cosine transform domain, predicting HR frequency coefficients through learnable spectral decoders. In contrast, implicit methods achieve frequency awareness through architectural design. SRDN [39] used high-pass constrained convolutions and residual connections to emphasize high-frequency propagation, while FSN [34] adopted Octave Convolution to split features into frequency-aware streams. AFD [40] estimated feature drifting in the DCT frequency domain, adaptively enhances features. However, the above methods either concentrate solely on high-frequency enhancement [41] or fail to model the interaction between global low-frequency structures and local high-frequency details. To address this, we propose a dual-branch framework that separately models frequency-specific features, improving both reconstruction quality and computational efficiency.
# III. PROPOSED METHOD
# A. Overview of FADPNet
The proposed network architecture, as illustrated in Fig. 2, implements a hierarchical frequency-aware processing framework. It commences with an input image ILR ∈ R3×H×W from which shallow facial features $\mathbf { f } _ { 1 } ~ \in ~ \mathbb { R } ^ { C \times H \times W }$ are extracted. Our architecture adopts a three-level U-Net [42] structure, where each resolution stage includes a cascaded basic block that decomposes features into high- and low-frequency components for separate processing. The High-Frequency Enhancement Block (HFEB) restores fine-grained details using convolutions, Residual Blocks (RB), a High-Frequency Refinement (HFR) module, and a Depthwise Position-Aware Attention (DPA) module. The Low-Frequency Enhancement Block (LFEB) employs a dual-branch design: the Squeezeand-Excitation Block (SEB) enhances local identity cues, while the Attentive State Space Block (ASSB) captures global facial structure. Outputs are fused and refined through a Feed-Forward Network (FFN). To reduce spatial misalignment caused by downsampling and upsampling, we introduce an offset-based warping module that aligns coarse structures with fine-level details, improving cross-scale consistency. Finally, features from all levels are fused and dimensionality reduction to reconstruct an HR face image ISR ∈ R3×H×W .
Fig. 3: The architecture of Attentive State Space Block (ASSB)
# B. Low-Frequency Enhancement Block (LFEB)
Our Low-Frequency Enhancement Block (LFEB) is designed to model low-frequency facial features, such as smooth skin regions on the forehead, cheeks, and chin, which exhibit gradual intensity variations and contain minimal identityspecific details. Specifically, given a low-frequency feature $\bar { \mathbf { F } } _ { L } \in \mathbb { R } ^ { C \times H \times W }$ extracted from the frequency separation module. First, LFEB begins with a layer normalization for stable training, followed by a dual-branch structure: the Squeeze-andExcitation Block (SEB) [43] for adaptively calibrating feature channels to improve the focus on important features, while the Attentive State Space Block (ASSB) captures global facial structure. A learnable fusion scheme adaptively balances two feature streams, guided by scaling parameters optimized during training. The fused output is further processed by a FeedForward Network (FFN) [10] and combined with the block’s input via a residual connection. This design enhances identitysalient, low-frequency attributes such as coarse facial shape and alignment while preserving photorealistic appearance.
1) Attentive State Space Block (ASSB): As shown in Fig. 3, building upon the Mamba [11] framework, ASSB introduces non-causal global modeling through an integration of semantic perceptual sequence reorganization and a cue-guided attention mechanism, which is used to address the limitations of causal modeling in the traditional state-space. Specifically, it begins by applying a positional embedding to the input feature $\mathbf { F } _ { 1 } \in$ $\mathbb { R } ^ { C \times H \times W }$ to obtain a feature $\mathbf { F } _ { 2 } \in \mathbb { R } ^ { C \times H \times \bar { W } }$ :
$$
\mathbf { F } _ { 2 } = \mathbf { F } _ { 1 } \cdot { \boldsymbol { \sigma } } ( \operatorname { D W C o n v } _ { 3 \times 3 } ( \left( \operatorname { C o n v } _ { 1 } ( \mathbf { F } _ { 1 } ) \right) ) ,
$$
where $\mathrm { C o n v _ { 1 } }$ is a $1 \times 1$ convolution , $\mathrm { D W C o n v _ { 3 \times 3 } }$ denotes a $3 \times 3$ depthwise convolution and $\sigma ( \cdot )$ denotes sigmoid activation. After that, to transcend the causal constraints of standard Mamba architectures, we incorporate the learned prompts $\mathbf { P }$ into a conventional State-Space Equation (SSE), enabling richer spatial interactions beyond sequential dependency. The final prompt representation $\mathbf { P }$ consists of two components: a prompt pool $\mathbf { P } _ { \mathrm { p o o l } }$ and a prompt matrix $\mathbf { P _ { \mathrm { m } } }$ :
$$
\mathbf { P } = \mathbf { P } _ { \mathrm { m } } \cdot \mathbf { P } _ { \mathrm { p o o l } } .
$$
The prompt pool is parameterized via low-rank decomposition for efficiency. The shared basis ${ \bf { M } } _ { \mathrm { { A } } }$ captures cross
block semantic commonality, while block-specific $\mathbf { M } _ { \mathrm { B } }$ enables adaptive feature combination:
$$
\begin{array} { r } { \begin{array} { r l } { \mathbf { P } _ { \mathrm { p o o l } } = \mathbf { M } _ { \mathrm { A } } \cdot \mathbf { M } _ { \mathrm { B } } , } & { } \\ { \mathbf { M } _ { \mathrm { A } } \in \mathbb { R } ^ { T \times r } , \mathbf { M } _ { \mathrm { B } } \in \mathbb { R } ^ { r \times d } } & { ( r \ll \operatorname* { m i n } \{ T , d \} ) , } \end{array} } \end{array}
$$
where $r$ is the inner rank, $T$ is the number of prompts and $d$ is the number of hidden states in Mamba. Instance-specific prompt matrix is governed by a routing mechanism employing gumbel-softmax [44], which generates $L$ instance-specific prompts $\mathbf { P } _ { \mathrm { m } } \in \mathbb { R } ^ { L \times T }$ dynamically associated input pixels with relevant prompts from the pool:
$$
\mathbf { P } _ { \mathrm { m } } = \mathrm { g u m b e l - s o f t m a x } ( \mathrm { L o g S o f t m a x } \left( \mathbf { W } _ { \mathrm { p } } \cdot \mathbf { F } _ { 2 } \right) ) ,
$$
where $\mathrm { W _ { p } }$ denotes the linear layer, LogSoftmax refers to the logarithm of the softmax function, and gumbel-softmax is a gumbel softmax [44] operator. Then, the Semantic Guided Neighboring (SGN) unfolds this 2D feature $\mathbf { F } _ { 2 }$ into 1D sequences for SSE. SGN-Unfold mechanism redefines the traditional spatial-to-sequential transformation by dynamically reorganizing pixels into 1D sequences based on semantic similarity, effectively mitigating the long-range decay inherent to existing models [33]. Notably, the semantic information that the mechanism relies on during this process stems from the prompt matrix $\mathbf { P _ { \mathrm { m } } }$ :
$$
{ \mathrm { I n d e x } } = { \mathrm { a r g m a x } } ( \mathbf { P } _ { \mathrm { m } } , \mathrm { d i m } = - 1 ) .
$$
$$
\mathbf { F } _ { 1 D } = \mathrm { S N G } ( \mathbf { F } _ { 2 D } , \mathrm { I n d e x } ) .
$$
Tokens are subsequently sorted based on their semantic indices to group semantically similar tokens into contiguous sequences. Central to ASSB operation is SSE, a modified statespace formulation that transcends the causal constraints of standard Mamba architectures. SSE enhances the output matrix $\mathbf { C }$ by incorporating learnable semantic prompts—compact embeddings representing pixel groups with shared semantics. This formulation injects global contextual awareness into SSE:
$$
y _ { i } = ( \mathbf { C } + \mathbf { P } ) h _ { i } + \mathbf { D } x _ { i }
$$
Finally, an SGN-fold is applied as the inverse operation, reconstructing the semantically sorted sequence back into the original spatial layout to form the 2D feature. This reversion is guided by the previously computed reverse index, ensuring accurate spatial alignment. The restored feature is then passed through a linear projection to produce the final output feature ${ \bf F } _ { 3 }$ . Critically, this architecture enables single-directional scanning—unlike conventional bidirectional Mamba variants requiring multiple scans—reducing computational redundancy while preserving global coherence. This efficiency breakthrough not only accelerates inference but also facilitates seamless integration with subsequent enhancement stages, where the refined low-frequency features serve as stable anchors for high-frequency detail reconstruction.
# C. High-Frequency Enhancement Block (HFEB)
In our method, HFEB specializes in high-frequency detail recovery. Specifically, for the high-frequency feature $\mathbf { F } _ { H } ~ \in ~ \mathbb { R } ^ { C \times H \times W }$ obtained from frequency separation operator, we first apply a $1 \times 1$ convolution to reduce chanane $\textstyle { \frac { C } { 2 } }$ tnovolmuitniiomn zteo ecxopmanpdu acthioananl cs tstos. feonr fwoel ouwse$1 \times 1$ $C$ up operation. Subsequently, from a local enhancement perspective, features traverse a sequence of our High-Frequency Refinement (HFR), which progressively extracts hierarchical high-frequency patterns, strengthening details through iterative feature transformation. From a global modeling angle, interspersed among subsequent residual blocks, Depthwise Position-aware Attention (DPA) modules selectively enhance critical high-frequency facial attributes, while preserving facial realism through adaptive feature modulation. This synergistic operation establishes essential groundwork for achieving highfidelity face super-resolution.
Fig. 4: The architecture of (a) High-Frequency Refinement (HFR) (b) Depthwise Position-aware Attention (DPA
1) High-Frequency Refinement $( H F R )$ : As shown in Fig. 4 (a), our HFR module operates in a recurrent manner, to iteratively refine high-frequency components—fine-grained facial details like eyes, eyebrows, and lips—over $r$ cycles. Given an input feature $\mathbf { X } \in \mathbf { \dot { \mathbb { R } } } ^ { C \times H \times W }$ , the module first extracts multiscale spatial features through parallel depth-wise separable convolutions with complementary receptive fields:
$$
\begin{array} { r } { \mathbf { X } _ { 1 } = \mathrm { D W C o n v } _ { 7 \times 7 } ( \mathbf { X } ) , \quad \mathbf { X } _ { 2 } = \mathrm { D W C o n v } _ { 5 \times 5 } ( \mathbf { X } ) , } \end{array}
$$
where the $7 \times 7$ kernel captures broad contextual patterns(e.g., structural contours) and the $5 \times 5$ kernel focuses on localized details(e.g., fine textures). These multi-scale outputs are concatenated along the channel dimension to obtain $\mathbf { X } _ { \mathrm { c a t } }$ :
$$
\mathbf { X } _ { \mathrm { c a t } } = \mathrm { C o n c a t } [ \mathbf { X } _ { 1 } , \mathbf { X } _ { 2 } ] .
$$
To integrate hierarchical spatial information. Then a $1 \times 1$ convolution compresses the concatenated features into the original channel dimension while learning cross-scale correlations. To mitigate channel grouping-induced information isolation, a channel shuffle operation and a second $1 \times 1$ convolution are applied to get $\mathbf { X } _ { 3 }$ :
$$
\mathbf { X } _ { 3 } = \mathrm { C o n v } _ { 4 } \cdot C S ( \mathrm { C o n v } _ { 3 } \cdot \mathbf { X } _ { \mathrm { c a t } } ) ,
$$
where ${ \mathrm { C o n v } } _ { 3 }$ is a $1 \times 1$ convolution for channel expansion, $C S ( \cdot )$ denotes channel-shuffle and $\mathrm { C o n v } _ { 4 }$ is a $1 \times 1$ convolution for channel reduction. Combined with channel shuffle, this operation dynamically permutes channel subgroups to enhance inter-channel communication.
By fusing multi-scale context and enforcing channel diversity, the HFR amplifies suppressed high-frequency signals while filtering spatial redundancies. The refined features are then expanded back to channels and fused with residual branch outputs, providing enriched high-frequency information for subsequent components in HFEB.
2) Depthwise Position-aware Attention $( D P A )$ : As shown in the Fig. 4 (b), given an input feature Y1 ∈ RC×H×W , a depthwise convolution (DWConv) is first applied to ${ \bf f } _ { i n }$ , expanding its channel dimension to $3 C \times H \times W$ . Following established multi-head attention frameworks, we perform channel-wise partitioning into $h$ heads, each with $\frac { \hat { C } } { h }$ channels. This partitioning enables concurrent learning of distinct self-attention patterns across different feature subspaces. The partitioned feature map is then rearranged into $Q , K$ and $V$ matrices of size, defined as:
$$
Q , K , V = R S \left( \mathrm { D W C o n v _ { 3 \times 3 } \left( C o n v _ { 5 } \left( \mathbf { Y } _ { 1 } \right) \right) } \right) ,
$$
where $\mathrm { C o n v } _ { 5 }$ is a $1 \times 1$ convolution that expands channels from $C$ to $3 C$ , $\mathrm { D W C o n v _ { 3 \times 3 } }$ denotes a $3 \times 3$ depthwise convolution and $R S$ denotes a reshape and split operator. To enhance positional modeling capabilities, we introduce a dynamical positional encoding mechanism. The input feature map is processed through a sub-network consisting of two $1 \times 1 \times 1$ convolutions with GELU activation in between:
$$
\mathrm { \bf p o s } = \mathrm { C o n v } _ { 7 } \cdot \mathrm { G E L U } ( \mathrm { C o n v } _ { 6 } \cdot { \bf Y } _ { 1 } ) ,
$$
where ${ \mathrm { C o n v } } _ { 6 }$ , $\mathrm { C o n v } _ { 7 }$ denote $1 \times 1$ convolutions. However, we observe that using positional encoding alone did not yield the desired performance. To further refine the attention mechanism, we introduce a temperature generator module using a series of linear transformations to generate a temperature scale temp, which temp can dynamically adjust attention scaling factors:
TABLE I: Ablation study of HFR on CelebA test [4].
$$
\mathbf { t e m p } = \mathbf { W } _ { 2 } \cdot \mathrm { G E L U } ( \mathbf { W } _ { 3 } \cdot \mathbf { Y } _ { 1 } ) ,
$$
where $\mathrm { W _ { 2 } , W _ { 3 } }$ denotes the linear layer, and temp represents the learnable temperature parameter. Unlike fixed-temperature scaling, this adaptive mechanism provides greater flexibility in capturing long-range dependencies. In summary, the attention scores are calculated by multiplying $Q$ and $K$ , and then scaled by the temperature value $\alpha$ . The resulting attention-weighted $V$ is combined with the original positional-aware features and finally reshaped back to the original feature map size. The complete attention computation can be formulated as:
${ \mathrm { A t t n } } \left\{ Q , K , V \right\} = V \cdot { \mathrm { R E L U } } ( Q K ^ { T } \cdot { \mathbf { t e m p } } ) + \sigma ( { \mathbf { p o s } } ) ,$ (15) where $\sigma ( \cdot )$ denotes a sigmoid activation.
# D. Offsets mechanism
As shown in Fig. 2, instead of using residual connections between input and output as in existing methods [16], [23], [25] to boost original facial information, our approach employs an offset mechanism [45] to bridge the input with the residual features of multi-scale outputs. This design stems from the observation that fixed-grid sampling in traditional convolutional or upsampling operations often struggles to capture spatially variant misalignments, especially in areas with complex geometries or motion. To tackle this, our offset mechanism adaptively predicts and compensates for these misalignments by generating pixel-wise 2D displacement vectors through a lightweight convolutional predictor subnetwork, which analyzes the input features for precise feature alignment. Specifically, for the initial input $\mathbf { f } _ { 1 }$ :
$$
\begin{array} { r } { \Delta \mathrm { \ r } = \mathbf { f } _ { \mathrm { o f f s e t s } } ( \mathbf { f } _ { 1 } ) \in \mathbb { R } ^ { 2 \times \mathrm { H } \times \mathrm { W } } , \mathbf { f } _ { 3 } = \phi ( \mathbf { f } _ { 2 } , \Delta \mathrm { \ r } ) , } \end{array}
$$
where $\Delta \mathrm { r }$ is the content-related offsets matrix, $\phi ( \cdot )$ denotes bilinear interpolation. Specifically, the offset-driven warping compensates for spatial misalignments caused by cascaded downsampling/upsampling operations, enhances highfrequency detail preservation through adaptively repositioning features according to local texture patterns, and promotes cross-scale feature consistency by aligning coarse-level structural information with fine-level details. Learned endto-end without explicit supervision, this process allows the network to automatically find optimal spatial transformations that maximize feature coherence during reconstruction, which is especially crucial for keeping facial edge sharpness and structural integrity in FSR.
TABLE II: Ablation study of DPA on Helen test set [46]. The last line denotes the strategy used in our final model.
TABLE III: Ablation study of ASSB is conducted on the Helen test set [46] by replacing it with three representative alternatives: self-attention (SA) [47], convolutional blocks (Conv), and Vision State Space Block (VSSB) [31].
# IV. EXPERIMENTS
# A. Datasets and Evaluation Metrics
We employ the CelebA [4] dataset for training and evaluate our model on three benchmarks: CelebA [4], Helen [46], and SCface [50]. Therefore, we crop the image based on its center point and resize it to $1 2 8 \times 1 2 8$ pixels, serving as the highresolution (HR) ground truth. Corresponding low-resolution (LR) inputs ( ${ 1 6 \times 1 6 }$ pixels) are generated by downsampling HR images via bicubic interpolation. The training set comprises 18,000 images from CelebA, with 1,000 and 200 images reserved for testing and validation, respectively. For crossdataset evaluation, we test on 50 samples from Helen and the SCface dataset without fine-tuning. We assess reconstruction quality using four widely adopted metrics: PSNR and SSIM [51] for pixel-level fidelity, LPIPS [52] for perceptual similarity, and VIF [53] for texture preservation.
# B. Implementation Details
Our FADPNet is implemented in PyTorch and trained on an NV IDIA RT X 3090 GPU. We initialize the model with 32 channels and design its architecture across three distinct resolution levels. Specifically, the number of lowfrequency modules is 2 at each level. Meanwhile, the number of HEFB modules is configured as follows: 2 in RC×H×W stage, 3 in $\mathbb { R } ^ { 2 \mathrm { C } \times { \frac { \mathrm { H } } { 2 } } \times { \frac { \mathrm { W } } { 2 } } }$ stage, and 4 in $\mathbb { R } ^ { 4 \mathrm { C } \times \frac { \mathrm { H } } { 4 } \times \frac { \mathrm { W } } { 4 } }$ stage. Optimization is performed using the Adam optimizer $\beta _ { 1 } = 0 . 9$ and $\beta _ { 2 } = 0 . 9 9 \$ ) with an initial learning rate of $2 \times 1 0 ^ { - 4 }$ The loss function combines reconstruction loss, and training converges within 150 epochs using a batch size of 16.
# C. Ablation Studies
1) Effectiveness of HFR: To further assess the impact of the HFR module in our model, we conduct an ablation study on the CelebA test set, as summarized in Table I. First, we evaluate the model without the HFR module. Removing HFR leads to a noticeable degradation in both PSNR and SSIM, indicating that the HFR module plays a crucial role in enhancing high-frequency details. We then investigate the influence of different depth-wise convolution kernel configurations. When using only the $5 \times 5$ kernel, the model achieves a PSNR of 28.15 and SSIM of 0.8058, slightly below the performance of the configuration that employs both kernels. This suggests that the $5 \times 5$ kernel is effective at capturing localized details but insufficient for modeling a wide range of high-frequency information. In contrast, using only the $7 \times 7$ kernel yields a PSNR of 28.18 and SSIM of 0.8077, indicating its superiority in capturing broader contextual features. However, it still underperforms compared to the full HFR setup. Adding an extra $3 \times 3$ kernel results in a marginal performance drop (PSNR: 28.16, SSIM: 0.8070), implying that this addition does not significantly enhance detail restoration. Moreover, removing the channel shuffle (CS) operation causes a slight decrease in both PSNR and SSIM, highlighting the importance of inter-channel interaction in feature representation. To further validate these observations, we present residual heatmaps visualizing the pixel-wise differences between the reconstructed outputs and their HR counterparts.
TABLE IV: Quantitative comparisons for $\times 8$ SR on the CelebA and Helen test sets.
Fig. 5: Comparisons of error maps on our HFR. Redder regions indicate larger pixel-wise errors. The complete HFR module in our FADPNet leads to the lowest residual errors and shows superior ability to restore high-frequency facial details.
Fig. 6: (a) Ablation studies of the effectiveness of frequencyspecific modules on the CelebA [4] and Helen datasets [46]. “swap” is our model in which the low- and high-frequency blocks are swapped, resulting in a noticeable performance drop compared to our original model. (b) Feature map visualization and FSR results between our original model and swappedfrequency variant show our modules design for different frequency features produces sharp and accurate facial contours.
TABLE V: Ablation study of our attention strategy on CelebA [4] and Helen [46] test set.
As shown in Fig. 5, the model without the HFR module exhibits prominent errors, especially in regions rich in highfrequency textures such as facial contours. The model with HFR but without CS shows moderate improvement, suggesting that the main structure of HFR still contributes positively even without CS. Models employing only a single kind of depthwise convolution kernel exhibit complementary characteristics: the $5 \times 5$ kernel better preserves localized features while missing global structures, whereas the $7 \times 7$ kernel captures largerscale patterns but struggles with fine textures. In contrast, the full HFR achieves the lowest errors, visually affirming its superior capacity for high-frequency detail restoration and consistent with its leading PSNR and SSIM scores.
2) Effectiveness of DPA: To comprehensively evaluate the individual and synergistic effects of components in our proposed Depthwise Position-aware Attention (DPA) module, we perform systematic ablation studies on the Helen test set [1], with detailed results presented in Table II. Our investigation focuses on three critical elements: the learnable temperature generator (T gen), fixed temperature parameter (T para), and positional embedding (P emb), analyzing their impacts on both reconstruction quality and model complexity.
Fig. 7: Visual comparisons for $\times 8$ SR on the CelebA test set [4]. Our FADPNet can reconstruct clear face images.
The initial configuration with only T para provides a baseline level of reconstruction, but its fixed scaling restricts adaptability to spatial and contextual variations, resulting in suboptimal attention modulation for detailed facial regions. Adding P emb to T para slightly improves SSIM, suggesting better structural preservation, though PSNR shows a marginal decline. The P emb component enables more precise localization of structural details, particularly in semantically critical facial regions like ocular areas, nasal contours, and lip boundaries. Our FADPNet combines T gen with P emb, achieving peak performance. This configuration establishes a synergistic relationship where T gen dynamically modulates attention sharpness according to local content complexity, while P emb ensures spatial consistency in feature aggregation. Notably, implementing T gen without P emb guidance degrades performance. This reveals that without spatial constraints, the self-learned scaling factors may produce inconsistent attention distributions, particularly in homogeneous facial regions where positional cues are crucial for accurate feature matching.
These systematic experiments validate the necessity of integrating both adaptive temperature scaling and explicit spatial encoding in our DPA design. The final architecture achieves superior performance through three key mechanisms: (a) Content-aware attention sharpness adaptation via T gen, (b) Geometry-preserving feature aggregation through P emb, and (c) Computational efficiency from depthwise implementation. The results demonstrate our module’s effectiveness in addressing the unique challenges of facial super-resolution, particularly in preserving identity-critical details while maintaining natural texture synthesis.
3) Effectiveness of ASSB: To evaluate the suitability of the Attentive State Space Block (ASSB) for low-frequency modeling in our method, we conduct ablation studies by replacing it with three representative alternatives: self-attention (SA) [47], residual block (CNN) [54], and Vision State Space Block (VSSB) [31]. As shown in Table III, our full model with ASSB achieves the best overall performance, indicating superior perceptual quality and structural fidelity.
It is worth noting that ASSB is not our original design, but a recently proposed state-space module tailored for longrange dependency modeling. In our framework, we integrate ASSB within LFEB to exploit its strength in global structure preservation. Unlike Conv, which tends to focus on local or high-frequency features, ASSB’s non-causal state-space formulation excels at capturing smooth, spatially consistent facial regions such as contours and skin tones. It’s builtin low-rank prompt decomposition also aids in suppressing high-frequency noise and promotes stable propagation across homogeneous areas. Compared to the more general-purpose VSSB, ASSB offers a lighter structure and better frequency selectivity when dealing with low-frequency signals. Despite using fewer parameters (0.203M vs. 0.269M), ASSB outperforms or matches VSSB in all metrics, demonstrating its superior efficiency-to-performance ratio when applied to
Fig. 8: Visual comparisons for $\times 8$ SR on the Helen test set [46]. Our FADPNet can reconstruct clear face images
Fig. 9: Visual comparisons with existing methods on downstream tasks, like face parsing.
DIC SPARNet SISN ELSFace SPADNet SFMNet Ours HR 00000000 28.39/0.8393 28.91/0.8518 28.94/0.8479 28.82/0.8466 28.83/0.8462 28.42/0.8432 29.69/0.8629 PSNR/SSIM 乡心u区@区 28.59/0.846128.26/0.8377 28.52/0.8393 28.73/0.8478 28.74/0.8476 29.29/0.8569 29.37/0.8575 PSNR/SSIM high-frequency enhancement block (HFEB). As illustrated in Fig. 6a, this modification leads to a consistent drop in PSNR across both datasets, highlighting the critical importance of the original frequency-aware design. The original model achieves superior reconstruction performance by appropriately allocating low- and high-frequency processing to their respective branches. In contrast, the swapped configuration disrupts this balance, resulting in suboptimal representation and degraded image quality. Fig. 6b visually compares the original and the swapped variants. The results clearly show that the proposed model preserves facial structures and fine details more effectively, yielding sharper contours and more realistic textures. Conversely, the swapped version produces noticeably blurrier outputs, especially around facial components like eyes and mouth, further validating the necessity of maintaining the original design of frequency-specific modules.
low-frequency pathways. These results collectively justify our choice of ASSB as the most effective backbone for global low-frequency modeling in our frequency-aware architecture. Its integration helps maintain large-scale facial coherence and mitigates structural misalignment—challenges that conventional transformer or Conv-based designs struggle to address. 4) Study of the architecture of low- and high-frequency modules: To investigate the effectiveness of low- and highfrequency architecture, we conduct an ablation study on the CelebA and Helen datasets by interchanging the positions of the low-frequency enhancement block (LFEB) and
5) Study of local-global attention strategy: As summarized in Table V, our goal is to examine the individual contributions of local and global modeling components in both high- and low-frequency enhancement branches. First, we remove the squeeze-and-excitation block from LFEB. The performance drops on CelebA and Helen. This suggests that incorporating squeeze-and-excitation operations to extract low-frequency global interactions and emphasize informative channels is beneficial, even in smooth regions. These global interactions help preserve structurally important facial components that might otherwise be weakened in low-frequency representa
Fig. 10: Visual comparison on SCface [50] of real surveillance scenarios for $\times 8$ SR. Our FADPNet can reconstruct clear face components.
Fig. 11: (a) PSNR, FLOPs, and speed tradeoffs on the CelebA test set [4]. (b) PSNR, FLOPs, and Params tradeoffs on the Helen test set [46], showing the balance of our method.
28.2 Ours 27.2 Ours
20 LAATSEMNE 27.0 SEAN LAAT Restormer-M AD-GNN
27.4 ELSFace GR 26.6 SPARNet SPADNet AD-GNN 26.4
27.2 FACN FLOPs(G) FLOPs(G) FSRNet 26.2 DIC
27.0 10G20G30G 10G20G 30G 26.0
26.8 20 40 60 80 100 120 5 10 15 20 25 Speed (ms) Params (M) (a) PSNR-Speed-FLOPs (b) PSNR-Params-FLOPs
tions. Next, we remove the DPA module from HFEB. The result shows a more noticeable degradation, with performance dropping to $2 8 . 0 4 \mathrm { ~ / ~ } 0 . 8 0 3 2$ on CelebA and $2 6 . 8 0 \mathrm { ~ / ~ } 0 . 7 9 8 1$ on Helen. This highlights the crucial role of DPA in capturing localized textures and reducing artifacts in high-frequency content. Finally, removing the offset mechanism causes a moderate decline in performance, indicating that it contributes to cross-scale feature alignment.
# D. Comparison with Other Methods
In this section, we conduct a comprehensive comparison of our proposed method against state-of-the-art (SOTA) techniques in FSR, encompassing general image SR methods like RCAN [54], Restormer-M [47] and FreqFormer [36], specialized FSR approaches such as FSRNet [13], DIC [14], SPARNet [16], AD-GNN [19], SISN [18], LAAT [20], SFMNet [22], ELSFace [48], and SPADNet [49]. All methods are trained on the CelebA [4] dataset under identical settings—including preprocessing, loss functions, and optimizer configurations—to eliminate implementation biases and ensure direct attribution of performance differences to architectural innovations.
TABLE VI: Quantitative comparison on face average similarity with existing methods on SCface [50] of real surveillance scenarios.
1) Comparison on the CelebA dataset and the Helen dataset: We present the quantitative results for the CelebA and Helen test datasets in Table IV. The best and second-best results are highlighted in bold and underlined, respectively. These results demonstrate the strength of our architecture in balancing structural accuracy and perceptual quality. To further evaluate the generalization ability of our method, we directly apply the model trained on CelebA to the Helen test set. The quantitative results for $\times 8$ FSR are also presented in Table IV, showing that our method comprehensively outperforms other approaches. Despite having only 8.6M parameters (similar to SFMNet [22]), ur FADPNet achieves $3 9 . 6 \mathrm { m s }$ inference speed, which is $1 . 8 \times$ faster than FreqFormer [36] (75ms) and comparable to lightweight methods like SPARNet [16] (36.6ms). Compared to ELSFace [48], despite the similar parameter counts (8.6M vs. 8.7M), FADPNet improves PSNR by $0 . 2 5 \mathrm { d B }$ on CelebA and 0.36dB on Helen. As summarized in Fig. 7 and Fig. 8, while competing methods often produce blurred facial contours and distorted features (e.g., illdefined eyes), our reconstructions successfully preserve sharp edges, facial symmetry, and fine-grained details (e.g., eyeball), closely aligning with the ground-truth HR images. As shown in Fig. 9, we also use the face parsing model [56] to analyze the FSR results of different models on the Helen dataset. The visual results clearly demonstrate that our method produces the most complete and accurate facial structure, preserving better face details than competing models. In summary, while existing methods struggle with blurred shapes and significant detail loss, our model effectively preserves facial contours and fine details. This highlights not only the robustness of our approach across different datasets but also its superiority over SOTA techniques in both performance and efficiency.
2) Comparison on real-world surveillance scenarios: Restoring facial details from real-world surveillance imagery remains a formidable challenge due to inherent low resolution, uncontrolled lighting, and sensor noise—factors that are often inadequately captured in synthetic benchmarks. To address this, we conduct experiments using low-quality face images from the SCface dataset [50], which inherently represents realworld surveillance scenarios without manual downsampling. As illustrated in Fig. 10, visual comparisons reveal that existing methods often struggle to reconstruct critical facial features due to the challenges posed by noisy, low-resolution inputs, leading to blurred contours and loss of texture details. In contrast, our FADPNet leverages adaptive feature separation and dual frequency enhancement, effectively restoring sharper facial structures, finer textures, and more natural facial symmetry. Beyond visual fidelity, we assess the impact on downstream tasks such as face matching, comparing the similarity between restored faces and their corresponding high-definition reference images. Table VI presents the quantitative results, demonstrating that our approach consistently achieves higher similarity scores across multiple test cases. This confirms its practical applicability in challenging surveillance scenarios.
3) Model Complexity Analysis: Beyond reconstruction quality, we conduct a comprehensive evaluation of model efficiency in terms of parameter count and inference speed, both of which are crucial for devices with limited computing resources. At each stage, inputs are dynamically decomposed into high-frequency and low-frequency components, which are then processed by our specialized sub-networks: a CNN branch tailored for capturing high-frequency facial details, and a Mamba-based branch better suited for modeling low-frequency, long-range dependencies. This innovative design achieves an optimal balance between computational efficiency and an enhanced representational capacity for highfrequency details. By leveraging this structured decomposition, our model adeptly handles complex facial features, leading to improved performance in FSR. As shown in Fig. 11a and Fig. 11b, our method demonstrates an excellent tradeoff between reconstruction accuracy and computational cost. Specifically, Fig. 11a illustrates that our approach surpass existing methods over 0.2dB PSNR with lower FLOPs and faster inference speed on the CelebA test set, highlighting its practical efficiency. Similarly, Fig. 11b further shows that on the Helen test set, our method maintains superior PSNR while keeping the parameter count relatively low, outperforming most existing methods in both accuracy and model compactness. Overall, our approach achieves a well-balanced tradeoff between model size, inference speed, and reconstruction quality, making it effective.
# V. LIMITATIONS AND FUTURE WORKS
Despite the dual-path design that independently captures low- and high-frequency information, the current fusion strategy is relatively simplistic and may fail to fully exploit the complementary nature of these frequency components, resulting in suboptimal recovery of fine-grained facial details. Moreover, the model demonstrates a performance bias toward frontal faces, showing reduced robustness when handling side views or large pose variations, due to limited diversity in training views. In future work, we plan to investigate more effective fusion mechanisms between the high- and lowfrequency branches to enhance inter-frequency interaction and detail restoration. Additionally, we will consider incorporating pose-aware augmentation or multi-view training strategies to improve generalization to non-frontal faces, further improving the model’s robustness in practical applications. | Face super-resolution (FSR) under limited computational costs remains an open
problem. Existing approaches typically treat all facial pixels equally,
resulting in suboptimal allocation of computational resources and degraded FSR
performance. CNN is relatively sensitive to high-frequency facial features,
such as component contours and facial outlines. Meanwhile, Mamba excels at
capturing low-frequency features like facial color and fine-grained texture,
and does so with lower complexity than Transformers. Motivated by these
observations, we propose FADPNet, a Frequency-Aware Dual-Path Network that
decomposes facial features into low- and high-frequency components and
processes them via dedicated branches. For low-frequency regions, we introduce
a Mamba-based Low-Frequency Enhancement Block (LFEB), which combines
state-space attention with squeeze-and-excitation operations to extract
low-frequency global interactions and emphasize informative channels. For
high-frequency regions, we design a CNN-based Deep Position-Aware Attention
(DPA) module to enhance spatially-dependent structural details, complemented by
a lightweight High-Frequency Refinement (HFR) module that further refines
frequency-specific representations. Through the above designs, our method
achieves an excellent balance between FSR quality and model efficiency,
outperforming existing approaches. | [
"cs.CV"
] |
# 1 Introduction
The Goal-oriented Proactive Dialogue System (GPDS) focuses on achieving specific objectives by actively guiding and anticipating user needs (Liu et al., 2020; Wang et al., 2024a, 2023b). Unlike traditional dialogue systems that passively respond to user requests (Touvron et al., 2023; Achiam et al., 2024), GPDS strategically steers the conversation along a goal-oriented path, ensuring that a goal is naturally achieved while maintaining a positive user experience. GPDS has a wide range of applications in various domains, such as recommendation systems (Fu et al., 2020; Liu et al., 2020, 2021) and medical consultations (Xu et al., 2024b).
Figure 1 presents an example of GPDS, which generates responses (e.g., $S _ { i }$ ) based on the dialogue context, such as the user profile, dialogue history, domain knowledge, and subgoals within a goaloriented path. GPDS can be divided into two primary sub-tasks: goal-oriented path planning and response generation. Initially, the system plans a goal-oriented path where each step is represented by an <action, topic> pair (e.g., “Q&A | Jimmy Lin’s constellation” $$ “Chat about the Star Jimmy Lin” $$ “Movie recommendation Grandpa’s Love” in Figure 1). Following this, the system generates responses aligned with the planned path, thereby guiding the conversation proactively and naturally toward achieving the final target (recommending the movie Grandpa’s Love).
Most previous studies on GPDS have primarily focused on planning goal-oriented paths using techniques such as CNN-based classifiers (Liu et al., 2020), Seq2seq paradigms (Deng et al., 2023; Wang et al., 2023c, 2024b,a), and graph interaction methods (Zhang et al., 2024). However, these approaches often overlook the inconsistencies that arise between generated responses and dialogue contexts. These inconsistencies manifest in several ways. First, there is an inconsistency with the dialogue history. As shown in Figure 1, the system asserts “of course he knows Lin” in $S _ { 2 }$ , even though the dialogue history does not inquire about his acquaintance with Lin. Second, there is an inconsistency with the subgoal. Although $S _ { 4 }$ ’s action is to recommend a movie, it fails to address the topic of Grandpa’s Love, resulting in an invalid recommendation. Third, there is an inconsistency with the domain knowledge. $S _ { 5 }$ states that Yanping Zhu is the star of Grandpa’s Love, whereas the do
User Profile Goal-oriented Path Dialogue History System Response (Action | Topic) Q&A Can you tell me what sign Jimmy Lin is?
Name: Qifeng Zeng, | Jimmy Lin Jimmy Lin is a Libra. S1 He is a Libra. I see, thank you. You're welcome. Are you asking for his information SYou're welcome. Jimmy Lin is because you like him?Jimmy Linis the winner of a character in GuLong's
Favorite star: Jimmy Lin, the 2003 American International Outstanding Youth classic TV series. Of course I
Rejection: Music Award. ChatabouttheStar I like him very much. He is not only young and |Jimmy Lin promising,but also a character in Gu Long's classic Domain Knowledge TV series. His acting skils are super good. Yes,and he has been elected as one of the top ten SYes,he is also the Best Actor
iyLin os intheIdol Category of the Huading Inspirational
TV series characters>, I like watching his movies very much. I can
<Jimmy Lin, Achievements,Huading Award appreciate his acting skilsand hisappearance.
for Best Actorin Idol Inspirational Category>, Since you like his movies so much, why don't you S4That's great, I have a movie
<Jimmy Lin,Achievements,203American watch "Grandpa's Love" starring Jimmy Lin? It's
International Outstanding Youth Award>, very touching! a look, it’l touch your tears!
Movie Recommendation Who else sars in this movie?
three times> Grandpa'sLove The other stars include Steven Hao, Long Huang, S5The leading actor of this movie
<Grandpa's Love, Director, Yanping Zhu>, (final target) etc.
<Grandpa’s Love,Starring,Jimmy Lin>, <Grandpa's Love, Starring,Steven Hao>, I like them very much. I willdefinitely watch it when I have time.
<Grandpa's Love,Starring,Long Huang>, etc. You will otbe disappointedafter wathing it. S6 Ibelieve you wllie er watching it.
main knowledge indicates that Zhu is the director of the movie. Lastly, there is an inconsistency with the user profile. The system might generate a response that does not align with the user’s profile, as illustrated in Appendix A. For instance, the user’s profile shows a preference for news about Nicholas Tse. However, the system recommends unrelated social news. Hence, these inconsistencies can lead to a poor user experience in real-world scenarios, causing conversations to abruptly break down and failing to achieve the intended targets.
To address these inconsistencies, we propose a Consistency Reflection and Correction (CRC) framework, drawing inspiration from the reflective practice theory (Checkoway and Schön, 1985) of human cognition that emphasizes the systematic reflection on experiences to identify and improve areas of weakness. Specifically, in the consistency reflection stage, we guide the model to reflect its experience, i.e., whether its generated responses are consistent with the elements of dialogue contexts. If any inconsistencies are identified, the model is prompted to categorize the types of discrepancies and suggest potential corrections. In the consistency correction stage, the model is instructed to regenerate the responses more consistent with dialogue contexts based on the insights gained from the reflection stage.
To validate the effectiveness of our framework, we conducted extensive experiments on three widely-used datasets: DuRecDial (Liu et al., 2020), DuRecDial 2.0 (Liu et al., 2021) and TopDial (Wang et al., 2023b). Since our framework is model-agnostic, we tested it on different model architectures and various parameter sizes, including encoder-decoder models (BART and T5) and decoder-only models (GPT-2, DialoGPT, Phi3- 3.8B, Mistral-7B and LLaMA3-8B). The experimental results demonstrate that our CRC framework significantly improves the consistency between generated responses and dialogue context.
# 2 Related Work
# 2.1 Goal-oriented Proactive Dialogue System
Previous studies on GPDS typically began by planning a sequence of subgoals, followed by generating responses based on the subgoals to guide the conversation toward specific objectives. Most of them concentrated on goal-oriented path planning, employing the techniques such as CNN-base classifier (Liu et al., 2020), target-driven method (Wang et al., 2022, 2024a), Brownian bridge (Wang et al., 2023c), prompt-based method (Deng et al., 2023), graph-grounded planning (Liu et al., 2023), graphinteraction planning (Zhang et al., 2024), and bidirectional planning (Wang et al., 2024b).
However, these efforts often emphasize the importance of planning a goal-oriented path, overlooking the consistency between generated responses and dialogue contexts. This paper primarily focuses on response generation, aiming to improve GPDS by enhancing the consistency between generated responses and dialogue contexts.
# 2.2 Discourse Consistency in Dialogue
Discourse consistency in dialogue refers to the logical coherence and uniformity of information and themes, which is essential for fostering understanding and effective communication among participants. Previous research (Song et al., 2021; Wang et al., $2 0 2 3 \mathrm { a }$ ; Chen et al., 2023; Zhou et al., 2023) has frequently employed Natural Language Inference (NLI) models to assess and enhance discourse consistency between generated responses and dialogue contexts. However, these NLI models often depend on additional training data, which can hinder their generalizability. In an innovative approach, we propose leveraging the model’s inherent reflective capabilities to enhance the consistency between generated responses and dialogue contexts, thereby improving its generalizability.
# 3 Task Definition
Given a dataset $D = \{ U ^ { i } , K ^ { i } , H ^ { i } , G ^ { i } \} _ { i = 1 } ^ { N }$ , where $N$ is the size of the dataset. $U ^ { i } = \{ u _ { j } ^ { i } \} _ { j = 1 } ^ { N _ { u } }$ is the $i$ -th user profile, where each item $u _ { j } ^ { i }$ is a key-value pair, representing the user’s personal information (e.g., name and gender). $K ^ { i } = \{ k _ { j } ^ { i } \} _ { j = 1 } ^ { N _ { k } }$ is the domain knowledge related to the $i$ -th conversation of $D$ , where each item $k _ { j } ^ { i }$ is a triple <head, relation, tail>. $H ^ { i } = \{ h _ { m } ^ { i } \} _ { m = 1 } ^ { M }$ is the content of the $i$ -th conversation, co{nsimst}imng=1of $M$ turns. $G ^ { i } = \{ g _ { m } ^ { i } \} _ { m = 1 } ^ { M }$ is the goal-oriented path for the dialogue $H ^ { i }$ , where each $g _ { m } ^ { i }$ consists of a dialogue action $a _ { m } ^ { i }$ and a dialogue topic $t _ { m } ^ { i }$ . The final goal of the dialogue is represented by $g _ { M } ^ { i }$ . GPDS can be divided into two primary sub-tasks: goal-oriented path planning and response generation.
Goal-oriented Path Planning Goal-oriented path planning aims to plan a sequence of subgoals to proactively guide the conversation to achieve the final target $g _ { M } ^ { i }$ . Each subgoal $g _ { m } ^ { i } ( 1 \leq m \leq M )$ which is formulated as follows.
$$
g _ { m } ^ { i } = G P P ( U ^ { i } , K ^ { i } , H _ { \leq m } ^ { i } , G _ { < m } ^ { i } )
$$
where $G P P$ is a path prediction model, which has attracted the attention of most previous work. In this paper, we adopted the same path prediction model as Wang et al. (2024a) and mainly focus on generating responses that are consistent with dialogue contexts.
Response Generation A generative model is employed to generate a response that aligns with the action and topic in $g _ { m } ^ { i }$ , thereby actively steering the conversation towards the final target. This process is represented as follows.
$$
r _ { m } ^ { i } = R G ( U ^ { i } , K ^ { i } , H _ { \leq m } ^ { i } , g _ { m } ^ { i } )
$$
where $r _ { m } ^ { i }$ is the generated response, and $R G$ denotes an autoregressive model. Specifically, $R G$ autoregressively generates $r _ { m } ^ { i }$ conditioned on the concatenation of the dialogue context, and it is optimized by minimizing the negative log-likelihood as follows.
$$
\mathcal { L } ( \theta ) = - \sum _ { i = 1 } ^ { N } \sum _ { t = 1 } ^ { T } \log P ( r _ { m , t } ^ { i } \mid r _ { m , < t } ^ { i } , U ^ { i } ,
$$
where θ represents the trainable parameters, rim,t and $r _ { m , < t } ^ { i }$ are the $t$ -th token and the previous $t$ 1 tokens of the response $r _ { m } ^ { i }$ , respectively, and $T$ is the length of $r _ { m } ^ { i }$ . In this paper, we focus on improving $R G$ to enhance the consistency between generated responses and dialogue contexts.
# 4 CRC Framework
Motivation Most of the Response Generation (RG) models in GPDS cannot produce responses that align with dialogue context. This is likely because their learning experience primarily involves imitation learning, which lacks deep reflection and correction mechanisms. According to the theory of reflective practice (Checkoway and Schön, 1985), the process of learning and growth comes from a cycle of experience, i.e., reflection and correction. Without this reflective practice, most models struggle to learn from their own experiences and summarize effectively, making it challenging for them to consistently generate responses that align with the dialogue context. Therefore, establishing a framework that can involve the reflection and correction mechanism to improve the consistency between responses and dialogue contexts is crucial.
In this paper, we proposed a model-agnostic, two stage CRC framework including consistency reflection and consistency correction, as shown in Figure 2. In the consistency reflection stage, we first guide a $R G$ model to reflect on its experience, i.e., reflecting on the types of inconsistency between generated responses and dialogue contexts, and then suggest possible corrections. In the correction
Predefined elements User profile: Name: Qifeng Zeng, Gender: Female, $\mathrm { A g e }$ range: "18-25", Occupation status: Student, Favorite star: Jimmy Lin, etc.
Dialogue history: [USER] [USERIparticularly like watching his movies.Ican appreciate his acting skills and his goodlooks atthe same ime. [BOT] Since youlike his movies so much,whydon'tyou watch“Grandpa’sLove"staring JimmyLin?It'svery touching! [USER] Who else stars in this movie?
Subgoal: action: Movie Recommendations, topic: Grandpa’s Love. inconsistent typee RG suggestion s RG_R RG_C
Response Response with Reflection Response with Correction Theleading actor of this movie is Yanping Zhu.
The leading actor of this movie is Yanping Zhu. e Ii Lin, StevenHao,andLong Huang. Experience Reflection Correction
stage, we further guide the RG model to regenerate responses that are consistent with dialogue contexts on the reflection results.
Consistency Reflection As introduced in Section 1, the responses generated by the RG model may exhibit inconsistencies with dialogue context. These inconsistencies primarily pertain to the user profile $U$ , the domain knowledge $K$ , the dialogue history $H$ , and the subgoal $g$ . To address this issue, we prompt the model to reflect on and consider ways to correct these inconsistencies. Specifically, we not only ask the RG model to generate responses, but also encourage it to analyze the types of inconsistencies between dialogue responses and dialogue contexts, providing suggestions for improvement. This can be formalized as follows.
2024a), we utilize ChatGPT 1 to act as an annotator for the annotations of inconsistency types and correction suggestions. The prompt used is illustrated in Appendix B.
We feed the dialogue context and the generated response to ChatGPT and let it evaluate the consistencies. ChatGPT first requires to identify the inconsistency types and then provide correction suggestions. An example is shown in Figure 2, where the inconsistency type is related to the domain knowledge, and the correction suggestion is that “Yanping Zhu is the director, and the stars are Zhiying Lin, Steven Hao, and Long Huang.”. After obtaining the annotations $e$ and $s$ , we continue to fine-tune the RG model to obtain a new RG_R model with reflective capabilities. Let the concatenation of $r _ { m } ^ { i }$ , $e _ { m } ^ { i }$ , and $s _ { m } ^ { i }$ be denoted as $c _ { m } ^ { i }$ , the optimization of RG_R is as follows.
$$
r _ { m } ^ { i } , e _ { m } ^ { i } , s _ { m } ^ { i } = R G \_ R ( U ^ { i } , K ^ { i } , H _ { \leq m } ^ { i } , g _ { m } ^ { i } )
$$
where $R G \_ R$ represents an autoregressive model with reflective ability, $e _ { m } ^ { i }$ denotes the inconsistent type of the reflection results on the response $r _ { m } ^ { i }$ and $s _ { m } ^ { i }$ is a correction suggestion on the inconsistent type $e _ { m } ^ { i }$ . Therefore, the key lies in how to obtain the high-quality inconsistent type $e$ and the correction suggestion $s$ to stimulate the model’s reflective capabilities.
While manually annotating inconsistency types and correction suggestions is an ideal approach, the time-consuming and costly nature of manual annotation hinders its practical application. Thanks to the powerful understanding capabilities of Large Language Models (LLMs) like ChatGPT, which have already achieved success in data annotation across various fields (Wang et al., 2023d; Xu et al.,
$$
\mathcal { L } _ { c r } ( \theta ) = - \sum _ { i = 1 } ^ { N } \sum _ { t = 1 } ^ { T } \log P ( c _ { m , t } ^ { i } \mid c _ { m , < t } ^ { i } , U ^ { i } ,
$$
where $\theta$ represents the trainable parameters, $N$ is the data size and $T$ is the token length of $c _ { m } ^ { i }$ . During the learning process, the $\mathbb { R } \mathrm { G } \_ \mathrm { R }$ model needs to generate not only the response $r$ but also both the reflective results $e$ and $s$ regarding $r$ .
Consistency Correction During the consistency correction phase, we continue to train RG to generate the response $r _ { m } ^ { i ^ { \prime } }$ that is more consistent with the dialogue context on the reflective results from RG_R. This can be formalized as follows.
$$
r _ { m } ^ { i ^ { \prime } } = R G _ { - } C ( U ^ { i } , K ^ { i } , H _ { \leq m } ^ { i } , g _ { m } ^ { i } , c _ { m } ^ { i } )
$$
where $R G _ { - } C$ represents an autoregressive model with correction ability. Similar with Equ 5, $R G \_ C$ is trained by minimizing the negative loglikelihood as follows.
$$
\mathcal { L } _ { c c } ( \theta ) = - \sum _ { i = 1 } ^ { N } \sum _ { t = 1 } ^ { T } \log P ( r _ { m , t } ^ { i ^ { \prime } } \mid r _ { m , < t } ^ { i ^ { \prime } } , U ^ { i } ,
$$
Training The training process is primarily divided into three stages. First, we train an initial model $R G$ by optimizing $\mathcal { L }$ . Next, we enhance $R G$ by optimizing $\mathcal { L } _ { c r }$ to obtain $R G \_ R$ , which possesses reflective capabilities. Finally, we further optimize $R G$ by optimizing $\mathcal { L } _ { c c }$ to achieve $R G \_ C$ , which incorporates corrective capabilities.
Inference During the inference phase, we first feed the dialogue context into $R G \_ R$ to obtain $c$ , which includes the response $r$ , the inconsistency type $e$ , and the correction suggestion $s$ . Next, we feed both the dialogue context and $c$ into $R G _ { - } C$ to generate a response $\boldsymbol { r } ^ { \prime }$ that is more consistent with the dialogue context.
# 5 Experimentation
# 5.1 Experimental Settings
Datasets We conducted experiments on three widely recognized datasets: DuRecDial (Liu et al., 2020), DuRecDial 2.0 (Liu et al., 2021) and TopDial (Wang et al., 2023b). We followed the data processing procedures and splits outlined in previous work (Wang et al., 2024a; Zhang et al., 2024) and the statistics are presented in Appendix C.
Baselines We compared our CRC framework with several state-of-the-art baselines as follows. MGCG (Liu et al., 2020) utilizes CNN for goal prediction and employs modified generation-based models for response generation. UniMIND (Deng et al., 2023) unifies goal planning and response generation using prompt-based learning. TCP (Wang et al., 2022) uses a Transformer-based planner to generate a sequence of actions and topic paths to guide response generation. MGNN (Liu et al., 2023) employs graph neural networks to model complex interactions between dialogue elements. GIGF (Zhang et al., 2024) utilizes a directed heterogeneous graph to capture goal sequence information across different levels. TPNet (Wang et al., 2024a) is an enhanced version of TCP that leverages several pre-trained models, including BART (denoted as TP-BART), GPT-2 (denoted as TPGPT2), and DialoGPT (denoted as TP-Dial). In this paper, we adopted the same goal-oriented path as TPNet, primarily focusing on response generation. In addition to the aforementioned language models, we also applied our CRC framework to T5 (denoted as TP-T5), Phi3 (denoted as TP-Phi3), Mistral (denoted as TP-Mistral) and LLaMA3 (denoted as TP-LLaMA3). Furthermore, we employed the golden goal-oriented path on LLaMA3 (denoted as Golden-LLaMA3) to demonstrate the general applicability of our CRC framework, independent of the performance of the goal-oriented path planning task.
Evaluation Metrics We follow previous work (Wang et al., 2024a) and use the following metrics: Word-level $\mathrm { F _ { 1 } }$ $\mathrm { ~ W ~ F _ { 1 } ) }$ , BLEU, Distinct (Dist), Knowledge $\mathrm { F _ { 1 } }$ $( \textrm { K F } _ { 1 , } )$ , and Goal Success Rate (Succ). Word-level $\mathrm { F _ { 1 } }$ measures the exact word overlap between generated and reference responses. BLEU measures the n-gram overlap with reference responses. Distinct evaluates the diversity of the generated responses. Knowledge $\mathrm { F _ { 1 } }$ measures the correctness of generated knowledge against domain knowledge triples. Goal Success Rate evaluates whether the dialogue successfully achieves both the target action and topic.
Implementation Details Please refer to Appendix D for details.
# 5.2 Experimental Results
The results of response generation on the three datasets are presented in Tables 1 and 2, as well as in Table 12 in Appendix E. It can be observed that our CRC substantially enhances the performance of various model architectures across multiple metrics, showing notable improvements in Word-level $\mathrm { F _ { 1 } }$ , BLEU-2, Knowledge $\mathrm { F _ { 1 } }$ , and Goal Success Rate, while having minimal impact on Distinct. This suggests that by improving the consistency between generated responses and dialogue contexts, GPDS can more effectively guide conversations toward final targets without compromising the diversity of the responses. These findings demonstrate the effectiveness and generality of our CRC framework.
The observed enhancements in the Word-level $F _ { 1 }$ and BLEU scores suggest that our framework enables the model to generate responses that more closely match the reference responses. The notable improvement in Knowledge $F _ { 1 }$ can be attributed to our CRC framework, which prompts the model to address inconsistencies between responses and domain knowledge, thereby enhancing the model’s ability to accurately utilize the domain knowledge. Likewise, the increase in Goal Success Rate is due to the fact that CRC can guide the model to identify and rectify discrepancies between responses and subgoals. By ensuring greater consistency between responses and each subgoal in the goal-oriented path, the model is better equipped to achieve the final objective.
Table 1: Experimental results on the Chinese DuRecDial dataset. The parameter sizes of the models are annotated as subscripts adjacent to the model names.
Table 2: Experimental results on English DuRecDial 2.0.
Comparing TP-LLaMA3 with other pre-trained models, TP-LLaMA3 has a natural advantage in terms of the Distinct and Knowledge $F _ { 1 }$ metrics. This may be attributed to its incorporation of more diverse dialogue scenarios and knowledgeintensive corpora during the pre-training stage, which enhances its ability to generate diverse responses and accurately utilize domain knowledge. Notably, our CRC can still significantly enhance the performance of TP-LLaMA3 across various metrics, indicating that our framework remains effective even for those LLMs with a larger number of parameters and can be effective supplement to LLMs.
Table 3: Ablation results using TP-LLaMA3 on DuRecDial where UP, DH, DK and SG refer to user profile, dialogue history, domain knowledge and subgoal, respectively.
Additionally, in comparison with TP-LLaMA3, Golden-LLaMA3 using annotated goal-oriented path improves all metrics on two datasets and these results indicate that the performance improvements in the task of goal-oriented path planning can enhance GPDS. Our CRC can further boost the performance of Golden-LLaMA3, demonstrating its generality, regardless of the performance of the goal-oriented path planning. Besides, the performance gaps between Golden-LLaMA3 and TPLLaMA3 are relatively small, suggesting that further improvements in goal-oriented path planning may offer limited benefits for GPDS.
# 6 Analysis
# 6.1 Ablation Study
Our CRC aims to enhance response generation by improving consistency with the dialogue context, including the user profile, dialogue history, domain knowledge, and subgoals. Ablation experiments on the DuRecdial dataset using TP-LLaMA3 (see Table 3) reveal that removing any element leads to a performance decline in all metrics, highlighting the effectiveness of our CRC framework in maintaining and enhancing consistency with each element.
The removal of reflection and correction related to domain knowledge (w/o DK) significantly reduces Knowledge F1, highlighting the importance of consistency with domain knowledge for effective information utilization. Similarly, without subgoals (w/o SG), the Goal Success Rate drops markedly, demonstrating the importance of aligning responses with subgoals. The absence of user profiles (w/o UP) and dialogue history (w/o DH) negatively impacts all metrics except Distinct, showing the benefits of maintaining consistency with user profiles and dialogue history in enhancing GPDS.
Figure 3: Pairwise evaluation results for TP-LLaMA3 w/ CRC vs. TP-LLaMA3 w/o CRC.
Additionally, regardless of which element is removed, the performance on the Distinct metric remains almost unchanged. This demonstrates that our CRC framework not only improves the model’s ability to effectively guide conversations towards final targets but also maintains the model’s capacity to generate diverse responses.
# 6.2 Consistency Analysis
We conducted a pairwise human evaluation to compare the models with (w/) and without (w/o) CRC, assessing the consistency of the responses generated with each element of the dialogue context. We randomly selected 500 pairs of system responses from the DuRecDial dataset. The pairwise human evaluation results for TP-LLaMA3 are shown in Figure 3. The labels “win”, “tie”, and “lose” are used to indicate that TP-LLaMA3 w/ CRC is more consistent, equally consistent, or less consistent than TP-LLaMA3 w/o CRC, respectively. Appendix F provides the details of human evaluation.
The tie rates for TP-LLaMA3 w/ CRC and w/o CRC decrease across user profile (UP), dialogue history (DH), domain knowledge (DK), and the subgoal (SG). This suggests that the challenge of generating responses consistent with these elements becomes more pronounced.
Notably, TP-LLaMA3 w/ CRC exhibits a higher win rate compared to TP-LLaMA3 w/o CRC across all four elements. This underscores that our CRC framework effectively improves the consistency of the TP-LLaMA3’s responses and the elements of the dialogue context. Appendix F provides the results of human evaluation using TP-BART, TPT5, TP-GPT2 and TP-DialoGPT, which illustrate the same trend and further ensure the effectiveness of our CRC.
Figure 4: Subgoal failure rates on DuRecDial.
# 6.3 SubGoals Failure Analysis
It is essential for a goal-oriented proactive dialogue system to seamlessly steer the conversation towards the ultimate objective by generating the responses that align with each subgoal along the goal-oriented path. Consequently, we assessed whether each subgoal was successfully accomplished. Figure 4 presents the subgoal failure rates (the rate at which the model are unable to achieve the subgoals) on DuRecDial for the models both without and with CRC. It is evident that the models without CRC exhibit a higher rate of current turn goal failures, with a percentage exceeding $20 \%$ . Such failures in subgoals have the potential to diminish the naturalness of the conversation, which may in turn lead to a poor user experience and complicate the achievement of the final objective. In contrast, our CRC framework has been demonstrated to significantly reduce the failure rate of subgoals, thereby facilitating a more natural conversational process. A similar trend is evident on the DuRecDial 2.0, as illustrated in Appendix G.
# 6.4 Analysis of Inconsistency Detection and Reflection Content
We provide an in-depth analysis of the ability of ChatGPT and our reflection model DialoGPT to detect inconsistencies and generate meaningful explanations. To evaluate the quality of reflections generated by both models, we randomly selected 500 samples each from the training and test sets and analyzed their performance in inconsistent type identification and the generation of accurate corrective suggestions, respectively.
The results show that ChatGPT correctly identified $94 \%$ (245/261) of inconsistencies and generated accurate corrective suggestions for $9 7 \%$ (237/245) of inconsistent cases (i.e., generating reasonable reflective content), demonstrating the high reliability of using ChatGPT for annotating reflection data. In contrast, our reflection model correctly identified $94 \%$ (227/242) of inconsistencies and provided accurate suggestions for $90 \%$ (205/227) of the inconsistent cases. These results indicate that the reflection model successfully learned to identify and analyze inconsistencies from the annotations provided by ChatGPT, further validating the effectiveness of our CRC framework.
# 6.5 Analysis of Model Combination
We examine the performance when the reflection model $( R G _ { R } )$ and the correction model $( R G _ { C } )$ differ in architecture, as detailed in table 4 and table 5.
The results in Table 4 show that incorporating any models as a correction model significantly enhances performance compared to a setup without CRC. However, the impact varies across different metrics. For instance, utilizing T5 as a correction model enhances ${ \bf K } { \bf F } _ { 1 }$ relative to DialoGPT; however, it does not demonstrate substantial benefits in other metrics. Conversely, BART demonstrates a marked enhancement in Dist-2 and Succ metrics. It is noteworthy that utilizing a more large model, such as LLaMA3, as the correction model, results in substantial enhancements across all metrics compared to DialoGPT, with the exception of BLEU-2. The BLEU score is calculated as the overlap between generated and reference responses, so increased diversity in the generated responses (as indicated by Dist-2) may have a negative effect on the BLEU-2 score. These findings imply that employing a correction model with a greater number of parameters can yield substantial performance enhancements.
Similarly, Table 5 demonstrates that using any framework model as a reflection model results in a substantial enhancement of performance in comparison to a model devoid of CRC. However, it is important to note that as the parameter size of the reflection model increases, its impact on the same correction model remains consistent.
In summary, the employment of a framework or parameter size for reflection or correction models has been shown to significantly enhance the initial model’s performance. It is notable that a correction model with more parameters improves performance across most metrics. However, regardless of the architecture or parameter size of the reflection model, their contributions to model performance remain quite similar.
Table 4: Performance comparison on DuRecDial when the reflection model are DialoGPT and the correction model uses different model architectures.
Table 5: Performance comparison on DuRecDial when the correction model are DialoGPT and the reflection model uses different model architectures.
Table 6: A case generated by TP-GPT2.
# 6.6 Case Study
We conducted case studies to demonstrate the effectiveness of our CRC framework in enhancing the consistency between generated responses and each element of the dialogue context , as shown in Table 6 where the complete dialogue is presented in Figure 1. Regarding the dialogue history, TPGPT2 emphasizes “of course he knows Lin” which reduces the consistency with the history. In contrast, our CRC improves the consistency by omitting this statement. In terms of the domain knowledge, TP-GPT2 incorrectly identifies the director of the movie as the leading actor. Conversely, our CRC correctly utilizes the domain knowledge (refer to the domain knowledge item in Figure 1). For the subgoal, although TP-GPT2 mentions recommending a movie, it fails to address the topic of Grandpa’s Love, resulting in an ineffective recommendation. In contrast, our CRC successfully recommends the movie Grandpa’s Love. Additional case studies of TP-GPT2 concerning the user profile, as well as LLaMA3, are provided in Appendix H. | Goal-oriented proactive dialogue systems are designed to guide user
conversations seamlessly towards specific objectives by planning a
goal-oriented path. However, previous research has focused predominantly on
optimizing these paths while neglecting the inconsistencies that may arise
between generated responses and dialogue contexts, including user profiles,
dialogue history, domain knowledge, and subgoals. To address this issue, we
introduce a model-agnostic two-stage Consistency Reflection and Correction
(CRC) framework. Specifically, in the consistency reflection stage, the model
is prompted to reflect on the discrepancies between generated responses and
dialogue contexts, identifying inconsistencies and suggesting possible
corrections. In the consistency correction stage, the model generates responses
that are more consistent with the dialogue context based on these reflection
results. We conducted experiments on various model architectures with different
parameter sizes, including encoder-decoder models (BART, T5) and decoder-only
models (GPT-2, DialoGPT, Phi3, Mistral and LLaMA3), and the experimental
results on three datasets demonstrate that our CRC framework significantly
improves the consistency between generated responses and dialogue contexts. | [
"cs.CL"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.